User Manual
IPM
RESOLVE
Version 9.5
July 2021
RESOLVE
IPM - Controller OVERVIEW
by Petroleum Experts Ltd.
Copyright Notice
The copyright in this manual and the associated computer program is the property of Petroleum Experts
Ltd. All rights reserved. Both this manual and the computer program have been provided pursuant to a
Licence Agreement containing restrictions on use.
No part of this manual may be reproduced, transmitted, transcribed, stored in a retrieval system, or translated
into any language, in any form or by any means, electronic, mechanical, magnetic, optical or otherwise, or
disclosed to third parties without prior written consent from Petroleum Experts Ltd., Petex House, 10 Logie
Mill, Edinburgh, EH7 4HG, Scotland, UK.
IPM Suite, GAP, GAP Transient, PROSPER, MBAL, PVTp, REVEAL, RESOLVE, IFM, IVM, Model
Catalogue, OpenServer and MOVE are trademarks of Petroleum Experts Ltd.
We also recognise the registered trademarks of the following corporations that we may make reference to in
this manual: Microsoft, Schlumberger, Honeywell Process, Rock Flow Dynamics, Kongsberg Digital, AVEVA
SimSci, Halliburton, Stone Ridge Technology, CMG, AspenTech, Beicip-Franlab, ConocoPhillips, Emerson,
Shell, ExxonMobil, Saudi Aramco, BP, Chevron, WellDrill, SSI.
The software described in this manual is furnished under a licence agreement. The software may be used or
copied only in accordance with the terms of the agreement. It is against the law to copy the software on any
medium except as specifically allowed in the licence agreement. No part of this documentation may be
reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying,
recording, or information storage and retrieval systems, for any purpose other than the purchaser's personal
use, unless express written consent has been given by Petroleum Experts Limited.
Address: Petex House, 10 Logie Mill, Edinburgh, EH7 4HG, Scotland, UK
email: edinburgh@petex.com
Internet: www.petex.com
Table of Contents
    Integrate Intersect on Windows cluster with PxSub .......... 293
Connecting to Nexus .......... 299
    Nexus driver configuration .......... 299
    Loading and editing Nexus case details .......... 300
    Remote Linux Run .......... 304
        Overview .......... 304
        Installation .......... 305
Connecting to PSim .......... 305
    PSim Overview .......... 305
    PSim case setup guidelines .......... 306
        Introduction .......... 306
        Preparing a PSim simulation deck .......... 306
        Driver configuration .......... 309
        Setup and configuration of mpich .......... 311
    PSim model setup .......... 313
        IPR models .......... 316
        Well controls .......... 318
        Advanced options .......... 319
        Other PSim functions .......... 320
    Further Technical Elements regarding the PSim / GAP connection .......... 320
    Running PSim on a remote computer .......... 323
    Running PSim on a Linux cluster .......... 324
        Overview .......... 324
        Installation .......... 325
        Administering the lxresolve daemon .......... 331
Connecting to Tempest .......... 333
    Overview .......... 333
    Tempest driver configuration .......... 333
    Loading and editing Tempest case details .......... 335
    Remote Tempest run on Linux .......... 340
        Overview .......... 340
        Installation .......... 340
Connecting to tNavigator .......... 343
    tNavigator overview .......... 343
    tNavigator driver configuration .......... 343
    Loading and editing tNavigator case details .......... 345
    Tokens for tNavigator .......... 349
    tNavigator setup on Linux .......... 351
Connecting to Echelon .......... 351
    Echelon overview .......... 351
    Echelon driver configuration .......... 352
    Loading and editing Echelon case details .......... 353
    Remote Linux Run .......... 356
        Overview .......... 356
        Installation .......... 357
Connecting to RN-KIM .......... 358
    RN-KIM overview .......... 358
    RN-KIM driver configuration .......... 358
    Loading and editing RN-KIM case details .......... 359
Connecting to LedaFlow .......... 362
    Overview .......... 362
    LedaFlow driver configuration .......... 362
    Loading and editing LedaFlow case .......... 363
    LedaFlow Visual Workflow variables .......... 367
    Connection Rules .......... 445
    Models and Loops .......... 447
    Composition Tables .......... 448
    Direct Connections between instances .......... 450
6 Data Objects .......... 457
    @Risk .......... 461
        Physical model .......... 463
        @Risk model .......... 463
        Run & Results .......... 466
    BO-PVT Data Object .......... 468
    Case Manager .......... 470
        Variables and Model .......... 471
        Workflows .......... 473
        Cases .......... 473
    Crystal Ball .......... 475
        Physical model .......... 477
        Crystal Ball model .......... 478
        Run & Results .......... 480
    Data Analysis .......... 482
        Non Uniform Resampler .......... 482
        Uniform Resampler .......... 486
        Spectral Analysis .......... 487
            Advanced settings - Period processing .......... 490
            Advanced settings - Single period settings .......... 491
            Advanced settings - Multiple periods settings .......... 492
        Wavelet Analysis .......... 493
        Data Analysis Functions .......... 494
        Spike Filter .......... 498
        Window Filter .......... 500
    Data Store Data Objects .......... 502
        Data Store data object .......... 502
        FlexDataStore data object .......... 504
            SamplePt .......... 506
            SamplePtList .......... 507
        The use of data store objects in a Visual Workflow .......... 507
    Distribution .......... 508
    Dual String Gas Lift .......... 510
        Input .......... 511
        Calculation .......... 512
    EOS/Flash Data Objects .......... 513
        EOS-PVT Data Object .......... 513
        Comp-Allocation Data Object .......... 515
        Comp-Blend Data Object .......... 516
        Blending multiple EOS-PVT objects .......... 519
        Comp-Lump-Delump Data Object .......... 523
    Field data: data object .......... 524
    GAP Data Objects .......... 528
        Choke dP Calculator .......... 529
        Choke Rate Calculator .......... 529
        Choke Size Calculator .......... 529
        IPR BHP Calculator .......... 530
        IPR Rate Calculator .......... 530
        Performance Curve Calculators .......... 530
            PCCalculator .......... 531
    Fluids (PVT) .......... 701
    Reservoir .......... 702
    Well data .......... 703
    Well History .......... 705
    Analysis .......... 707
    Object properties and functions .......... 713
    Phase multiplier for history matching of Tight reservoir models .......... 716
    Error and warning messages when generating PdTd curves .......... 717
    Additional notes .......... 718
Water Chemistry .......... 719
    Introduction .......... 719
    Water Chemistry Data Object .......... 721
    Water Chemistry Mixer .......... 724
    Water Chemistry PVT Mixer .......... 725
    Water chemistry tag data - REVEAL driver .......... 728
    Water Chemistry functions .......... 729
        Water Chemistry data object .......... 729
        Water Chemistry Mixer .......... 732
        Water Chemistry PVT Mixer .......... 733
Well builder data object .......... 740
    General description .......... 740
    Reference location .......... 741
    Deviation survey .......... 742
    Completion designer .......... 746
        Adding Equipment .......... 747
        Viewing the completion .......... 752
        Equipment database .......... 755
        Well object equipment types and correspondence with REVEAL .......... 757
    Further important notes .......... 765
    Well builder functions .......... 773
7 Data formats .......... 790
    JSONData .......... 790
8 Visual Workflows .......... 791
    Elements .......... 792
        Palette .......... 792
        Decision (If...Then) .......... 793
        Switch .......... 795
        Assignment .......... 797
        Operation .......... 798
        Sub-flowsheet .......... 800
        Loop .......... 801
        Form Builder .......... 802
            Form Designer .......... 804
                Button .......... 807
                Label .......... 809
                Text Box .......... 810
                List Box .......... 811
                Combo Box .......... 812
                Check Box .......... 813
                Group Box .......... 814
                Grid .......... 815
                Chart .......... 816
            Elastic band select within Formbuilder .......... 820
Window ed .........................................................................................................................................................
and non-w indow ed modes 925
Allow PxCluster
..........................................................................................................................................................
to see shared Directory 927
Standalone Cluster
..........................................................................................................................................................
Installation 929
Additional setup
..........................................................................................................................................................
for running scenarios 931
PXCluster Job..........................................................................................................................................................
Logging 932
Adding a .........................................................................................................................................................
log message 932
Job logging
.........................................................................................................................................................
monitor 933
Com m and line
..........................................................................................................................................................
entry point to the PxCluster - PxSub.exe 936
12 Menu...................................................................................................................................
Commands 941
  File 941
    "File" Section 941
      RESOLVE Archives 942
    File Preferences 945
  Drivers 946
    "Drivers" Section 946
    Driver Registration 947
    Register data object or library 950
    Visual Workflows registration 952
  Wizards 954
    "Wizards" Section 955
    IT Setup Wizards 956
      ECLCONFIG.EXE 956
      PXCluster Console 956
      PXCluster Job Logging Monitor 956
    Engineering Wizards 957
      Voidage Replacement 957
        Voidage replacement - Method 957
        Voidage replacement - Setup 959
        Voidage replacement - Script 961
        Voidage replacement script - Declarations 962
        Voidage replacement script - PreSolve 963
        Voidage replacement script - PostSolve 966
        Voidage replacement script - StartOfTimestep 967
      Drilling Queue 968
      OpenServer wizards 974
      Perform GAP - Eclipse Validation 976
      Simulation to Decline Curve 978
      GIRO Optimiser Performance 983
      Execute OpenServer Statement 986
  Options 987
    System Options 988
    Lumping / Delumping 991
    Process Independence in Resolve models 992
      Introduction 992
      Setup of Process Independent Models 994
      Further considerations 995
  Edit System 996
    Connection Wizard 998
      Target Connections 1000
    Set System State 1004
  Variables 1005
    Import Application Variables 1006
    Transfer optimisation/imported variables 1008
    User defined variables 1010
    User defined arrays 1011
      The change in the size of an array at runtime 1013
  Events/Actions 1013
    Set Initial State 1014
    Dynamic Event Handling 1015
      Visual Workflows 1015
      Event driven scheduling 1016
        Event Driven Scheduling Overview 1016
        Event actions 1020
        Ranking of event actions 1021
        Event driven scheduling - example 1023
      VB Script 1033
        Scripting: An Introduction 1033
        "Script" Section 1036
        Scripting: IPM5 vs IPM4 1038
  Schedule 1039
    Schedule Setup Workflow 1039
    Timestep Control 1041
      Timestep Control Setup 1041
      Adaptive Timestep 1044
  Optimisation 1046
  Scenarios 1046
    Scenario Manager Overview 1046
    Adding a scenario 1048
    Editing a scenario 1049
    Deleting a scenario 1050
    Performing a sensitivity 1051
      Sensitise on inputs 1051
      Sensitivity results 1052
    Scenario examples 1053
      Basic 1053
      Changing global model data (e.g. model files) 1055
      Changing a script 1057
  Run 1059
    "Run Menu" 1059
    Calculation Order 1061
    Edit Loops 1064
    Running Multiple Scenarios 1067
    Running Scenarios on a Cluster: Overview 1069
  Results 1069
    "Results" Section 1069
    Tables of Results 1070
    Plotting the Results 1072
    Optimisation Results 1077
    Loop Results 1077
    Calculation Window 1078
    Log Window 1079
  Window 1080
  View 1082
13 Appendix 1084
  Further Technical Elements - OpenServer 1084
    Overview 1084
    Top Level Variables 1086
      Top Level Variables: Overview 1086
      Step 3 - Create GAP instances 1191
      Step 4 - Establish connections 1195
      Step 5 - Finalise model setup 1196
      Step 6 - Setup RESOLVE schedule 1200
      Step 7 - Publish variables 1202
      Step 8 - Run the forecast 1204
      Step 9 - Analyse results 1206
    Example 2.1.2: GAP - REVEAL Connection with Event Driven Scheduling 1214
      Overview 1214
      Step 1 - Initialise the model 1216
      Step 2 - Publish variables 1218
      Step 3 - Setup Event Driven Scheduling 1222
      Step 4 - Run the forecast 1229
      Step 5 - Analyse the results 1230
    Example 2.1.3: GAP - REVEAL Connection with Visual Workflow Manager 1233
      Overview 1233
      Step 1 - Initialise the model 1236
      Step 2 - Publish variables 1237
      Step 3 - Setup Event Driven Scheduling 1244
      Step 4 - Run the model 1252
      Step 5 - Analyse the results 1252
      Step 6 - Verify available variables 1253
      Step 7 - Setup the workflow 1255
      Step 8 - Run the model 1268
      Step 9 - Analyse the results 1268
    Example 2.1.4: GAP - REVEAL Connection with Scenario Management 1269
      Overview 1269
      Step 1 - Initialise the model 1271
      Step 2 - Setup scenarios 1273
      Step 3 - Run the scenarios 1293
      Step 4 - Analyse results 1296
    Example 2.1.5: GAP - REVEAL Compositional 1301
      Step 1: Create new file 1305
      Step 2: Add an instance of REVEAL 1307
      Step 3: Add instances of GAP 1309
      Step 4: Make the connections 1312
      Step 5: Import application variables 1313
      Step 6: Setup the feedback loop 1316
      Step 7: Enter the schedule 1319
      Step 8: Run the model 1320
    Example 2.1.6: GAP - REVEAL Compositional Lumping/Delumping 1323
      Step 1: Open the RESOLVE model 1326
      Step 2: Create the lumped composition in PVTp 1327
      Step 3: Import the lumped composition in REVEAL 1334
      Step 4: Import the lumping rule in RESOLVE 1336
      Step 5: Run the model 1337
  Eclipse 1343
    Example 2.2.1: GAP - Eclipse Connection 1343
      Overview 1343
      Step 1 - Initialise Model 1346
      Step 2 - Create Eclipse instance 1347
      Step 3 - Create GAP production instance 1349
      Step 4 - Connect the production wells 1351
      Step 5 - Create GAP water injection instance 1352
Step 6 - Eclipse.........................................................................................................................................
Setup 1354
Step 7 - Setup .........................................................................................................................................
Forecast Schedule 1359
Step 8 - Publish.........................................................................................................................................
Variables 1361
Step 9 - Run the .........................................................................................................................................
Forecast 1364
Step 10 - Analyse .........................................................................................................................................
the Results 1365
Example.........................................................................................................................................................
2.2.2: GAP - Eclipse Compositional 1367
Step 1: Create .........................................................................................................................................
new file 1370
Step 2: Add an.........................................................................................................................................
instance of Eclipse 1372
Step 3: Add instances
.........................................................................................................................................
of GAP 1375
Step 4: Make the .........................................................................................................................................
connections 1378
Step 5: Import application
.........................................................................................................................................
variables 1379
Step 6: Setup the .........................................................................................................................................
feedback loop 1382
Step 7: Enter the .........................................................................................................................................
schedule 1385
Step 8: Run the.........................................................................................................................................
model 1386
Example.........................................................................................................................................................
2.2.3 GAP - Eclipse Compositional Lumping/Delumping 1389
Step 1: Open the .........................................................................................................................................
RESOLVE model 1392
Step 2: Create .........................................................................................................................................
the lumped composition in PVTp 1392
Step 3: Import the .........................................................................................................................................
lumping rule in RESOLVE 1397
Step 4: Run the.........................................................................................................................................
model 1399
Example.........................................................................................................................................................
2.2.4: Mixed Cluster Sensitivity 1404
Overview ......................................................................................................................................... 1404
Step 1: Test the .........................................................................................................................................
RESOLVE model of the field 1407
Step 2: Create .........................................................................................................................................
a new RESOLVE file and add Case Manager 1412
Step 3: Create .........................................................................................................................................
the Case Manager variables 1414
Step 4: Create .........................................................................................................................................
and import the Case Manager w orkflow 1416
Step 5: Setup the .........................................................................................................................................
controlling w orkflow 1418
Step 6: Run the.........................................................................................................................................
cases and analyse the results 1421
tNavigator .......................................................................................................................................... 1427
Example 2.3.1: GAP - tNavigator Connection ......................................................................................................................................... 1427
Overview ......................................................................... 1427
Step 1 - Initialise Model ......................................................................... 1429
Step 2 - Create tNavigator instance ......................................................................... 1430
Step 3 - Create GAP production instance ......................................................................... 1432
Step 4 - Connect the production wells ......................................................................... 1434
Step 5 - Create GAP water injection instance ......................................................................... 1435
Step 6 - tNavigator Setup ......................................................................... 1437
Step 7 - Setup Forecast Schedule ......................................................................... 1440
Step 8 - Publish Variables ......................................................................... 1442
Step 9 - Run the Forecast ......................................................................... 1445
Step 10 - Analyse the Results ......................................................................... 1446
Example 2.3.2: GAP - tNavigator Compositional ......................................................................................................................................... 1448
Step 1: Create new file ......................................................................... 1451
Step 2: Add an instance of tNavigator ......................................................................... 1453
Step 3: Add instances of GAP ......................................................................... 1456
Step 4: Make the connections ......................................................................... 1459
Step 5: Import application variables ......................................................................... 1460
Step 6: Setup the feedback loop ......................................................................... 1463
Step 7: Enter the schedule ......................................................................... 1466
Step 8: Run the model ......................................................................... 1467
Example 2.3.3: GAP - tNavigator Compositional Lumping/Delumping ......................................................................................................................................... 1470
Step 1: Open the RESOLVE model ......................................................................... 1473
Step 2: Create the lumped composition in PVTp ......................................................................... 1473
Step 3: Import the lumping rule in RESOLVE ......................................................................... 1480
Step 4: Run the model ......................................................................... 1482
Example 2.3.4: Mixed Cluster Sensitivity ......................................................................................................................................... 1487
Overview ......................................................................... 1487
Step 1: Test the RESOLVE model of the field ......................................................................... 1490
Step 2: Create a new RESOLVE file and add Case Manager ......................................................................... 1495
Step 3: Create the Case Manager variables ......................................................................... 1498
Step 4: Create and import the Case Manager workflow ......................................................................... 1500
Step 5: Setup the controlling workflow ......................................................................... 1502
Step 6: Run the cases and analyse the results ......................................................................... 1505
Example 2.3.5: Sensitivity analysis of production forecast using tokens ......................................................................................................................................... 1511
Step 1: Start a new RESOLVE file ......................................................................... 1512
Step 2: Setup a tNavigator reservoir model ......................................................................... 1513
Step 3: Enable IPM tokens in the reservoir model ......................................................................... 1514
Step 4: Select and setup the parameters of the reservoir model which will be used for the sensitivity analysis ......................................................................... 1515
Step 5: Add and set up the Sibyl data object in the RESOLVE model ......................................................................... 1519
Step 6: Run the tNavigator model and analyse the results ......................................................................... 1526
IMEX/GEM .......................................................................................................................................... 1528
Example 2.4.1: GAP-IMEX Connection ......................................................................................................................................... 1528
Overview ......................................................................... 1528
Step 1 - Initialise Model ......................................................................... 1530
Step 2 - Create IMEX instance ......................................................................... 1531
Step 3 - Create GAP production instance ......................................................................... 1533
Step 4 - Connect the production wells ......................................................................... 1535
Step 5 - Create GAP water injection instance ......................................................................... 1536
Step 6 - IMEX Setup ......................................................................... 1538
Step 7 - Setup Forecast Schedule ......................................................................... 1541
Step 8 - Publish Variables ......................................................................... 1543
Step 9 - Run the Forecast ......................................................................... 1546
Step 10 - Analyse the Results ......................................................................... 1547
Example 2.4.2: GAP - GEM ......................................................................................................................................... 1549
Step 1: Create new file ......................................................................... 1553
Step 2: Add an instance of GEM ......................................................................... 1554
Step 3: Add instances of GAP ......................................................................... 1558
Step 4: Make the connections ......................................................................... 1562
Step 5: Import application variables ......................................................................... 1563
Step 6: Setup the feedback loop ......................................................................... 1566
Step 7: Enter the schedule ......................................................................... 1570
Step 8: Run the model ......................................................................... 1571
Example 2.4.3: GAP - GEM Lumping/Delumping ......................................................................................................................................... 1575
Step 1: Open the RESOLVE model ......................................................................... 1578
Step 2: Create the lumped composition in PVTp ......................................................................... 1579
Step 3: Import the lumping rule in RESOLVE ......................................................................... 1584
Step 4: Run the model ......................................................................... 1586
NEXUS .......................................................................................................................................... 1590
Example 2.5.1: GAP-Nexus Connection ......................................................................................................................................... 1590
Overview ......................................................................... 1590
Step 1 - Initialise Model ......................................................................... 1593
Step 2 - Create Nexus instance ......................................................................... 1593
Step 3 - Create GAP production instance ......................................................................... 1596
Step 4 - Connect the production wells ......................................................................... 1598
Step 5 - Create GAP water injection instance ......................................................................... 1599
Step 6 - Nexus Setup ......................................................................... 1601
Step 7 - Setup Forecast Schedule ......................................................................... 1604
Step 8 - Publish Variables ......................................................................... 1606
Step 10 - Analysis of the Results ......................................................................... 1709
5 Example Section 3: Connection to Process Modeling Tools ................................................................... 1712
Example 3.1: GAP - UniSim Connection .......................................................................................................................................... 1712
Overview ......................................................................... 1712
Step 1 - Start a new file ......................................................................... 1714
Step 2 - Add an instance of UniSim ......................................................................... 1715
Step 3 - Add an instance of GAP ......................................................................... 1718
Step 4 - Connect GAP and UniSim ......................................................................... 1719
Step 5 - Publish application variables ......................................................................... 1720
Step 6 - Setup the schedule ......................................................................... 1723
Step 7 - Run the forecast ......................................................................... 1724
Step 8 - Analyse the results ......................................................................... 1726
Example 3.2: GAP - Hysys Connection .......................................................................................................................................... 1728
Overview ......................................................................... 1728
Step 1 - Start a new file ......................................................................... 1730
Step 2 - Add an instance of Hysys ......................................................................... 1732
Step 3 - Add an instance of GAP ......................................................................... 1734
Step 4 - Connect GAP and Hysys ......................................................................... 1736
Step 5 - Publish application variables ......................................................................... 1737
Step 6 - Setup the schedule ......................................................................... 1739
Step 7 - Run the forecast ......................................................................... 1740
Step 8 - Analyse the results ......................................................................... 1742
Example 3.3: GAP - ProII Connection .......................................................................................................................................... 1744
Overview ......................................................................... 1744
Step 1 - Start a new file ......................................................................... 1746
Step 2 - Add an instance of ProII ......................................................................... 1748
Step 3 - Add an instance of GAP ......................................................................... 1749
Step 4 - Connect GAP and ProII ......................................................................... 1751
Step 5 - Publish application variables ......................................................................... 1752
Step 6 - Setup the schedule ......................................................................... 1755
Step 7 - Run the forecast ......................................................................... 1756
Step 8 - Analyse the results ......................................................................... 1758
6 Example Section 4: Connection to Excel ................................................................... 1761
Example 4.1: GAP - Excel Connection .......................................................................................................................................... 1761
GAP - EXCEL: Overview ......................................................................... 1761
GAP - EXCEL: Step 1 ......................................................................... 1762
GAP - EXCEL: Step 2 ......................................................................... 1763
GAP - EXCEL: Step 3 ......................................................................... 1765
GAP - EXCEL: Step 4 ......................................................................... 1770
GAP - EXCEL: Step 5 ......................................................................... 1771
GAP - EXCEL: Step 6 ......................................................................... 1773
7 Example Section 5: Advanced RESOLVE Examples ................................................................... 1775
Example 5.1: Black Oil Delumping Example .......................................................................................................................................... 1775
Overview ......................................................................................................................................................... 1775
Step 1 ......................................................................................................................................................... 1776
Step 2 ......................................................................................................................................................... 1777
Step 3 ......................................................................................................................................................... 1779
Step 4 ......................................................................................................................................................... 1782
Step 5 ......................................................................................................................................................... 1782
Step 6 ......................................................................................................................................................... 1784
Step 7 ......................................................................................................................................................... 1785
Step 8 ......................................................................................................................................................... 1791
Step 9 ......................................................................................................................................................... 1792
Step 1 - Start a new RESOLVE file ......................................................................... 1946
Step 2 - Setup tight reservoir ......................................................................... 1947
Step 3 - Define reservoir parameters ......................................................................... 1949
Step 4 - Define well parameters ......................................................................... 1953
Step 5 - Import well history ......................................................................... 1955
Step 6 - Analysis ......................................................................... 1956
Example 6.4: Multi Well Allocation .......................................................................................................................................... 1962
Overview ......................................................................... 1962
Step 1 - Start a new RESOLVE file ......................................................................... 1964
Step 2 - Field data object ......................................................................... 1966
Step 3 - Workflow ......................................................................... 1968
Step 4 - Analysis ......................................................................... 1975
Example 6.5: Well builder .......................................................................................................................................... 1978
Introduction ......................................................................... 1978
Step 1: Producer pre-heater well descriptions ......................................................................... 1981
Step 2: Injector pre-heater well description ......................................................................... 1994
Step 3: Producer well description ......................................................................... 2000
Step 4: Injector well description ......................................................................... 2004
Example 6.6: SAGD .......................................................................................................................................... 2006
Introduction ......................................................................... 2006
Step 1: Add the SAGD data object ......................................................................... 2008
Step 2: Define the SAGD Object ......................................................................... 2010
Step 3: Analysis of results ......................................................................... 2019
Example 6.7: ICD Analysis .......................................................................................................................................... 2021
Introduction: File Locations ......................................................................... 2021
Introduction: Objectives ......................................................................... 2022
Step 1: Create the well and PVT description ......................................................................... 2023
Step 2: Create the reservoir description ......................................................................... 2025
Step 3: Create well scenarios ......................................................................... 2028
Step 4: Run scenarios ......................................................................... 2030
Step 5: Reservoir uncertainty ......................................................................... 2038
Example 6.8: Case Manager .......................................................................................................................................... 2040
Overview ......................................................................... 2040
Step 1 - Start a new RESOLVE file ......................................................................... 2042
Step 2 - Add the CaseManager Data Object ......................................................................... 2043
Step 3 - Add the Ledaflow Data Object ......................................................................... 2048
Step 4 - Add the DataSet ......................................................................... 2050
Step 5 - Add the workflow ......................................................................... 2051
Step 6 - Run the workflow ......................................................................... 2059
Example 6.9: Sensitivity Tool .......................................................................................................................................... 2061
Overview ......................................................................... 2061
Step 1 - Start a new RESOLVE file ......................................................................... 2063
Step 2 - Add the IPM-OS instance ......................................................................... 2063
Step 3 - Add the Sensitivity Tool data object ......................................................................... 2065
Step 4 - Run the model ......................................................................... 2069
Example 6.10: Crystal Ball .......................................................................................................................................... 2072
Overview ......................................................................... 2072
Step 1 - Start a new RESOLVE file ......................................................................... 2076
Step 2 - Add GAP model ......................................................................... 2076
Step 3 - Add Crystal Ball data object ......................................................................... 2078
Step 4 - Setup the 'Physical Model' tab ......................................................................... 2079
Step 5 - Setup the Crystal Ball spreadsheet ......................................................................... 2081
Step 6 - Map the Crystal Ball variables ......................................................................... 2086
Step 7 - Modify the workflow ......................................................................... 2088
XX
XXI RESOLVE
Step 8 - .........................................................................................................................................................
Run Crystal Ball 2099
Example 6.11: @Risk .......... 2103
Overview .......... 2103
Step 1 - Start a new RESOLVE file .......... 2107
Step 2 - Add Gap model .......... 2107
Step 3 - Add @Risk data object .......... 2109
Step 4 - Setup the 'Physical Model' tab .......... 2109
Step 5 - Setup the @Risk spreadsheet .......... 2111
Step 6 - Map the @Risk variables .......... 2116
Step 7 - Modify the workflow .......... 2117
Step 8 - Run @Risk .......... 2128
Example 6.12: Sibyl .......... 2133
Overview .......... 2133
Step 1 - Start a new RESOLVE file .......... 2136
Step 2 - Add Gap model .......... 2137
Step 3 - Add Sibyl data object .......... 2138
Step 4 - Setup the 'Physical Model' tab .......... 2139
Step 5 - Setup the Sibyl model .......... 2141
Step 6 - Map the Sibyl variables .......... 2144
Step 7 - Modify the workflow .......... 2146
Step 8 - Run Sibyl .......... 2157
Example 6.13: Particle Swarm .......... 2162
Overview .......... 2162
Step 1 - Start a new RESOLVE file .......... 2163
Step 2 - Add the REVEAL model .......... 2164
Step 3 - Add the Particle Swarm object .......... 2165
Step 4 - Load the REVEAL template .......... 2165
Step 5 - Define the optimisation variables .......... 2166
Step 6 - Edit the workflow .......... 2168
Step 7 - Test the workflow .......... 2180
Step 8 - Run the model .......... 2182
Example 6.14a: History Matching Tool A .......... 2187
Overview .......... 2187
Step 1 - Start a new RESOLVE file .......... 2189
Step 2 - Add the REVEAL model .......... 2190
Step 3 - Add the History Matching Tool .......... 2191
Step 4 - Import the production history .......... 2192
Step 5 - Define well controls and variable weights .......... 2197
Step 6 - Create a run .......... 2199
Example 6.14b: History Matching Tool B .......... 2202
Overview .......... 2202
Step 1 - Start a new RESOLVE file .......... 2204
Step 2 - Add the REVEAL model .......... 2204
Step 3 - Add the History Matching Tool .......... 2205
Step 4 - Import the production history .......... 2206
Step 5 - Define well controls and variable weights .......... 2209
Step 6 - Create a run .......... 2210
Example 6.15: Dual String Gas Lift .......... 2215
Overview .......... 2215
Step 1 - Start a new RESOLVE file .......... 2217
Step 2 - Add the PROSPER models .......... 2217
Step 3 - Add the Dual String Gas Lift object .......... 2218
Step 4 - Enter the known data .......... 2218
Step 5 - Set the method and calculate .......... 2219
Step 6 - Review the results .......... 2220
Example 6.16: Spectral Analysis .......... 2220
Overview .......... 2220
Step 1 - Start a new RESOLVE file .......... 2222
Step 2 - Generate a source signal .......... 2222
Step 3 - Fast Fourier Transformation of the signal .......... 2240
Step 4 - Calculation results .......... 2251
Example 6.17: Wavelet Analysis .......... 2255
Overview .......... 2255
Step 1 - Start a new RESOLVE file .......... 2257
Step 2 - Generate a source signal .......... 2257
Example 6.18: Neural Networks .......... 2276
Overview .......... 2276
Example 6.19: Python .......... 2277
Overview .......... 2277
Step 1 - Start a new RESOLVE file .......... 2280
Step 2 - Extract and open the *.gar file .......... 2280
Step 3 - Import the GAP model into RESOLVE .......... 2280
Step 4 - Define distribution functions for the input parameters .......... 2282
Step 5 - Create a workflow of the working process .......... 2284
Step 6 - Execute the Workflow .......... 2321
Example Section 7: Global Optimisation .......... 2325
Example 7.1: Introduction to Global Optimisation .......... 2325
Overview .......... 2325
Step 1: Create new file .......... 2328
Step 2: Add Gap model .......... 2329
Step 3: Add Excel instance .......... 2331
Step 4: Import application variables .......... 2336
Step 5: Run the reference case .......... 2339
Step 6: Set up the global optimisation problem .......... 2342
Step 7: Run the optimisation .......... 2351
Example 7.2: GAP-Process Optimisation .......... 2353
Example 7.2.1: GAP - UniSim Optimisation .......... 2353
Overview .......... 2353
Step 1 - Enable optimisation .......... 2359
Step 2 - Setup the optimisation problem .......... 2361
Step 3 - Run the forecast .......... 2368
Step 4 - Analyse the results .......... 2369
Example 7.2.2: GAP - Hysys Optimisation .......... 2371
Overview .......... 2371
Step 1 - Enable optimisation .......... 2377
Step 2 - Setup the optimisation problem .......... 2378
Step 3 - Run the forecast .......... 2385
Step 4 - Analyse the results .......... 2386
Example 7.2.3: GAP - ProII Optimisation .......... 2388
Overview .......... 2388
Step 1 - Enable optimisation .......... 2394
Step 2 - Setup the optimisation problem .......... 2396
Step 3 - Run the forecast .......... 2403
Step 4 - Analyse the results .......... 2404
Example 7.3: Optimiser Control .......... 2406
Example 7.3.1: GAP - UniSim Optimiser Control .......... 2406
Overview .......... 2406
Step 1: Open the file .......... 2410
Step 2 - Define MOVE project .......... 2575
Step 3 - Create ClearData Operation .......... 2577
Step 4 - Create OpenMove Operation .......... 2585
Step 5 - Open Section Analysis .......... 2590
Step 6 - Perform Analysis .......... 2597
Step 7 - Transfer Results .......... 2600
Step 8 - Close MOVE .......... 2603
Step 9 - End Workflow .......... 2608
Step 10 - Test run the Workflow .......... 2610
1 Technical Overview
1.1 Introduction to RESOLVE
RESOLVE is a tool for developing advanced models to solve engineering problems. It is a
platform which brings together external applications, engineering calculation engines and data
management engines. All these elements are flexible; they can be assembled and connected to
create a model which formulates the answer to a particular problem.
The scope of modelling objectives that can be achieved in RESOLVE includes, but is not limited to: full field integration, flow assurance studies, well and pipeline stability analysis, well allocation, global optimisation, and conventional/unconventional reservoir history matching. This large engineering scope is made possible by the flexibility and modularity of the different components of RESOLVE.
1. Application drivers
RESOLVE provides a set of drivers which allow external applications to be included in the
model. These applications include:
Reservoir simulators
Surface network models
Process simulators
Excel
These applications, which model different parts of the field, can therefore be connected together
and exchange data dynamically to implement a full field model from the reservoir to the sales
point. This effectively removes the artificial boundary conditions that have been put in place by
the traditional approach of modelling different parts of the production system independently.
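To make the idea of dynamic data exchange concrete, the sketch below mimics a coupled run in plain Python. Both "models" here are toy functions: the inflow/outflow relations, the coefficients and the depletion rule are all invented for illustration, and are not RESOLVE's solver. In an actual RESOLVE run, the drivers would exchange the equivalent rates and pressures with the real reservoir and network applications at each timestep.

```python
# Toy sketch of explicit coupling between a "reservoir" and a "network"
# model over a short forecast. All equations and numbers are hypothetical.

def reservoir_rate(p_res, pwf, pi=2.0):
    """Toy inflow performance: rate = PI * (reservoir pressure - flowing BHP)."""
    return max(pi * (p_res - pwf), 0.0)

def network_pwf(q, whp=500.0, f=3.0):
    """Toy outflow: flowing BHP the network needs in order to deliver rate q."""
    return whp + f * q

def solve_node(p_res):
    """Balance inflow and outflow at the coupling node by bisection on rate."""
    lo, hi = 0.0, 10000.0
    for _ in range(60):
        mid = (lo + hi) / 2
        # If the reservoir can deliver more than the assumed rate, raise it.
        if reservoir_rate(p_res, network_pwf(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p_res = 3000.0          # initial reservoir pressure (psig), hypothetical
rates = []
for step in range(3):   # three coupled forecast steps
    q = solve_node(p_res)       # applications exchange rate/pressure until balanced
    rates.append(round(q, 2))
    p_res -= 0.01 * q           # toy material balance: pressure declines with offtake

print(rates)  # → [714.29, 712.24, 710.21]
```

The point of the sketch is the loop structure, not the physics: at every timestep the two models negotiate a consistent rate and pressure, so neither needs a fixed boundary condition imposed on it.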
2. Data Objects
The Data Objects are libraries of data and calculations that can be called as part of a RESOLVE
run, often from a Visual Workflow. They can be used standalone or connected to other objects to
form part of a larger model. The different Data Objects have different purposes:
Encapsulate physics calculations e.g. thermodynamic flash, hydrate test, water
chemistry, pressure gradient, choke calculation etc.
Facilitate the analysis of complex systems e.g. tight reservoir, SAGD, ICD design, well
allocation
Facilitate running cases, sensitivities and probabilistic analysis
Data Objects have an open architecture and expose their inputs, outputs and calculation
methods to the rest of the RESOLVE model. This means that during a run, data can be passed
dynamically to Data Objects, calculations can be triggered, and results can be read and used
as part of the field management logic.
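As a purely conceptual illustration (this is not the RESOLVE or OpenServer API; the class, attribute and object names below are invented), a Data Object can be pictured as a unit that exposes named inputs and outputs plus a calculation method, so that a workflow can push data in, trigger the calculation, and read the results back:

```python
# Conceptual sketch only: real Data Objects are configured through the
# RESOLVE interface and accessed via OpenServer, not via this toy class.

class DataObject:
    """A calculation unit exposing inputs, outputs and a calculate() method."""
    def __init__(self):
        self.inputs = {}
        self.outputs = {}

    def calculate(self):
        raise NotImplementedError


class PressureGradientObject(DataObject):
    """Hypothetical stand-in for a physics Data Object (pressure gradient)."""
    def calculate(self):
        # Hydrostatic column: dp = rho * g * h (SI units), purely illustrative
        rho = self.inputs["density_kg_m3"]
        h = self.inputs["depth_m"]
        self.outputs["pressure_drop_Pa"] = rho * 9.81 * h


# A workflow passes data in, triggers the calculation and reads results back:
obj = PressureGradientObject()
obj.inputs["density_kg_m3"] = 800.0
obj.inputs["depth_m"] = 1000.0
obj.calculate()
print(round(obj.outputs["pressure_drop_Pa"]))  # 7848000
```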
3. RESOLVE Engines
RESOLVE includes internal engines that can be used in conjunction with external applications
and Data Objects in the formulation of the problem:
Visual Workflows
A Visual Workflow is a set of instructions in a visual form.
They implement user-defined logic, which may be necessary for various reasons, ranging
from automating repetitive or regular tasks to setting up field management logic to control
a field during a forecast run.
Visual Workflows have access to applications' variables and Data Objects, which can be
used to create this logic.
Lumping/Delumping
Global Optimisation
Scenarios
RESOLVE includes a scenario manager for creating, running and comparing different
scenarios (see also the Sensitivity Tool and the Probabilistic Data Objects).
The scenarios can be run on a cluster (see below).
Clustering
As an illustration of how these components work together, consider the following example. A
gas condensate field is being produced and, under the current operating conditions and
wellhead choke settings, solid wax is reported at the separator. This is confirmed by the GAP
model of the field, which indicates a wax risk within the flowlines of two of the four producing
wells.
There is a constraint on the total gas production, and the objective of the RESOLVE model will
be to guide the GAP model towards an allocation of rates between the wells which does not
exhibit a wax risk. Operationally, this will provide the well head choke settings required for this to
be implemented in the field.
The allocation of rates which does not show a wax risk is calculated by a workflow, which makes
use of the different components of the model. The workflow logic is the following:
Check all the pipelines in GAP. If there is no wax risk, do nothing. If there is a wax risk
anywhere in the system, enter the workflow logic
For each well:
Get the producing composition by performing a target GOR calculation on the original
composition with the GOR from GAP.
Calculate the wax appearance temperature at the WHP
Calculate the minimum rate required for the FWHT to be above the calculated wax
appearance temperature
Apply constraints at the well level to change the rate allocation between the wells:
If a well’s WHT is above the wax appearance temperature, reduce its rate
This will enable wells with WHT below the wax appearance temperature to produce
more and increase their WHT.
After the model is run, the GAP model is still honouring the gas demand at the separator, and
none of the pipelines show a wax risk warning. As a result, we obtain the pressure drop at the
wellhead chokes required to implement this allocation, which can be converted into choke
settings to create field operational guidelines.
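Under hypothetical assumptions, this allocation logic can be sketched as a small self-contained script. The linear FWHT response, the wax appearance temperatures and the rates below are invented numbers; in the real workflow these quantities come from GAP and the thermodynamic Data Objects rather than from a toy model.

```python
# Hedged sketch of the wax-risk allocation logic. All values and the linear
# FWHT model (FWHT = slope * rate) are hypothetical illustrations only.

wells = {
    # rate in MMscf/d; wat = wax appearance temperature at the wellhead (degF)
    "W1": {"rate": 30.0, "slope": 2.0, "wat": 50.0},
    "W2": {"rate": 30.0, "slope": 2.0, "wat": 52.0},
    "W3": {"rate": 20.0, "slope": 1.5, "wat": 36.0},
    "W4": {"rate": 20.0, "slope": 1.5, "wat": 33.0},
}

def fwht(w):
    """Toy flowing wellhead temperature: rises linearly with rate."""
    return w["slope"] * w["rate"]

# Step 1: check for wax risk anywhere in the system (FWHT below WAT).
risky = {n for n, w in wells.items() if fwht(w) < w["wat"]}

if risky:
    # Step 2: raise each risky well to the minimum rate at which FWHT >= WAT.
    deficit = 0.0
    for n in risky:
        w = wells[n]
        needed = w["wat"] / w["slope"] - w["rate"]
        w["rate"] += needed
        deficit += needed

    # Step 3: take the extra gas from the safe wells, in proportion to how far
    # each can be cut back before it would itself hit its WAT, so that the
    # total gas production (the separator demand) is still honoured.
    headrooms = {n: wells[n]["rate"] - wells[n]["wat"] / wells[n]["slope"]
                 for n in wells if n not in risky}
    total_headroom = sum(headrooms.values())
    for n, room in headrooms.items():
        wells[n]["rate"] -= deficit * room / total_headroom

total = sum(w["rate"] for w in wells.values())
no_wax_risk = all(fwht(w) >= w["wat"] for w in wells.values())
print(round(total, 6), no_wax_risk)  # 100.0 True
```

The sketch reproduces the shape of the logic, not its fidelity: the real workflow derives the wax appearance temperature from a compositional flash at each well's conditions and re-solves the GAP network after every rate change.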
The table below illustrates the RESOLVE capabilities and links each to the corresponding
sections of the user guide and the worked examples.
Worked Examples
Capability Description User Guide Section
Section
Main Features
Parallelisation of solver Distributed N/A
algorithm Applications
Use of local and network Setting up a
machine resources Cluster
Hyper-threaded
Support link to PC and Unix OS
machines
Support link to cluster of
machines
No fixed concept of upstream Edit Loops N/A
Allows any and downstream
topography of Possibility of bi-directional (i.e.
connected system looping) configuration between
modules
Possibility of developing user OpenServer Example Section 6:
specific connections Creating a OpenServer
Application as a whole can be RESOLVE optimiser Examples
Open Architecture
OpenServer controlled from an external in Excel
concept controller such as an Excel
macro for instance
Possibility of integrating user-
defined optimisers
Solve Network or Models can be run predictively Options Section Example Section 1 |
Forecast or at a specific snapshot in time Getting Started
Reservoir
Coupling
REVEAL Connections in Example Section 2 |
Schlumberger Eclipse 100 and RESOLVE: Further Connection to
300 Details Reservoir Simulation
CMG IMEX and GEM Connections to Tools
Numerous
IFP PumaFlow External
Reservoir
VIP/Nexus Applications -
Connections
Proprietary Reservoir Specific Elements
Available
Connections (Shell MORES,
Saudi-Aramco POWERS,
ConocoPhilips PSIM, Chevron
CHEARS, Total INTERSECT)
Surface Network
Coupling
Non-Linear Optimisation Connection to GAP Example Section 1 |
Connection to Material Balance - GAP driver details Getting Started
Take full advantage
(MBAL) Example Section 5 |
of Petroleum
Production and Injection Advanced RESOLVE
Experts GAP
systems in a SINGLE license Examples
software
capabilities Production and Gas Lift
Injection systems in a SINGLE
license
IPM-OS
(OPENSERVER)
coupling
Take full advantage Link to any of the IPM tools Connection to IPM-
of Petroleum Possibility to input, perform OS - IPM-OS driver
Experts IPM external calculations with any details
(PROSPER, GAP, IPM model and feedback the
MBAL, PVTP, results to RESOLVE or any
RESOLVE, other application connected
REVEAL) software
capabilities
Thermodynamics
and PVT
PVT information passed from PVT within Example Section 3 |
Thermodynamic one model to another RESOLVE: Connection to
consistency Black-Oil models can be mixed Importance of PVT Process Modelling
between with Fully Compositional Models Consistency Tools
applications Connection Details:
Composition Tables
Enables to pass from a "simple" Lumping / Example Section 2:
Lumping / compositional fluid description Delumping Options Connection to
Delumping at reservoir level to a "detailed" reservoir simulation
Facilities compositional fluid description tools
July, 2021
RESOLVE Manual
Technical Overview 12
Event / Well Management

Conditional Scheduling:
- Event-driven scheduling options available
- Possibility of publishing and monitoring any variable from any application to perform IF ... THEN ... ACTION directives
- Possibility of ranking the actions
- Possibility of re-solving the system after the action has been performed
See: Event Management
Examples: Example Section 2 | Connection to Reservoir Simulation Tools | Example 2.3: GAP - REVEAL connection with event driven scheduling

Visual Workflows:
- Seamless definition of workflows from draft plans to implementation in RESOLVE
See: Visual Workflows
Examples: Example Section 2 | Connection to Reservoir Simulation Tools | Example 2.4: GAP - REVEAL connection with visual workflow manager

Data Objects:
- Exposing some functionality or calculation logic which can be used as part of a wider workflow
See: Data Objects
Examples: Example Section 6: Data Objects

Scenario Management:
- Possibility of setting up different scenarios and running them automatically, sequentially or in parallel through the use of a cluster
- Results of each scenario are displayed in the same location, enabling easy comparisons to be performed
See: Scenario Manager
Examples: Example Section 2 | Connection to Reservoir Simulation Tools | Example 2.5: GAP - REVEAL connection with Scenario Management

Optimisation

Two Levels of Optimisation:
- Non-Linear Optimisation in GAP
- Successive linear optimisation in RESOLVE
See: Optimisation Section - Setting up the RESOLVE optimisation
Examples: Example Section 7: Global Optimisation

Distribution of Optimisation Problems:
- Optimisation problems can be distributed over ALL applications in an integrated model: objective function, constraints and controls can be defined at any level in the model
See: Optimisation Section - Setting up the RESOLVE optimisation
Examples: Example Section 7: Global Optimisation

Version Control

Tight Integration with Petroleum Experts ModelCatalogue:
- RESOLVE models, as well as all associated models, can be checked IN and OUT
See: ModelCatalogue Manual

Excel Link

Dynamic Link with Excel:
- Calculation
- Reporting
- Compositional stream splitting / manipulation
- Economics calculation
See: Connection to Excel - Excel driver details
Examples: Example Section 4: Connection to Excel

External or Internal Link:
- Internal in RESOLVE
- External to RESOLVE using the OpenServer architecture: the RESOLVE model can be controlled externally using a VBA macro
See: OpenServer
Examples: Example Section 5 | Advanced RESOLVE Examples | OpenServer Examples

Process Model Links

Numerous Links Available to Process Models:
- Hysys
- UniSim Design
See: Connection to Hysys - Hysys driver details; Connection to UniSim Design - UniSim Design driver details
Examples: Example Section 3 | Connection to Process Modelling Tools

GUI

Comprehensive and Dynamic Reporting:
- Results appear dynamically during the run
- Direct access to Debug Logging, enabling analysis of every single process happening during the run, whatever software is considered
- Direct export to different file types: text, Excel spreadsheet
See: Analysing and Reporting the Results
Examples: Example Section 1 | Getting Started
First and foremost, every User of the software has access to the Online Help and User Guide
Manuals which are included with the installation of the software.
For those clients who have a maintenance contract with Petroleum Experts, access to our
Web User Area and Technical Support team is also available.
More information on how to access these different elements of help and support is provided
below.
1.3.1 Online Help
The online Help of each program can be found via the Help | Contents menu on the top tool
bar:
To use this facility, the help file must be located in the same directory as the program. Please
note that due to security settings in Windows, if the help file is located on a shared drive, then
the help may have issues loading.
Help through the interface: Almost every screen contains a Help button which, if selected, takes the user directly to the appropriate help section for the given screen.

Help through the menu: From the menu bar, choose Help | Index (or press ALT, H, I) and select the desired subject from the list of help topics provided.

Getting help using the mouse and keyboard: To get help with the mouse, press SHIFT+F1; the mouse pointer changes to a question mark. Then choose the menu command or option to view. Alternatively, click the menu command or option to view and, holding the mouse button down, press F1. To get help using the keyboard, press the ALT key followed by the first letter of the menu name or option, then press F1.

Minimising Help: To close the Help window without exiting the help facility, click the minimise button in the upper-right corner of the help window. If use of the keyboard is preferred, press ALT, Spacebar, N.
The help facility uses function buttons and jump terms to move around the 'Help' system. The
function buttons are found at the top of the window and are useful in finding general information
about Windows help. If a feature is not available, the button associated with that function is
dimmed.
'Jump' terms are words marked with a solid underline that appear in green if a colour VDU is in
use. Clicking on a jump term takes the user directly to the topic associated with the underlined
word(s).
Using the Help Index: This option is useful for viewing specific sections listed in the help menu. Go to the topic of interest and select the necessary subject item.

Using the Help Search feature: This facility is useful for finding specific information about particular topics. For example, for 'Production Constraints', type in the keyword 'constraints' to search the system for the phrase, or select the corresponding topic from the list displayed.
The User Guide Manuals are installed in the IPM installation directory (normally ...\Program
Files\Petroleum Experts\IPM XX\pdf\) or can be accessed via the Windows START menu:
In addition to information on the different models, screens and inputs which are required for
each tool, the User Manuals contain numerous step-by-step worked examples for each of the
tools. Completed models for each of these examples (and initial files where required) can be
found in the same IPM installation folder structure as the User Guides (normally ...\Program
Files\Petroleum Experts\IPM XX\Samples\). Please note that due to the size of these files, they
are not installed as part of the main installation of the software and are installed via a separate
installer. If the sample files are not present on a machine, please contact the IT department or
the user who downloaded the IPM installation files to obtain access to the worked example
installer.
First and foremost, this includes access to our dedicated Technical Support teams based out of
Edinburgh, Scotland and Houston, USA. The role of the Technical Support team is to ensure
that the tools work as documented and also, where appropriate, to demonstrate how options and
settings within the software can be used to achieve certain engineering objectives.
To ensure fairness to all of our clients, support queries are answered on a first come, first
served basis with the target of 50% of queries being answered within 24 hours and 95% of
emails being answered within 48 hours. In order to facilitate this, the primary route to obtain
Technical Support is via email. While it is possible to phone the Technical Support team, in
cases where models are required in order to understand the query, an email with the model will
be required. Where models are too large to send via email, the Technical Support team can
send out a link which can be used to upload the model. Where required, phone calls or video
meetings can be set up to discuss queries after the initial query has been received.
As well as access to Technical Support, maintained clients can also benefit from a number of
other services.
1.3.3.1 Software Upgrades
Petroleum Experts are constantly improving the software and also researching new ways of
obtaining value from the software. These improvements and advancements are provided to
maintained clients via new builds and versions of the software.
Petroleum Experts aims to release a new version of IPM every 12-18 months, and these new
versions are denoted with a change in the version number, e.g. IPM 11.0 to IPM 12.0. Within
each version of the software, a number of different builds will be released to fix known issues
and add certain advancements.
It is important to note that all builds and versions of IPM are backwards compatible, which
means that models built in older versions/builds of the tools will always be able to open and run
in newer versions. For example, a model built in IPM v9 can be opened in IPM v10. Models
built or saved in newer versions, however, cannot be opened in older versions of the software.
For example, a model saved in IPM v12 cannot then be opened in IPM v11 or earlier. Different
builds of the same IPM version are both backwards and forwards compatible, which means that
a model saved in IPM v12 build 100 could be opened in previous builds (build 90, etc.) and also
future builds (build 101, etc.).
To find out the version of the tools which is currently being used, the User can browse to Help |
About... in any of the programs. The version and build is then shown on the screen:
To find out the current commercial version of the software, please either contact the Technical
Support team or log onto the Web User Area.
1.3.3.2 Annual User Group Meeting
Every year, Petroleum Experts hosts a meeting of our Users in Edinburgh, Scotland. This User
Group Meeting is an opportunity for Petroleum Experts to present the new functionality and
also for clients to present their work to each other to gain a better understanding of how others
are creating value. The User Group Meetings are also used as an opportunity to receive
development requests from clients and discuss each one with all attendees. Based upon the
feedback of the user community, each request will be given a priority and then added to the
development plan of the next version of the software as appropriate.
Invitations to the meeting are sent out to our main contacts in all maintained clients, and each
company then decides which representative or team to send to the meeting.
As the number of attendees from each client is limited, all presentations given by Petroleum
Experts (and by all clients who give permission) are uploaded to our Web User Area so that all
clients can see the new developments that are coming, as well as the presented case studies.
A database of the presentations from all of the User Group Meetings through time is also
available and is a fantastic source of information on the advancements and developments
which have been made to the software over time.
1.3.3.3 Web User Area
Maintained clients also have access to the Petroleum Experts Web User Area which can be
accessed through the Help | Web User Area menu of any product:
In addition to access to the Web User Area, clients can also directly contact the Technical
Support team via the Help | Technical Support menu. This part of the Web User Area not only
allows support queries to be registered with the support team, but also stores all previous
queries which each user has sent so that they can be easily accessed and referenced in the
future.
When the Web User Area is accessed, different information can be seen:
Drivers
o RN-KIM new reservoir simulator driver
o Echelon new reservoir simulator driver
o Intersect driver is updated - allows use of EclRun
New Drivers
o MOVE geological modelling package
MOVE driver
o Suite of visual workflow functions developed
Enhancements to Existing Applications
o Tokens for reservoir simulators
o A new example for tNavigator on the use of tokens has been introduced
Visual Workflow Enhancements
o A workflow exception point has been introduced
o Form Builder Enhancements
PxCluster Enhancements
o Allow PXCluster to see shared directory through network drive mapping
o Command line entry point to the PXCluster through PxSub.exe and PxJob.exe
PBS scheduler with Nexus on Linux
Interface Enhancements
o New ‘home’ screen
o Redevelopment of the system-view tab on main screen
New Data Objects
o Probabilistic Modelling – ‘Sybil’ and 3rd Party
o Optimisation Engines – NMSimplex
o GIRO – Integer Based Optimisation
o History Matching
o GAP – Transfer IPR
o GAP – MatchIPRData
o Data Analysis – WaveletAnalysis
o Phase Envelope
o DualStringGasLift
Enhancements to Existing Data Objects
o ESP Fluid Temperature added as match parameter for MWA
o Choke Model selection in Choke RDOs
o PSwarm uses a Latin hypercube approach to enforce a spread of the initial population
o Unit support added to FlexDataStores
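The Latin hypercube initialisation mentioned for PSwarm can be sketched in a few lines. This is a generic illustration of the sampling idea only, not PSwarm's actual implementation; the population size and variable bounds below are invented for the example.

```python
import random

def latin_hypercube(n_points, bounds, seed=42):
    """Draw an initial population with exactly one point in each of
    n_points equal-width strata per dimension, which spreads the
    starting points over the whole search space."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_points)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_points))
        rng.shuffle(strata)              # decouple strata across dimensions
        width = (hi - lo) / n_points
        for i in range(n_points):
            # one uniform draw inside stratum strata[i] of dimension d
            samples[i][d] = lo + (strata[i] + rng.random()) * width
    return samples

# e.g. 10 particles over two hypothetical control variables
population = latin_hypercube(10, [(0.0, 100.0), (-1.0, 1.0)])
```

Compared with purely uniform sampling, each of the ten equal-width strata of every variable contains exactly one particle, so no region of the search space is left empty at the start.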
Visual Workflow Enhancements
o If…ElseIf element replaces If..Else block
o Enhancements to the Form Builder
o Profiling tool
o Difference tool
o Improved ‘watch’ windows
o Integration of unit system into workflows
o Operation and property browsers
o Variables – Division of variables by category and addition of comments
o Inline Functions
o On-screen Annotations
o Improved handling of DateAndTime workflow methods
o Workflows can dynamically add objects to an RDO system object
o Workflow method added to read/write a whole text file in one call
o Run can now be stopped from a Visual Workflow
Ability to control RESOLVE time-stepping from a workflow
All RESOLVE file types (*.rsl and *.rsa) now selectable in file browser
Numerical simulators compatible with Windows based clusters
When publishing variables, Add To Plot is set on by default, and there is a new Add To
Workflow option
Import/Export on Connection Wizard
Annotations can be added to RESOLVE’s main screen
Run a workflow by right clicking on it
Development of IFM dump facility
o Allows direct transfer of workflows (including data objects and variables) from DOF
to RESOLVE and vice versa
Built-in Visual Workflows
o VLP Generation
o PVT Transformation
Wax
Hydrate
Salt
Water Composition
Water Saturation
LNG Plant
Path to Surface: added chillers, LNG, LPG and Condensate export lines
Water Chemistry
Water Chemistry
Water Chemistry PVT mixer
Water Chemistry mixer
PROSPER Calculators
Corrosion
Erosion
Slug Catcher
Optimisation
Particle Swarm: stochastic optimiser
Neural Network
Neural Network
Data Normaliser
DataStore
FlexDataStore: DataStore with variable types.
GAP driver
New connection type to pass data to the Prosper Calculator Data Object
Supports Parallel mode
REVEAL driver
New connection type to pass data to the Water Chemistry Data Objects
Optimisation
New interface for two-level optimisation containing the global optimum
Optimiser parameters available from a workflow
DataSet DataObject
Visual workflows
Interface
Function description for all operations
Operation browser
New element
Iterator: cycles through all Data Objects of a given type
Functionality
Copy/paste of individual operations between Operation elements
Ability to disable individual operations within an Operation element
User interrupt of workflows (e.g. infinite loop)
Levenberg-Marquardt exposed
Setting and getting arrays from DataStores and DataSets
Data manipulation using DataAnalysis functions
When an operation requires an Enumeration as an argument, the appropriate
Enumeration name appears automatically
New Drivers
Ledaflow dynamic flow modelling package
Nexus reservoir simulator (Halliburton) (to be released with compatible simulator version)
Tempest reservoir simulator (Roxar)
tNavigator reservoir simulator (RFD)
Pro/II process simulator (Schneider)
ICD Analysis
ICD design and sensitivity studies
Case Management
Case Manager
Sensitivity Tool
Crystal Ball
@Risk
GAP calculators:
Choke dP, rate, and sizing
IPR (P from Q, Q from P)
Performance curves
TPDs
VLP/IPR intersections
PROSPER objects:
Data repository (file or online)
Gradient calculator
Visual workflows
Interface
Annotations with variable substitution
Improved copy and paste
Sub-flowsheet navigator
Sub-flowsheet visual popups
Auto-generation of sub-flowsheets
Logging in watch window and redirection to file
Find and replace
New element
Subroutines: highly portable subworkflow
Portability
Abstraction (internal and external variables)
External DLL dependencies included in VWF file
Workflow registration
External module registration
Other elements
Arrays of data objects are allowed
PxCluster. Fast creation of local clusters with minimal configuration for multicore standalone
computers.
New drivers:
MBAL - Petroleum Experts MBAL IPM tool
Visual Workflow - Petroleum Experts Visual Workflow
PumaFlow - IFP reservoir simulator
VIP - Halliburton reservoir simulator
Visual Workflows for implementation of field management rules / event driven scheduling
RESOLVE Data Objects. These are a new species of object within RESOLVE which coexist with
application driver objects. In short, they expose some functionality or calculation logic which
can be used as part of a wider workflow. An example would be the ability to call standalone
(black oil or EOS) PVT calculations.
PxCluster developments:
Improved scalability with number of machines
New centralised installation and monitoring console
New wizard to take the results of a full coupled simulation run and turn it into a decline curve
model for fast evaluation and scoping
Extension of integer optimisation to Excel variables. As Excel variables can be mapped onto
variables in other applications, this extends integer optimisation (through GIRO) to all
applications which expose integer variables (e.g. plant models in Hysys or UnisimDesign).
Interface developments:
Connection popup windows. A quick graph, or pie chart, or raw data, can be popped up
when the user holds the mouse over a connection icon
Tools to align icons along a horizontal or vertical line, and space icons equally
A snap to grid option for all icons in the RESOLVE interface
Ability to highlight those modules involved in a global optimisation
Ability to highlight those modules that are part of a loop
Tabbed window displays
Detachable toolbars which can be hidden if required
Detachable plots, useful with dual monitors
Rework of the menu to group functionality more logically (to make it less forecasting
oriented)
Graphing enhancements - easily configurable axis titles and ranges, nodes lists now
sorted alphabetically
IMEX/GEM developments:
Closed wells shut with or without crossflow, depending on context and/or user
requirements.
(IMEX only) Forced synchronisation of forecast timesteps with date card in simulator deck
(for convenient reporting).
The links to the CMG simulators and PSim on Linux are now 'cluster enabled'. LSF (or similar)
can be used to queue and load balance when these jobs are launched from RESOLVE.
An 'IPM-OS' driver has been added for generic interactions with the IPM tools (i.e. getting/
setting data or performing commands). Any IPM model can now be included in a RESOLVE
workflow, as required.
A new wizard has been added to take advantage of the above IPM-OS functionality. This
wizard will be used to perform generic batch OpenServer tasks. The functionality currently
extends to the batch generation of lift curves from a GAP model. The generation can be done
sequentially or over the nodes of a cluster.
Significant performance improvements have been made in the data transfer speed for large
models, and the generation of GAP variable lists.
Node lists in the reporting and plotting are now sorted alphabetically.
Ability to copy variables between 'optimisation' sets and general 'exported' sets (currently
GAP only).
Optimisation states can be saved. This means that the system state that is evaluated at a
particular iteration (or the optimal iteration) can be stored to be re-implemented at a later
time. The state can be restored as an action in the event driven scheduling, or from the Visual
Basic script.
REVEAL Link
IPR models and control modes can now be set per well, rather than globally.
IMEX Link
Well layer data can now be accessed from the RESOLVE script to automate
workovers/recompletions.
Improved advanced scheduling screen to indicate immediately which sections have data in
them.
Improved archiving screen - multiple baggage files can now be added simultaneously from a
multiple-selection file selector.
Added the ability to export cumulative data (node or system level) from GAP into RESOLVE
(for plotting, scheduling, etc.).
Support of gas lift networks in GAP. The source of gas lift can be an external process or other
model (e.g. Excel). RESOLVE will automatically iterate across production and gas lift injection
systems to ensure that the casing pressure to inject the gas can be achieved.
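Conceptually, the automatic iteration across the production and gas lift injection systems is a fixed-point loop on the lift-gas rate: the injection network must deliver the casing pressure the wells require. The sketch below is purely schematic, with an invented linear supply curve and made-up numbers; it is not RESOLVE's actual solver.

```python
def solve_gas_lift(required_casing_pressure, supply_pressure, tol=0.01, max_iter=50):
    """Schematic fixed-point loop: scale the lift-gas rate until the
    injection system delivers the casing pressure the wells require."""
    rate = 1.0  # arbitrary starting guess for the lift-gas rate
    for _ in range(max_iter):
        delivered = supply_pressure(rate)
        if abs(delivered - required_casing_pressure) < tol:
            return rate, delivered
        # proportional update on the pressure mismatch
        rate *= required_casing_pressure / delivered
    raise RuntimeError("gas lift iteration did not converge")

# toy supply curve: delivered casing pressure rises with injection rate
rate, pressure = solve_gas_lift(1500.0, lambda q: 800.0 + 350.0 * q)
```

In practice the supply side could be any connected model (an external process, Excel, etc.), which is why the loop only needs the delivered pressure as a function of rate.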
Addition of an Excel optimisation plug-in. This allows users to code (i.e. using simple Excel
VBA) their own optimisation routines to solve RESOLVE optimisation problems. A sample
Excel spreadsheet is also supplied.
Addition of an interface to automate WAG processes (currently REVEAL and PSim only).
Improvement to the logic implemented when a RESOLVE archive is opened: after the archive
is extracted it will open up the master file under the assumption that all the client files are
located in the directory to which the files have just been extracted.
The option has been implemented to cascade variables which are passed on to an Excel module.
REVEAL Link
Integration with WAG tool (see above).
Integration with compositional model.
Eclipse Link
Direct (one way) connections from Eclipse wells. In this case, Eclipse is allowed to
control the well(s) and sends only the operating data of the well to the downstream
module. Along with the compositional and black oil delumping capability, this means
that direct connections between Eclipse and Hysys are now possible.
SCHEDULE section data can now be sent directly from a RESOLVE script or external
macro to Eclipse.
Excel Link
Macros called from the Excel module can now be run at different points in the
RESOLVE timeline: Start of Run, End of Run, PreSolve, PostSolve, Start of Timestep,
End of Timestep.
The Excel spreadsheet can be hidden during the run to prevent accidental interaction
with the application while under the control of RESOLVE.
Support for parallel Eclipse under the LSF version of the link to Linux; LSF is allowed to
choose the running nodes for the Eclipse job.
IPR pre-calculation (for the corrected IPR model) in IMEX/GEM improved so all tests are
performed simultaneously.
Added a flag to allow events to be ignored (DATE and TSTEP keywords) in Eclipse.
Added facility to reload selected client modules at the start of the run of each scenario under
the scenario management. This affects sequential (rather than clustered) runs only. It may be
used if (for example) a model file is to be changed as part of the scenario.
Improved the reporting and handling of failures in the advanced ("scaling") IPR calculation of
the Eclipse link driver.
Added ability to retrieve the currently running scenario index from the internal script
Properties available from correlations under stream property view can now be reported
or exported into RESOLVE. "Standard" and "Gas" correlations are added by default;
others can be added as required.
[Bug Fix] Limit on number of GAP variables which can be saved in a RESOLVE model has
been removed
Addition of Petroleum Experts drivers for IMEX and GEM reservoir simulators
[Bug Fix] Problem in E300 of passing data for large numbers of components is fixed
Improvements to the interface that allows GAP OpenServer variables to be imported into
RESOLVE
The input separator pressures can now be brought into the screen automatically
The difference between output and input variables in this screen has been clarified
Implementation of "targets"
A target value in one application variable can now be met by adjustment of a single
variable in another application, without recourse to setting up an optimisation problem.
An example would be the adjustment of a separator pressure in GAP to meet an export
line pressure target in the process module
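Meeting a target of this kind is, in effect, one-dimensional root finding on the controlled variable. The sketch below illustrates the idea with a secant iteration; the linear pressure response and all numbers are invented for illustration, and this is not necessarily the algorithm RESOLVE uses.

```python
def meet_target(evaluate, target, x0, x1, tol=1e-6, max_iter=50):
    """Secant iteration: adjust a single input x until evaluate(x) hits
    the target, e.g. vary a (hypothetical) separator pressure until the
    export-line pressure in the process model matches its target."""
    f0, f1 = evaluate(x0) - target, evaluate(x1) - target
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        if f1 == f0:
            raise RuntimeError("flat response: cannot update the variable")
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, evaluate(x1) - target
    raise RuntimeError("target not met within the iteration limit")

# toy response: export-line pressure as a linear function of separator pressure
separator_pressure = meet_target(lambda p: 0.8 * p - 50.0, target=250.0,
                                 x0=300.0, x1=400.0)
```

Because only one variable is adjusted against one target, no objective function or constraint set needs to be defined, which is what distinguishes targets from a full optimisation problem.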
Added more variables to Hysys and UniSim Design output lists. These are the variables
which are normally accessed from the property correlations in Hysys / UniSim Design, and
which do not appear automatically in the programming interfaces (type libraries) of these
products.
For example, Wobbe Index (Gas property) and True Vapour Pressure (Standard
property) can now be queried
Additional property correlations can be added by the user as required
RESOLVE
Use of clusters
Integration with LSF to distribute multiple runs (scenarios) over the nodes of a cluster
Open interface for integration with other clustering tools
Automatic configuration of OpenServer for remote launching and running of all IPM
tools
OpenServer improvements
A single, unified OpenServer interface can be used to access the data from all client
modules, e.g. GAP, Hysys, UniSim Design
Optimiser enhancements
Various implementations of 3rd party optimisation packages (e.g. AIMMS)
Not released but available on request for testing
Regression can be performed on IPR data passed from reservoir simulation models
Can improve run stability in cases where simulator data is suspect
Additional IPR models for improved drainage region calculations (run stability)
Direct calculation of drainage region
Build-up calculation
Scaling (proprietary model)
GAP / REVEAL
2 GHz processor
2 GB RAM
GAP / REVEAL
Standard licences for each of the applications to be linked. One licence is required for each
instance of the application in the RESOLVE model.
Eclipse
A standard Eclipse licence which includes any options that are required to run the model in
question. One licence is required for each Eclipse model in the RESOLVE model.
An OpenEclipse licence. One licence is required for each Eclipse model in the RESOLVE model.
When linked into RESOLVE, Eclipse is supported on any Win32 platform as recommended by SIS.
When the Win32 version is linked, Intel MPI is required. This comes with Eclipse on the distribution
CD.
Eclipse is also supported on Linux platforms with additional software supplied by Petroleum
Experts. The Petroleum Experts software must run on a rhel4 system; Eclipse will run on any
Linux platform supported by SIS (except for caveat in note vi).
Licences of Platform (formerly Scali) MPI will be required to control Eclipse on Linux. These
licences are provided by Platform ‘per node’.
LSF licences will be required if LSF is to be used to distribute jobs about the Linux cluster. If LSF is
to be used, then (currently) Eclipse must also run on rhel4 systems.
IMEX / GEM
Each IMEX model in RESOLVE will require a corresponding license of IMEX for each RESOLVE
run performed. For example, if the RESOLVE file contains two IMEX reservoirs, then two IMEX
licences will be checked out when running the RESOLVE model.
Excel
No special options are required for this.
Other software
These are generally proprietary and so advice should be sought separately if these are required.
For a stand-alone licence, the security key (Bitlock) must be attached to the USB port of the PC.
For a network installation, the security key (HARDLOCK) can be attached to any PC communicating
with the network.
The installation procedure for a network HARDLOCK can be found in the manual provided with the
purchase of a HARDLOCK licence.
2 User Guide
2.1 What is in this guide?
This RESOLVE user guide includes a description of all the different options available in
RESOLVE.
The flowchart below outlines the basic procedure required to setup a RESOLVE model to carry
out full field modelling and optimisation by dynamically connecting several engineering
applications.
The "Further Technical Elements" section focuses on providing information on specific technical
aspects such as clustering, running applications distributed over a network, etc.
The organisation of the manual adheres to the RESOLVE processing and main menu logic and
is detailed below.
Introduction
Getting Started with RESOLVE
Menu commands
Connecting to external applications
Data Objects
Appendix
Examples Guide
The setup and usage of RESOLVE models is illustrated on case examples in the "Worked
Examples" section.
From the programs themselves, the technical support web area can be accessed under
Help | Technical Support.
When contacting Petroleum Experts technical support team, the user should include the
following details in the request:
Information on the overall objectives and a detailed description of the question or issue
experienced.
A copy of the RESOLVE model, including a copy of all the models required to run this
particular RESOLVE model.
If this is too large to go through email, please do not hesitate to let us know and we will
provide a transfer link.
2.3 Introduction
This section will include recommendations regarding the different ways the User Guide can be
used, as well as describing the main symbols, conventions and definitions used.
A general introduction to the RESOLVE concept and a detailed description of its capabilities
can be found in the "Technical Overview" section.
2.3.1 How to Use This Guide?
Depending on the user's needs and the amount of time the user wishes to spend becoming
familiar with the program, the User Guide can be used in the following ways:
Client Program / Connected Application: Both terms are used to define an application (provided either by Petroleum Experts or by another company) that can be used within a RESOLVE model.

Data Object: An object which can be used within a RESOLVE model. The different Data Objects have different purposes:
- Encapsulate areas of functionality of the IPM tools, e.g. PVTp, GAP or PROSPER calculations. These calculations can be called dynamically during the run.
- Facilitate the analysis of complex systems, e.g. Tight Reservoirs, SAGD, ICD Design.
- Facilitate running sensitivities and probabilistic analysis, e.g. Case Manager, Sensitivity Tool, Crystal Ball, @Risk.

Data Provider / Source: This term is used to define a part of a model loaded in RESOLVE that can be used to provide data to another model. For instance, if one considers a link between a surface network model and a process model, the separator node of the surface network model will be a data provider: it will provide the pressure, temperature, fluid flow rates and compositions that are passed to the process model.

Data Receiver / Sink: This term is used to define a part of a model loaded in RESOLVE that can be used to receive data from another model. For instance, if one considers a link between a surface network model and a process model, the inlet stream of the process model will be a data receiver: it will receive the pressure, temperature, fluid flow rates and compositions that are passed by the surface network model.

Published Variable: This term is used to define a variable that has been imported from a data provider or receiver directly into the RESOLVE model, enabling the variable's values to be monitored directly in RESOLVE. For instance, if one considers a surface network model, it will be possible to directly report the oil production rate at the separator in RESOLVE: to do so, the oil production rate will have to be published within RESOLVE before the start of the calculation.
This section describes, in a general sense, the steps that are required to setup a RESOLVE
model.
The "Getting Started" and "Worked Examples" sections give more specific information on how to
build coupled systems.
The first step is to create an "instance" of one of the models that the user wishes to connect
within RESOLVE.
To do so, the following procedure can be used:
Select Edit System | Add Client Program | <program name> and click in the
graphical view section.
This will create an icon representing the application model.
When a case of a client application is loaded, RESOLVE will create a slave instance of that
application and will load the model (which may be a single file) into the application.
RESOLVE will then query the case for the inputs and outputs of the system, referred to as sources and sinks.
In the case of GAP, for example, the sinks in the system are production wells and injection
manifolds, whereas the sources are injection wells and separators.
In simulators, wells can be sources or sinks depending on whether they are production or
injection wells.
The sources and sinks of the model will be displayed as icons underneath the main application icon in the RESOLVE graphical view. These icons will be given the same names as in the application model (i.e. icons corresponding to simulation wells will be labelled with the well name).
As far as RESOLVE is concerned, the application is a black-box calculator that has inputs and
outputs according to the sources and sinks it exposes.
Later, it will be seen how the functionality of the system can be enhanced by exposing
('publishing') variables from the client applications for use by RESOLVE. These variables can be
used for optimisation, event driven scheduling, reporting, and so on.
For details as to what sources and sinks are exposed by an application, see the section of the
user guide dedicated to the individual driver in question.
After the first instance has been created in RESOLVE, the user can continue to create as many
other instances in the interface as are required in the coupled system.
As a general rule, integrated modelling works best when the individual models have been verified to work satisfactorily in their own applications before the coupled model is built.
Once the different instances are loaded into RESOLVE, equivalent sources and sinks can be
connected together graphically.
For example, wells in a REVEAL (reservoir simulator) model are outputs of the REVEAL model and can therefore be connected to their equivalent wells in GAP, which are inputs of the GAP model.
To make a connection, go to Edit System | Link and click and drag between the corresponding icons on the main screen. Alternatively, the "Connection Wizard" can be used to connect large numbers of items together automatically.
Connecting items together can result in loops being formed, where model A depends on the
data from model B and model B depends on the data from model A. RESOLVE will detect this
and additional data will be required to handle such loops.
See the "Models and Loops" section to see how loops are solved.
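Conceptually, the loop detection described above amounts to finding cycles in a directed graph of data links between models. The sketch below is purely illustrative (the link table and function are invented for this example, not RESOLVE internals): it flags whether a set of connections contains a loop using a depth-first search.

```python
# Illustrative sketch only: detect a dependency loop between coupled models.
# The "links" dictionary is a hypothetical set of RESOLVE connections.

def find_cycle(links):
    """Return True if the directed graph (dict: model -> list of models
    it sends data to) contains a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / in progress / done
    colour = {n: WHITE for n in links}

    def dfs(node):
        colour[node] = GREY
        for nxt in links.get(node, []):
            if colour.get(nxt, WHITE) == GREY:   # back edge: loop found
                return True
            if colour.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in list(links))

# Model A feeds B and B feeds A (e.g. a network/process recycle): a loop.
print(find_cycle({"A": ["B"], "B": ["A"]}))   # True
```

When RESOLVE detects such a loop, it asks for the additional data needed to solve the looped models together.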
Before running the system, the type of model considered needs to be specified.
To set the system up, go to Options | System Options to invoke the "System Options" screen.
On this screen one can select the forecast mode, enable the "Scripting" options or set various
flags.
Go to the "Schedule Setup" screen by clicking Schedule | Forecast data on the RESOLVE
menu. The timesteps of the RESOLVE forecast are set on this screen; timesteps can either be
fixed in duration or adaptive (adapting to changes in the simulation properties).
The RESOLVE timestep length does not determine the timestep length of the individual applications; it just gives a date at which the individual applications should synchronise (see the "How does RESOLVE work?" section for further details).
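For the fixed-duration case, the schedule is simply a list of synchronisation dates. The snippet below is an invented illustration of that idea (the dates and step length are examples, not defaults); the adaptive mode in RESOLVE itself is driven by changes in simulation properties rather than any rule shown here.

```python
# Illustrative only: generate fixed-length synchronisation dates for a
# forecast. The dates and 30-day step below are invented examples.
from datetime import date, timedelta

def fixed_schedule(start, end, step_days):
    """Dates at which the coupled applications are asked to synchronise."""
    steps, t = [], start
    while t <= end:
        steps.append(t)
        t += timedelta(days=step_days)
    return steps

sched = fixed_schedule(date(2021, 1, 1), date(2021, 4, 1), 30)
print(len(sched))   # four synchronisation dates in this example
```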
If the RESOLVE optimisation option has been selected, then the objective function, constraints,
and control information will need to be specified. This can be done from the "Optimisation
Setup" screen which can be accessed from Optimisation | Setup.
A forecast or solve calculation is performed by clicking on Run | Start from the main menu. If
there are errors in the data set (e.g. no connections are made) the run will immediately
terminate and the errors will be displayed.
Once the calculation is started, the following sequence of events will occur:
If specified on the "System Options" screen, all the client models will be reloaded from scratch. This ensures that there is no historical data in the model (e.g. from a previous run) that might affect the forecast.
Loops will be "collapsed" so that they can be treated as if they were a single model which can be solved as a separate entity. The "models" in the diagram below can equally represent loops of connected models. Go to the "Models and Loops" section for more information.
Each model is "initialised". The meaning of this depends on the application: for a
reservoir simulator this implies that an equilibration will be performed or a restart file
loaded.
For a fully compositional model the "base composition" for the model will be obtained.
RESOLVE needs to know this so that it can map a component from one application
(e.g. which may be called "methane") with a component from another (e.g. which may
be called "C1"). Refer to the "Compositional tables" section for more information.
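The component mapping described above can be pictured as a lookup between each application's naming convention and the base composition. The alias table and function below are purely illustrative assumptions, not RESOLVE's actual mapping mechanism:

```python
# Illustrative only: map application-specific component names onto a common
# base composition. The aliases below are invented examples.
ALIASES = {
    "methane": "C1", "c1": "C1",
    "ethane": "C2", "c2": "C2",
    "carbon dioxide": "CO2", "co2": "CO2",
}

def to_base_component(name):
    """Translate one application's component name to the base name."""
    try:
        return ALIASES[name.strip().lower()]
    except KeyError:
        raise ValueError(f"unmapped component: {name!r}")

print(to_base_component("Methane"))   # C1
print(to_base_component("CO2"))       # CO2
```

In practice this mapping is configured through the "Compositional tables" screens rather than in code.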
RESOLVE will then solve the system using a "bottom to top" technique.
Model E will be "solved", i.e. whatever data is present within E will be used to calculate
the data at the Model E outputs.
Data is obtained from the outputs of model E that are connected to model D inputs.
The dataset passed includes the pressure, temperature, phase rates, mass flow rates,
and compositional data. Other data can be passed as required by the model. Some
items (e.g. a separator in GAP) pass a single data point. Other items (e.g. a reservoir
simulator well) pass a table of points to represent a full response curve. When this
happens the receiving model is expected to calculate the operating point on this
response curve when it performs its solve.
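For the tabulated case, calculating the operating point amounts to interpolating on the response curve passed by the upstream model. A minimal sketch, with an invented inflow performance table (the function and values are assumptions for illustration):

```python
# Illustrative only: given a well response curve passed as a table of
# (rate, bottom-hole pressure) points, interpolate the pressure at the
# operating rate calculated by the receiving (network) model.
import bisect

def operating_point(curve, rate):
    """Linear interpolation on a rate-sorted [(rate, pressure), ...] table."""
    rates = [r for r, _ in curve]
    i = bisect.bisect_left(rates, rate)
    if i == 0:
        return curve[0][1]          # clamp below the tabulated range
    if i == len(curve):
        return curve[-1][1]         # clamp above the tabulated range
    (r0, p0), (r1, p1) = curve[i - 1], curve[i]
    return p0 + (p1 - p0) * (rate - r0) / (r1 - r0)

ipr = [(0.0, 4000.0), (2000.0, 3200.0), (4000.0, 2100.0)]  # invented table
print(operating_point(ipr, 3000.0))   # 2650.0
```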
All the data is present now for models B, C, and D. These models are solved using this
data. If they are on different machines (or can be put on separate processors) then the
solve processes will be performed in parallel.
Model A is solved.
Some of the results of the calculations will be extracted (especially relating to the data
transferred between the models) to allow them to be displayed in RESOLVE.
Those models that passed a full response curve to the next model downstream will receive the result of the downstream model calculation in the form of a point on the response curve: for instance, a reservoir simulation model that passed an inflow performance curve for one well to the surface network model will receive the result of the surface network model in the form of a rate, bottom-hole pressure or well-head pressure.
If a single solve and / or optimisation is being performed, the run stops here. If a forecast is being performed, then a timestep command will be broadcast to all the models that have time dependency to run the forecast from t0 to t1. After the timestep, all the models will be synchronised at t1, and RESOLVE will return to the Model E solve and perform a similar workflow until the schedule data runs out.
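The bottom-to-top sequence above can be sketched as a generic coupling loop. Everything in this snippet is a schematic assumption (the Model class, method names and scheduling), not RESOLVE's actual API; it also assumes loops have already been collapsed, so the dependency graph is acyclic.

```python
# Schematic sketch of the bottom-to-top solve described above; the Model
# class and its methods are invented for illustration only.
class Model:
    def __init__(self, name, upstream=()):
        self.name, self.upstream = name, list(upstream)
        self.outputs = None

    def solve(self):
        # A real driver would run the external application here; this toy
        # version just records the data received from upstream models.
        self.outputs = {"inputs": [m.outputs for m in self.upstream]}

def solve_bottom_to_top(models):
    """Solve each model once all of its upstream models have been solved."""
    order, solved, pending = [], set(), list(models)
    while pending:
        ready = [m for m in pending if all(u in solved for u in m.upstream)]
        for m in ready:        # models at the same level could run in parallel
            m.solve()
            solved.add(m)
            pending.remove(m)
        order.append([m.name for m in ready])
    return order

E = Model("E")
B, C, D = Model("B", [E]), Model("C", [E]), Model("D", [E])
A = Model("A", [B, C, D])
print(solve_bottom_to_top([A, B, C, D, E]))  # [['E'], ['B', 'C', 'D'], ['A']]
```

The middle level illustrates the point made above: B, C and D only depend on E, so their solves can proceed in parallel on separate machines or processors.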
The results of the calculations can be viewed by going to Results | View Results. The data that is passed between the models (i.e. pressures, temperatures, rates) is stored.
Internal module variables that have been published in RESOLVE can also be plotted in the
results.
The IPM suite "core" set of tools (i.e. GAP / PROSPER / MBAL / PVTp) enables the
establishment of a fully integrated model of a field from the reservoir level up to the separator
level.
This concept of integrating the different parts of a producing system is now a standard in the
industry, and allows us to gain a detailed understanding of the interactions between reservoirs,
wells and surface network.
For instance, during the design phase of a project, such a model makes it easy to diagnose where the bottlenecks are in the system: are the selected tubing sizes too small or too large, are the surface facilities adapted to the reservoir and well potentials?
When associated with a powerful non-linear optimisation engine, it also allows the setup of the producing system to be optimised: which wells should be choked down to respect a production plateau, what choke settings need to be selected, what is the optimum gas lift gas allocation...
Value can be added to this type of model by pushing the integration concept further, including both upstream (i.e. reservoir) and downstream (i.e. process / economics) aspects of the producing system, as illustrated below.
Some of the applications used at these upstream and downstream levels are not provided by Petroleum Experts, which makes it necessary to dynamically link engineering tools from different providers.
This will allow the engineers to design "No Compromise" systems where the most appropriate
tools can be used to model each section of the system.
RESOLVE is the IPM tool that will allow dynamic coupling between different petroleum
engineering packages.
RESOLVE is a master controller which allows Petroleum Experts and third party software
applications to be connected and controlled centrally: while each application runs
autonomously, RESOLVE takes care of synchronisation, data transfer, reporting, data gathering,
solving and optimisation.
RESOLVE will then allow systems such as the ones described below to be set up:
Material Balance, Well and Surface Network Model + Process Model + Economic
Spreadsheet:
Re-cycling setups (e.g. passing fluid from a process model to gas-lift injection or produced
water re-injection models) can be handled as well.
It is also possible to embed discrete calculations into an integrated model and use the results of
these to drive or control the system. These calculations are integrated into the framework in the
form of RESOLVE data objects, and an example would be the performance of an EOS
calculation to determine whether a flow assurance condition is being met. They are separate
and complementary to the application drivers described above.
RESOLVE is the IPM tool that allows dynamic coupling between different engineering
packages, such as economic spreadsheets, reservoir and process simulators.
This section describes the general flow of data when RESOLVE is used to run a simple
integrated model.
In order to establish these connections, RESOLVE uses an "application driver" scheme: each
application "talks" to RESOLVE through its own driver. More information on the drivers can be
found under the discussion on Connecting to external applications.
This technique has the advantage of allowing users with specific connectivity requirements (for instance, wanting to connect to an in-house or company-developed tool) to build their own connections.
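A driver of this kind can be pictured as a small adapter interface that the controller calls into. The class below is a hypothetical sketch of such an interface; the method names, structure and toy implementation are assumptions for illustration, not the actual RESOLVE or OpenServer driver API.

```python
# Hypothetical sketch of an application-driver interface; everything here
# is an assumption for illustration, not the real driver API.
from abc import ABC, abstractmethod

class ApplicationDriver(ABC):
    """Adapter between the controller and one external application."""

    @abstractmethod
    def load(self, filename):
        """Start the application and load the model file."""

    @abstractmethod
    def sources_and_sinks(self):
        """Report the inputs/outputs the model exposes for linking."""

    @abstractmethod
    def solve(self, inputs):
        """Run one solve with the data passed from upstream models."""

    @abstractmethod
    def step_to(self, sync_date):
        """Advance the application to the next synchronisation date."""

class EchoDriver(ApplicationDriver):
    """Toy implementation used only to show the calling pattern."""
    def load(self, filename): self.filename = filename
    def sources_and_sinks(self): return {"sources": ["sep1"], "sinks": ["well1"]}
    def solve(self, inputs): return {"sep1": inputs.get("well1", 0.0)}
    def step_to(self, sync_date): self.date = sync_date

drv = EchoDriver()
drv.load("field.gap")
print(drv.solve({"well1": 1250.0}))   # {'sep1': 1250.0}
```

The appeal of this design is that a user-written driver only needs to honour the interface; the controller does not care what runs behind it.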
the transfer of data from one application to the other, making the data formats compatible. This data transfer is performed at every RESOLVE timestep (see the timestep description below for more details).
The RESOLVE timesteps are the times at which dynamic coupling between the
applications take place: at this point in time data is passed from one application
to another and results are written in RESOLVE.
RESOLVE can be used to solve the field model at a specific point in time (i.e. equivalent to the
solve network capacity of GAP), or to run a prediction forecast over a pre-defined length of time.
The reservoir models (i.e. this could be either material balance or numerical
simulation models) are initialised at the start date of the RESOLVE run.
For each well model, the fluid PVT description (i.e. black oil or compositional)
and the well inflow performance data (i.e. IPR) are passed to the network
models.
The production and injection systems are solved and optimised against the
GAP objective function.
The fluid rates and PVT descriptions (i.e. including the fluid composition if
available) at the separators are passed to the process model and the process
model is solved.
The well performance results are passed back to the simulation models as,
depending on the user choice, fixed rate, fixed bottom hole pressure or fixed
wellhead pressure.
If a global optimisation has been set up in the RESOLVE model, RESOLVE will iterate on the third and fourth points in order to optimise the system against the overall objective function before passing any data back to the simulation models.
The simulation models then take the required timesteps up to the next
synchronisation time of RESOLVE.
Events that occur in any model will be accounted for during the run and reports will be generated
dynamically.
It may be noted that this workflow uses an explicit coupling procedure. The adverse effects of this (instability, material balance errors) in reservoir-surface network coupling are mitigated through a careful choice of the IPR that is passed from the simulator to the network.
The user interface consists of five main sections, as indicated on the diagram above:
Main Menu Toolbar: This is the menu used to issue commands to RESOLVE. The user guide follows the structure of the main menu toolbar, allowing for a detailed description of all the options available.
Shortcut Icons Toolbar: This toolbar contains menu accelerator icons. They are described in detail in the "Shortcut Icons Toolbar" section.
Main Display Window: By default this window will include the graphical view, which describes the network drawing. In some cases, for instance when the RESOLVE model is run or when an optimisation is performed, other child windows will appear in the main display window section. Further information regarding this window can be found in the "Main Display Window" section.
HelpViewer Window: This window enables the user to access specific sections of the user guide directly from RESOLVE. Further information regarding this window can be found in the "HelpViewer Window" section.
System Window: This window enables the user to view all the client modules included in the model and their respective elements. Further information regarding this window can be found in the "System Window" section.
Scenarios Window: This window enables the user to view all the scenarios set up in the model.
The user can switch between the "HelpViewer" window and the "System" window by clicking on the tab corresponding to the view to display, as illustrated below.
The section below describes the options available as shortcuts on the RESOLVE icon bar.
Help Functions
When this is selected, a list of available client modules is displayed. The user has to choose the required client program from that list and then click on the location in the graphical view where that client program is to be placed.
When either of these is selected, clicking on an equipment item in the system window will mask or unmask the item as directed.
The mask option will not modify the scheduling of the items: if an item is masked but a scheduled event specifies that it should be opened, it will automatically be unmasked.
Accelerator for Edit System | Disable and Edit System | Enable.
When either of these is selected, clicking on an equipment item in the system window will disable or enable the item as directed.
The disable option will modify the scheduling of the items: if an item is disabled, then it will be kept inactive irrespective of its associated schedule.
These respectively Zoom In / Zoom Out on the graphical display.
When "zoom in" is selected, a zoom can be achieved either by clicking the mouse on the system window, which will zoom in a fixed amount and set the centre of the view to the position clicked, or by sweeping an area with the mouse which RESOLVE will then display.
The aspect ratio will be retained when an areal zoom is performed.
This automatically un-zooms all the way back to the default view.
Scheduling Functions
This gives direct access to the Visual Workflows module for the various sections: Start, Pre-solve, Post-solve and Run iterator.
Accelerator for Event/Actions | Event-Driven Scheduling.
This gives direct access to the event driven scheduling section, in which conditional scheduling can be set up (IF this happens THEN perform this action).
Accelerator for Event/Actions | Scheduling options
This gives direct access to the scenario manager section, in which different scenarios can be set up.
Scripting Functions
Forecasting Functions
This launches the calculations of the scenarios specified in the scenario manager section.
Accelerator for Run | Single Step.
This icon is only active if the RESOLVE optimiser has been selected and allows a single optimiser iteration to be run. The run will be paused after this.
Results Functions
This gives direct access, in tabular format, to the results of the latest (or even currently running) forecast run as well as all the previously saved results.
Accelerator for Results | View Forecast Plots.
This gives direct access, as plots, to the results of the latest (or even currently running) forecast run as well as all the previously saved results.
Accelerator for Results | View Forecast Plots (New Window).
This opens a new plotting section in which the results of the current run as well as all the previously saved results can be viewed.
Accelerator for Results | View Scenario Results (Table).
This gives direct access, in tabular format, to the results of the scenarios that have been run.
Accelerator for Results | View Scenario Plots.
This gives direct access, as plots, to the results of the scenarios that have been run.
Accelerator for Results | View Scenario Plots (New Window).
This opens a new plotting section in which the results of the scenarios run can be viewed.
Accelerator for Results | View Optimisation Results.
This gives direct access to the inflow performance data coming from the reservoir simulation models for every well at every timestep.
This is only active if the Run | IPR Logging option was previously selected.
Annotation Functions
Draw a box on any part of the RESOLVE canvas to help annotate the model elements.
Properties can be accessed by right-clicking on the resulting box.
Add a text box to any part of the RESOLVE canvas to help annotate the model elements. Properties can be accessed by right-clicking on the resulting box.
Add an arrow to any part of the RESOLVE canvas to help annotate the model elements. Properties can be accessed by right-clicking on the resulting arrow.
When starting a RESOLVE model, the main display window will only contain one child window,
the graphical view window, which describes the connections between the different client
modules.
Several other child windows will be displayed at different stages of model building / running, as
described below:
This child window describes the connections between the different client
modules.
Working in this window, the user can modify the location of the different
icons, as well as mask / unmask; disable / enable or delete some
connections.
New client modules can be created and connected to the already existing
ones.
In this window, elements with an arrow pointing INTO the icon are data receivers and elements with an arrow pointing OUT of the icon are data providers, as illustrated below.
The colour of the connection icon will describe the type of fluid flowing
through that connection: red for gas, green for oil and blue for water.
Window
Any error messages coming from one of the applications used within the
model will appear in red text
Output Log Window: As soon as a calculation is launched, an output log window will be displayed, as illustrated below.
This window displays more detailed information regarding the status of
the run, both DURING and AFTER the calculations have been performed.
Optimisation
Progress Window
The body of the screen consists of four tabbed sections and a "Save
Results" section, as described below.
Ctrl (control) Results: This displays, for each iteration, the values that the control variables have been set to.
Fn (function) Results: This contains a list of the results for the objective function and the constraint equations.
The first row represents the value of the objective function
obtained by RESOLVE following a pass through the
system (i.e. at each iteration of the optimiser). The
second row contains the value predicted for this equation
by the optimiser. The third and fourth rows contain the
error term calculated from the first two rows. This term is
used to calculate new bounds for the control variables at
the next iteration.
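As a purely illustrative reading of those rows: the difference between the objective value RESOLVE measured and the value the optimiser predicted gives an error term, and a generic way such an error can be used is to shrink or grow the trusted range of the control variables. The function, numbers and update rule below are invented for illustration and are NOT RESOLVE's actual algorithm:

```python
# Invented illustration of the predicted-vs-actual error idea; a generic
# trust-region style bound update, not RESOLVE's algorithm.
def update_bounds(bound, actual, predicted, tol=0.05):
    """Shrink the control-variable bound when the prediction was poor,
    grow it when the prediction was good."""
    error = abs(actual - predicted) / max(abs(actual), 1e-12)
    return bound * (0.5 if error > tol else 1.5), error

bound, err = update_bounds(bound=100.0, actual=1000.0, predicted=990.0)
print(bound, err)   # 150.0 0.01
```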
The child windows are generally displayed on top of one another: clicking on one window will automatically activate that window.
It is also possible to organise the different windows displayed by going to the "Window" section
of the main menu toolbar.
Cascade will organise all the windows one on top of another, for example:
Tile will re-arrange the size of the windows so that they all fit in the main display
window, for example:
It lists all the subjects that are described in the RESOLVE help sections and user guide and
enables the user to directly access one of these subjects by selecting it on the list and double-
clicking on it.
By default, it is not displayed: the user will have to click on the "System" tab to display this window.
It lists all the client modules and their corresponding data receivers and data providers.
By clicking on one of them, for instance here the "Well1" element of the "Reservoir1" client
module, a summary screen will appear describing the main characteristics of that element, as
illustrated below.
For instance, this element is labelled "Well1", it is a Well, it is a data provider and it is controlled
by system response.
This section will describe in detail how to set up each of the client modules available for use within RESOLVE, as well as describing all the options that are available for each client module.
As each client module is connected with RESOLVE through a driver, this section also doubles
as a description of the features supported by each driver.
Before the driver can be used effectively, it must be configured for use with GAP.
The configuration screen can be accessed from the RESOLVE driver registration screen
(Select Drivers | Register Drivers on the main menu).
Click on the GAP driver in the list and press the "Configure" button.
GAP executable path: In this field the user should enter the directory in which the GAP executable is installed. If this is left blank, RESOLVE will attempt to start GAP from the same directory in which the OpenServer executable (i.e. PXServer.exe) is running. For safety it is recommended to enter the directory from which the GAP executable has to be started.
Application timeout: RESOLVE will wait a certain period when attempting to start up GAP. If it is not able to do so, RESOLVE will raise an error after the defined timeout period. For most cases a timeout of 30 seconds should be appropriate. If a fairly slow machine is used, the user may want to increase this value to avoid RESOLVE "timing out" unnecessarily. Two different timeout durations can be specified, depending on whether the model is run on a single machine or over a cluster setup.
Input fields
General: The settings under this tab relate to the GAP model settings. More information regarding these settings can be found under the GAP Driver : General section.
Source/Sinks: This tab allows extra sources and sinks to be added to those displayed in RESOLVE by default. More information regarding these settings can be found under the GAP Driver : Sources and Sinks section.
Wells / Inflows IPR: This section enables the user to specify the way IPRs are generated / handled before being sent to the GAP model. More information regarding these settings can be found under the GAP Driver : Wells / Inflows IPR section.
Compositional: This section enables the user to specify which composition (i.e. list of components) is taken as the default list in GAP. More information regarding these settings can be found under the GAP Driver : Compositional section.
'Flowing Conditions' tags: This section allows well sources to be nominated to pass extra information to PROSPER calculator objects. More information is covered in the GAP Driver : Extra Tagged Data sections.
'Tight Reservoir' tags: Used to select wells to be used in tandem with a Tight Reservoir Data Object. More information is covered in the GAP Driver : Extra Tagged Data sections.
'Multi tubing' Source/Sink tags: This section allows sources and sinks in GAP to be connected to REVEAL wells with multiple tubing strings. More information is covered in the GAP Driver : Extra Tagged Data sections.
2.5.1.2.2 GAP Driver : General Section
Once the GAP module icon has been placed in the graphical view, it will be necessary to associate this module icon with a specific GAP model, as well as to set up the options that are going to be used to integrate the GAP model into the overall RESOLVE model.
This can be done through the "Edit Case Details" screen that can be obtained by double-
clicking on the GAP module icon.
Input fields
The following options are available on this screen:
GAP Filename: This is where the location of the GAP case considered (i.e. extension *.gap) has to be specified.
System: As well as the main system, GAP cases can optionally contain associated water and gas injection systems. This control can be used to select one of these systems rather than the main production system. This is useful for connecting separately to a GAP case and its associated injection system.
This can be done by using the following procedure:
Create one GAP instance for the production system. Open the "Edit
the GAP case" screen and specify which GAP file this GAP instance
relates to. Keep the "System" option to "Main System".
Create another GAP instance for the water OR gas injection system
OR gas lift injection system. Open the "Edit the GAP case" screen and
specify which GAP file this GAP instance relates to - THIS SHOULD
BE THE MAIN GAP FILE, NOT THE INJECTION FILE. Change the
"System" option to "Associated Water Injection" OR "Associated Gas
Injection" OR "Associated Gas Lift Injection".
Using this method allows a single GAP licence to be used to run the production and associated injection models.
Machine Name: GAP cases run from RESOLVE can be distributed over a network. Enter in this field the name of the machine on the network on which the GAP case should run. The machine name can be given as an IP address or a name in the DNS register (e.g. "dave-8200"). When entering file (case) names for remote machines, the file name entered should be relative to that machine and not the local machine.
Use specified computer / use cluster: The machine name (above) is not used if the "use cluster" option is specified. In this case, RESOLVE will start the GAP model on a node of a cluster configured either with PxCluster or LSF. For more information on PxCluster (i.e. the Petroleum Experts cluster software), refer to the "Setting up a Cluster" section.
Note: In the general case, if GAP is connected to a reservoir simulator, this option is not required. It is sufficient to start the simulator from a restart file (with the RESOLVE run starting on the same date), as GAP is effectively performing instantaneous solves.
Snapshot Mode: Through this setting, RESOLVE can force GAP to save, or not to save, prediction snapshots, independently of the value of the corresponding setting in the GAP prediction settings.
Action to take if MBAL wells connected to simulator: This setting is useful when several types of reservoir model are available for one reservoir: for instance, an MBAL model and a numerical simulation model are both available for the reservoir considered. In this case, the GAP model may be connected within GAP to an MBAL model for "Reservoir 1" and also linked through RESOLVE to a numerical simulation model for "Reservoir 1". This option enables the user to decide which of the two is used for the run. By default, the numerical simulation model will be used over the MBAL model.
Model: GAP can be set up to calculate the system potential at each timestep or not. The system potential can be added to the "Variable reporting set" in the usual way or, if GAP is running predictively, the system potentials will be stored in the GAP forecast results. It takes a little longer for GAP to calculate potentials at each timestep and so for this reason the default setting is "do not calculate system potential".
Solve mode:
Rule based solver: This option triggers the use in GAP of the Rule Based Network Solver.
Full optimization: This option triggers the use in GAP of the Optimiser.
Solver only: This option invokes the solver only, with no optimisation.
Parallel: This option triggers the use of parallelised calculations in GAP.
Stop execution if solver / optimiser fails (or just log): If this setting is selected and a convergence error is experienced in GAP, then the entire prediction will be stopped. If not, a message will be logged in the calculation progress window of RESOLVE to report the convergence error.
When loading a GAP model, the nodes that appear by default are all the separator (i.e. data provider) and well (i.e. data receiver) nodes.
This section enables additional nodes to be set up in the RESOLVE model.
Additional nodes such as the "Total System Lift Gas" could be used for instance to feed a GAP
model with the amount of gas lift available at each timestep from a process model in Hysys or
UniSim Design.
Input fields
Total system lift gas: If this is checked then an icon representing the total lift gas available for the GAP system will be displayed in RESOLVE. This can then be connected to a source from another application. For example, the gas lift supplied to a GAP system can come from a plant (Hysys) model or it can be calculated from a spreadsheet macro. GAP will then use this total amount of gas lift gas available as part of its gas lift gas allocation optimisation calculation.
Pass gas properties: This option is only available if the "total system lift gas" option is checked: it determines whether or not the PVT properties (including EOS properties if present) are applied to the gas lift stream.
Lift gas for individual gas lifted wells: If this is checked then an icon representing the lift gas for each gas lifted well will be displayed (in addition to the usual icon representing the well itself). The icon will have the label "<wellname>-gaslift". The lift gas passed to this node from a different model can be applied in two ways depending on the well model:
Gas lifted well with fixed injection rate: In this case the lift gas quantity passed from the source model will be applied as a fixed gas lift injection rate in the well.
Gas lifted well with variable injection rate: In this case GAP is to optimise the amount of gas lift gas injected into the well against the required objective function at the separator level. The lift gas quantity passed from the source model will therefore be set as a maximum injection rate constraint for that well on the optimiser.
Pass gas properties: As for the total system lift gas, this determines whether the PVT properties of the gas lift stream will be passed to the well.
Inline injection elements: If this is checked then inline injection elements will be displayed as additional icons on the RESOLVE screen. These can then be connected to sources of lift gas in other models. The same rules as for gas lifted wells apply: if the injection is fixed then the amount of gas passed will be applied as a fixed injection rate; if the injection is controllable then the amount passed will be set as a maximum constraint.
Show well layers: The well layers will be displayed as individual sinks.
Unused lift gas: This will create a "source" icon in RESOLVE that allows the unused quantity of lift gas to be passed from GAP into another model.
This feature should be used with care. The amount of unused lift gas is calculated simply from the GAP solve results (i.e. lift gas available minus lift gas used). The properties of the gas are derived from the lift gas passed into the model at the "total system lift gas" icon.
If this is not connected or not present then dummy properties will be passed. Even if the total system lift gas is connected there is still the possibility that the
July, 2021
RESOLVE Manual
User Guide 86
lift gas could come from various internal sources in the GAP model and so it is
not possible to predict the exact properties of the unused lift gas
Individual separation streams for oil, gas, water, and lift gas: If this is checked then each separator icon will have a "child" icon representing the separated stream of each phase. Note that this is for black oil cases only
Add extra nodes: This invokes the following screen to allow internal joints from the GAP system to be added as new sources on the RESOLVE screen.
The list on the left contains the names of all the joints in the GAP system. To add a joint to the output sources, highlight the required joint and click Add. The joint will appear in the list on the right. Remove and Clear can be used to delete entries in the right-hand list
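The fixed versus controllable injection rule described above (which applies to gas lifted wells and inline injection elements alike) can be sketched as follows. The function name and return convention are hypothetical, for illustration only, and are not part of the RESOLVE or GAP API:

```python
# Hypothetical sketch, not a RESOLVE/GAP call: how a lift-gas quantity passed
# from another model is interpreted by the receiving element.
def apply_lift_gas(passed_rate, injection_mode):
    """Return (fixed_rate, max_constraint) for the GAP element."""
    if injection_mode == "fixed":
        # Fixed injection: the passed quantity is applied as the injection rate.
        return passed_rate, None
    if injection_mode == "controllable":
        # Variable injection: the passed quantity only caps the optimiser.
        return None, passed_rate
    raise ValueError(f"unknown injection mode: {injection_mode!r}")
```

The point of the split return value is that a fixed element receives a rate to honour exactly, while a controllable element receives only an upper bound for the optimiser.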
The main objective of this screen is to select whether or not the driver will perform a regression on the inflow performance data received from the reservoir model. It is also possible to prevent regeneration of performance curves for the network solve if the performance curve model is used in GAP.
It is important to note that, given the progress made in inflow performance exchange between reservoir and surface network models with the scaling methods (see the "Connection to REVEAL - REVEAL driver details" section), this method has been superseded and is no longer recommended.
The regression process matches the BHP vs. rate points received from the simulator (i.e. inflow lookup tables) using a specific IPR model, such as Straight Line PI for an oil producer or Forchheimer for a gas / retrograde condensate producer.
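As an illustration of the idea, a least-squares fit of a Straight Line PI model, Q = PI * (Pres - Pwf), through the tabulated BHP vs. rate points might look as follows. This is a sketch of the concept under that assumed formulation, not the actual RESOLVE regression code:

```python
# Sketch only (assumed formulation): fit Q = PI * (p_res - pwf) to inflow
# lookup-table points by least squares through the origin in drawdown.
def fit_straight_line_pi(p_res, points):
    """points: iterable of (pwf, rate) pairs; returns the fitted PI."""
    num = sum(rate * (p_res - pwf) for pwf, rate in points)
    den = sum((p_res - pwf) ** 2 for pwf, _ in points)
    return num / den
```

For data that is exactly linear in drawdown, the fit recovers the underlying PI; for noisy lookup-table points it returns the least-squares estimate.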
The main screen enables this modification to be applied to all or none of the wells.
Using the Advanced button, it is possible to select this regression on a well-by-well basis, as
illustrated below:
Use rate gradients to calculate phase fractions in IPR: This option, if selected, will use the rates from the two previous timesteps and extrapolate to the current timestep to obtain the phase fractions
Populate IPR tables even when performing regression (slower - debug): This option will ensure that the IPR tables are populated in GAP even if the regression calculation is performed
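The rate-gradient option above amounts to a simple extrapolation from the two previous timesteps. A sketch under the assumption of linear extrapolation (hypothetical helper names, not RESOLVE internals):

```python
# Assumed linear extrapolation of a phase rate from the two previous
# timesteps to the current one (sketch, not the documented RESOLVE scheme).
def extrapolate_rate(t0, q0, t1, q1, t_now):
    slope = (q1 - q0) / (t1 - t0)
    return q1 + slope * (t_now - t1)

# Phase fractions computed from the extrapolated phase rates.
def phase_fractions(oil, water, gas):
    total = oil + water + gas
    return oil / total, water / total, gas / total
```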
This section is used only for compositional or compositional tracking GAP models.
The screen of that specific section is as follows:
This section enables the user to select which item from the surface network will be used to provide a standard list of component names for the entire GAP model.
If this is left at its default, the component names list will be taken from the connected items.
Input fields
Select item from which to extract composition: This network element will provide the names of the components for the entire GAP model.
Rate Model: The Rate Model option at the bottom of the screen enables the user to choose between IPR data passed to GAP based on volumes or on mass. A mass-based transfer is necessary if process-independence is required from the model
Perform 'Target GOR' on full composition: When a compositional reservoir simulation model is connected to a GAP model, both a GOR and a composition are passed across by the simulator to the GAP model. If this option is selected, RESOLVE will perform a 'target GOR' operation on the received fluid composition, matching it to the passed GOR to obtain a new composition.
If this option is not selected, then RESOLVE will simply pass on the composition received from the simulation model to GAP.
The selected Sources appear in the list. Once selected, the Source will include a gauge icon
which is used to pass data from the node to a ProsperCalculator Data Object. For more
information, please refer to the ProsperCalculator Data Object section of this manual.
A well will appear in the list if it is a Tight well and if it uses the option 'Use Resolve Tight
Reservoir - Inflow' in GAP. Once selected, the well will include a Tight Reservoir icon which can
be connected to and from a Tight Reservoir Data Object. This is to ensure that the *.rdo file
used by GAP is consistent with the Tight Reservoir Data Object. For more information, please
refer to the Tight Reservoir Data Object section of this manual.
Wells in REVEAL can have multiple tubing strings or flow paths. This option is used to connect a
source or sink in GAP to these wells in order to connect them to the surface network. For more
details on the REVEAL configuration, please refer to the section Connecting to REVEAL.
2.5.1.3 Publishing GAP variables
These variables can then be used for "Event driven scheduling", or for building up different
"Scenarios", for example.
By clicking on "Edit Variables", the following screen appears, enabling the user to select which GAP variables are to be published in RESOLVE:
OpenServer variables: This screen is used to copy OpenServer strings from the GAP interface into RESOLVE.
There are two ways of using it:
Go to the GAP interface, do <ctrl> and <RightClick> on the required variable,
copy the OpenServer string to the clipboard, and paste it into the grid
displayed here. When the grid cell loses focus the "writable" status and the
unit will be displayed. This can be used to export any OpenServer variable
from GAP into RESOLVE.
As a convenience, the GAP solver variables and equipment constraints are displayed in the lists on the left hand side of the screen. The user can select
between the solver variables and the equipment constraints list by clicking
on the respective tabs. To add a variable to the list of published variables
used in the event driven scheduling section, highlight the variable in question
and click on the Add button.
Solver Variables: These variables are the GAP solver (output) variables only: they will always be assigned a "non-writable" status. The drop-down lists below the equipment list can be used to add
When an entry is made, the OpenServer tag, the unit, and the writable status
will be set up automatically, along with a suggestion for a variable name.
In the above example, the oil rate from the solver results for each of the GAP
equipment items has been set up, along with the separator pressure against
which the system is to be solved.
Equipment masking: This section allows the mask status of pieces of equipment to be published as variables.
To do this, highlight the required piece of equipment from the list on the left
hand side and click on the Add button.
The variable has the value "1" when the element is masked in the
system, and "0" when it is not masked.
In the above example, the mask status of a GAP well has been set up
Equipment bypassing: This follows the same logic as the equipment masking, except that it assists in publishing the bypassing status of the elements in the system rather than the masking status
Group Membership: This section enables group memberships to be monitored: it lets the user know whether a specific element / node of the GAP model is part of a specific group.
Variables of the type "equipment "A" is a member of group "G"" can be set up in this section.
The equipment can be selected from a drop-down list in the "equipment" column. Similarly, the group can be selected from a drop-down list in the "group" column. A variable name for this will be suggested automatically, but this can be changed. The variables have the value "1" if "A" is in group "G", and "0" if it is not
MBAL tank variables: These are variables that relate to the tank models which are internal to the GAP model considered.
The list of tanks that are in the GAP model is displayed on the left hand side.
The most convenient way to use this screen is to first create an instance of the
MBAL application.
RESOLVE will automatically open the MBAL model that is associated with this
tank - The name of this MBAL tank will appear in the Equipment column.
An OpenServer variable can then be taken from the MBAL model (using
<Ctrl> <RClick> in the usual way) and pasted into this table.
The unit of the variable will automatically be written into the "Unit" column. A
variable name should be given to the variable, as shown.
If the unit of the variable does not appear, this means that the
variable has not been found. This can be due to the fact that the
red arrow was not clicked after the MBAL file was loaded, leading
to no reservoir name being specified in the Equipment column.
If this happens, click on Clear and start the process again
Note that the values of these variables cannot be changed dynamically (i.e. during the run). They are changed at the start of the run, before the prediction in GAP is started. They would normally be used for uncertainty analysis, for example, to determine the effect of OOIP or OGIP on the length of a plateau.
When the screen is exited (by pressing "OK" or "Cancel"), MBAL will also be
exited provided the instance of the MBAL application was created from this
screen.
Cumulative rate variables: The cumulative black oil rates can be exported, either at the node level or the system level.
The required variables are selected from the list on the left hand side of the
screen. All the variables pertaining to a particular piece of equipment can be
exported in one click by highlighting the equipment itself.
A single variable type (e.g. Cum Oil) can be exported for all pieces of
equipment by using the drop down lists at the bottom of the screen
2.5.1.4 Optimisation
For example, well controls can be set up in GAP to optimise an objective function in a separate
application (a plant process model for instance) while obeying constraints which can be in GAP
or in another model.
The screens described here are used to set up control variables, constraints, and an objective
function as required in the GAP model.
They are accessed by right-clicking on the GAP icon on the RESOLVE screen or by going to
the Optimisation | Setup section.
Objective Function: The top section of the screen enables the objective function to be set up.
This is not mandatory - the objective function might be located in another application.
Label: This is the label for the objective function that will be used when reports are generated by the RESOLVE optimiser
OpenServer variable: This is the GAP output variable (in the form of an OpenServer string) that represents the objective function.
The OpenServer string can be obtained directly from the GAP interface by using <Ctrl> and <RClick> when the mouse is over the output variable display, or by selecting the equipment and variable to consider in the appropriate dropdown boxes.
Selecting Set after this enables the OpenServer string to be set up directly
Maximise / Minimise: One can choose to maximise or minimise the variable selected above
OpenServer variable: This is the GAP output variable (in the form of an OpenServer string) that represents the constrained quantity.
The OpenServer string can be obtained directly from the GAP interface by using <Ctrl> and <RClick> when the mouse is over the output variable display, or by selecting the equipment and variable to consider in the appropriate dropdown boxes.
Selecting Set after this enables the OpenServer string to be set up directly
Relation: The constraint can be "less than", "greater than", or "equal to" the value specified in the value section
Value: The constraint value
Unit: The unit of the variable in question. The constraint value is in these units
This allows controls / control parameters to be set up in GAP. Controls are variables that can be
adjusted in order to maximise an objective function while observing constraints. For example, in
GAP (when run by itself) wellhead chokes can be modified automatically by the GAP optimiser
to optimise oil production or revenue. In RESOLVE, wellhead chokes can be set to optimise an
objective function which is not a variable in GAP, but that is defined in another model: for
instance in an Excel spreadsheet.
Report name: This is the label that will be used in the reports that RESOLVE generates when the optimiser is run
OpenServer variable: The RESOLVE optimiser accesses the control variables through the GAP OpenServer interface. This column contains the OpenServer tag string for the control variable. These strings can be obtained from the GAP interface itself: in common with all the IPM applications, pressing <Ctrl> and <RClick> when the mouse is over a variable input field will yield a screen which gives the OpenServer variable.
To get the current set of control variables from GAP automatically, use the "Add
GAP controls" button.
To allow the GAP separator pressure to be a control parameter, use the "Vary separator pressure" button. In systems where a GAP model is linked to a process model, this makes it possible, for instance, to avoid choking back wells heavily with a low separator pressure in order to respect a rate constraint at the separator, and instead to run with a higher separator pressure, thereby passing more
It is important, of course, to supply the tag strings of input variables which can
be adjusted to optimise the objective function
Unit: Once the OpenServer variable has been supplied, this column contains the current GAP unit for the quantity in question. This is the unit that will be used when entering the allowed variable range (below)
Bounds: Enter here the minimum and maximum values for the control.
Again these can come directly from GAP for the current set of control variables. It is not necessary to enter either maximum or minimum values, but this should be done if there is a physical limit on the variable (for instance, pressure drops or injected gas quantities cannot be negative)
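The role of the bounds can be sketched as a clamp applied to any control value the optimiser proposes. The helper below is hypothetical and purely illustrative:

```python
# Illustrative sketch: keep a control variable within optional min/max bounds,
# e.g. so pressure drops or injected gas rates cannot go negative.
def clamp_control(value, lower=None, upper=None):
    if lower is not None:
        value = max(value, lower)
    if upper is not None:
        value = min(value, upper)
    return value
```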
Integer Controls
This section allows mixed integer optimisation cases to be set up, which are useful in cases such as pipeline routing optimisation.
The following assorted functions are available in the GAP driver: these functions can be
accessed for instance by right-clicking on the GAP module icon in the graphical view.
Toggle Select (This Item / All Children): This enables the item considered or all of its "children" items to be selected
Select All Children / Unselect All Children: This enables the module considered and all its children items to be selected / unselected
Save Case: Saves the GAP case to the current file name
Reload Case: Reloads the case from file: this can be useful if the file considered has been edited.
It is important to note that RESOLVE will always reload the file before a
simulation run, unless told otherwise in the simulation options
Display Child Icons / Do Not Display Child Icons / Display Only Connected Child Icons: This enables the user to select whether to display only the module icon, the module icon and all child icons, or the module icon and only the connected icons.
This is particularly useful when large models with numerous sources / sinks are used: displaying all the items in these cases can reduce the clarity of the graphical view of the RESOLVE model.
By default, RESOLVE will display the module icon and the child icons. If the
number of child icons is too large, then RESOLVE will only display the
module icon. A red cross will then be displayed on the module icon to
specify that it has child icons associated but that they have not been
displayed
Change Label / Alias: This enables the label or alias associated with the module considered to be changed
Do not send / receive EOS property data: This is only available when models with compositional data are considered. These options enable the flow of compositional data from or to the specified module to be blocked
Optimiser Setup: These relate to the RESOLVE optimisation function. Further information on this function can be found in the "RESOLVE Optimisation" section
Test PXServer: This simply tests whether PXServer.exe is registered with the operating system and returns the full path of the registered application
Output Variables: This allows additional variables from the GAP run to be added to the main RESOLVE reporting facilities.
All the variables in the right hand side list will be displayed in the RESOLVE results alongside the default RESOLVE results.
There are a few points which should be kept in mind when setting up the GAP case to use within
a RESOLVE model:
Well Models: The wells should be modelled with a VLP / IPR intersection within the GAP model - this is set from the main summary screen of the well data entry screen in GAP
Optimisation Setup: To make use of GAP's optimisation capability, there should be appropriate controls available in the GAP model.
In other words, gas lifted wells should have the lift gas calculation set to "calculated", naturally flowing wells should be controllable (i.e. the dP control should be set to "calculated"), and so on
System: The separator and / or injection manifold pressures that the model is to run with
The following GAP items are considered as source or sink items in RESOLVE.
Wells: These are data acceptors, i.e. they accept tabular inflow data from the item to which they are connected. They also implement a bi-directional link, i.e. they return solution values to the connected module that lie on the original curve that was passed
Inflows: These have the same functionality and manipulate the same data as GAP wells as far as RESOLVE is concerned
Separators: These are data providers, i.e. they pass data to a connected object. Only a single point is passed and so no data can be returned, that is, they implement a uni-directional link
Injection manifolds: These are data acceptors and receive data from a connected object. In general, RESOLVE attempts to maintain a continuous pressure profile over the system and so the injection manifold pressure is set to the pressure of the connected node. The rate that is injected will depend on the injectivity of the system: the rate from the connected node is therefore set only as a constraint on the injection. If the injection system is physically able to inject that amount, then it will be injected; otherwise the rate injected will be equal to the maximum rate the injection system can inject
Sources: These are data acceptors and behave similarly to injection manifolds. The main difference is that GAP sources do not have a pressure variable: the pressure is allowed to float to inject the rate that they are set to (in this sense they have the opposite behaviour to injection manifolds). They accept a single data point, i.e. the link is uni-directional.
Note that sources do not appear as separate items in RESOLVE when they are used as targets for inline separators to separate fluid streams in GAP. They only appear if they are used "standalone"
Sinks: These are data providers. As far as RESOLVE is concerned, they have the
Additional Information
Setting up GAP sources and injection manifolds: When these items are set up in GAP, they refer to a labelled fluid definition that consists of a description of the fluid in question.
For example, a gas injection manifold may refer to a fluid description called "gas01" with appropriate associated properties (gas gravity, etc). An oil source may refer to a fluid description called "oil01" with associated properties GOR, water cut, oil gravity, gas gravity, and so on. These fluid descriptions are set up on the "fluid" tab of the GAP data input screen for the item in question.
This message indicates that there is an inconsistency in either the wells or tank data. A
common reason for this to occur is that the prediction start date set in RESOLVE does not
correspond to the end of the production history in the GAP/MBAL models. Consequently, there
is no means for GAP to know the initial Reservoir Pressure at the start of the prediction.
This can also be caused by missing files (e.g. missing VLP files), file names which are too long, and errors in the underlying models (i.e. GAP/MBAL/PROSPER, etc.). The philosophy that is
advocated in building an integrated model is to ensure that the underlying models are consistent
prior to integration with RESOLVE.
This message indicates that the potential results are not being calculated. In order to generate
the GAP potential results from RESOLVE, ‘Always calculate system potential’ must be enabled
for the GAP instance, as shown below.
The configuration screen can be accessed from the RESOLVE driver registration screen:
Drivers | Register Drivers on the main menu.
Click on the IPM-OS driver in the list and press the "Configure" button. The following screen will
be displayed.
Application executable path: In this field the directory in which the IPM executables are installed should be entered.
If this is left blank, RESOLVE will attempt to start the application from the directory in which the OpenServer executable (PXServer.exe) is registered.
For safety it is recommended to enter the directory from which the IPM executable is to be started
Application timeouts: When attempting to start up the application, RESOLVE will wait a certain period in case of difficulties.
If it is not able to start the application then RESOLVE will raise an error after the timeout period. For most cases a timeout of 30 seconds should be appropriate.
If a fairly slow machine is used then the user may want to make this value larger to avoid RESOLVE "timing out" unnecessarily.
Two different timeout durations can be specified, depending on whether the model is run on a single machine or over a cluster setup
The first step is to place an IPM-OS instance in RESOLVE by selecting the button or the
menu Edit System / Add client Program / IPM-OS and then clicking anywhere in the main
screen. This will create a module icon.
Note that the first time the module icon is created, it will appear as a PROSPER icon.
Later, after the IPM application has been defined, the icon will change depending on the application selected. For example, if the application selected is GAP, the icon will change to the GAP icon:
Once the IPM-OS module icon has been placed in the graphical view, it must be associated with a specific IPM model, and the options used to integrate the new module into the overall RESOLVE model must be set up.
This can be done through the "Edit Case Details" screen, which is obtained by double-clicking on the module icon.
IPM application: In this section it is possible to select the IPM tool to connect (GAP, PROSPER, MBAL, PVTP, RESOLVE, REVEAL)
IPM model: In this section details concerning the IPM model to run need to be entered:
File name: Name and full path of the model to run
"dave-8200").
Leave the space blank to run the application on the local
machine.
When entering file (case) names for remote machines, the file
name entered should be relative to that machine and not the
local machine
Use specified machine / use cluster: The machine name (above) is not used if "use cluster" is specified. In this case, RESOLVE will start the IPM model on some node of a cluster configured either with PxCluster or LSF (Platform). For more information on PxCluster (the Petroleum Experts cluster software), refer to the "Setting up a Cluster" section.
If a cluster is used, the following elements should be kept in mind:
Actions to take: In this section the user can define the actions the connected IPM application needs to carry out, for example entering data in the model or running calculations.
Type: Defines the action to take. There are two possible choices: DoCmd (run a calculation or perform a command, such as masking wells in a GAP model) and DoSet (enter an input in the model)
OS Statement: OpenServer variable associated with the command to perform or the input to enter in the model
Arg "i" field: This contains the arguments of the OS Statement variable.
For example: GAP.SOLVENETWORK(1) is the command to perform a Solve Network calculation in GAP with optimisation mode. In this case the OS Statement is GAP.SOLVENETWORK, whilst the argument is "1"
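The split between the OS Statement and its argument shown above can be illustrated with a small parser. This is a hypothetical helper for clarity, not part of RESOLVE:

```python
# Sketch: split a full OpenServer command such as "GAP.SOLVENETWORK(1)" into
# the OS Statement and its argument list.
def split_os_command(command):
    if "(" not in command:
        # No parentheses: the command is the statement, with no arguments.
        return command, []
    statement, _, rest = command.partition("(")
    body = rest.rstrip(")")
    args = [a.strip() for a in body.split(",")] if body else []
    return statement, args
```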
These variables can then be used for "Event driven scheduling", or for building up different
"Scenarios", for example.
Select the IPM-OS section to enter the corresponding variables and click on Edit Variables.
RESOLVE has access to virtually all the parameters input and output by the connected
applications. In order to import a variable, paste the corresponding OpenServer string in the
table, along with a user-defined variable name and reporting name.
Variable name: Name associated with the variable to input/output. This is chosen by the user and is used for reporting purposes
Reporting node: Name used to identify the model itself. This could be, for example, the name of a well or of a field. The name is chosen by the user and is used for reporting purposes
OpenServer string: OpenServer string corresponding to the variable to import/export.
For input/output parameters the string can usually be obtained directly from the program GUI by pointing at the value itself and using the combination CTRL + mouse right click. Alternatively, the OpenServer strings can be obtained from the OpenServer User Guide
Writable: Highlights whether the variable is read-only or can be changed by the user
Unit: Units corresponding to the imported variable
Delete: Table entries can be erased by clicking on the button under the Delete column
Example
In the following example the IPM-OS instance is a PROSPER well model. The variable imported
is the liquid rate from the System calculation:
Note that the variable is called Liquid_rate and the Reporting node is called well1.
After running a calculation, in the results it is possible to see the role of the variable name and
reporting node:
Models validity: As with all calculations, quality checking of the model before the simulation is run is extremely important.
It is recommended to quality check each connected model by running it standalone and making sure it yields consistent results before linking it within RESOLVE
2.5.3.1 Overview
The MOVE module allows the user to connect to the structural geological modelling software
package MOVE. This section describes the connectivity and functions of MOVE that can be
utilised via RESOLVE.
MOVE executable path: In this field, the user should enter the directory in which the MOVE executable is installed. This should be located in Program Files\Petroleum Experts\IPM 12.5\bin. If this is left blank, RESOLVE will attempt to start MOVE from the directory in which the OpenServer executable (i.e. PXServer.exe) is running. For safety, it is recommended to enter the directory from which the MOVE executable is to be started.
Base connection port: This is the connection port by which RESOLVE communicates with MOVE. The default port number (i.e. 52681) can be left unchanged unless this port is taken by another application.
Alternatively, MOVE can be added as a part of the IPM-OS instance on the canvas. This can be
done by placing the IPM-OS icon on the canvas and selecting MOVE as the IPM application. On
the Edit case details screen, open a MOVE model which will later be used in the Workflow, as
shown below.
Variables from the Stress analysis module can be transferred into RESOLVE by using
OpenServer strings with the following setup.
Pressure gradients can be imported from MOVE using the following OpenServer string e.g.
MOVE.StressAnalysis.Sigma1Gradient.
Select the Display Palette icon to open the Workflow item Palette.
Double-click on the new operation item to open the Perform operations window.
From the Select category of operation drop-down menu, select MOVE structural geology.
The MOVE visual workflow functions will now be listed in the Select operation drop-down
menu.
When the desired operation is selected from the list, the Create / edit an operation window
will update according to the chosen operation. Each operation requires inputs to be defined and
some require appropriate variables to be declared for saving an output into memory. Inputs and
outputs can be defined by the user once the Create / edit an operation window has updated.
This section of the manual will describe the purpose of each operation and provide information
about the expected inputs and outputs.
Description:
Copies all objects selected in the currently active MOVE project to the clipboard. The MOVE
window must be the active window for this command to work. If the MOVE window is not the active window the command will not run as expected.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Cuts all objects selected in the currently active MOVE project to the clipboard. The MOVE
window must be the active window for this command to work. If the MOVE window is not the
active window the command will not run as expected.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Pastes all objects on the clipboard into the currently active MOVE project. The MOVE window
must be the active window for this command to work. If the MOVE window is not the active
window the command will not run as expected.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Closes the specified view in the active MOVE project. The view is defined using the name of
the view, which is the name displayed in the tab in the MOVE project.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Creates a line along the base of all of the objects that are selected in the current MOVE project,
as performed by the Create Base Line option available in the MOVE context menu. The objects
must be selected using one of the selection operations prior to this operation being performed.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Creates a line along the top of all of the objects that are selected in the current MOVE project,
as performed by the Create Top Line option available in the MOVE context menu. The objects
must be selected using one of the selection operations prior to this operation being performed.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Creates a new view in the currently active MOVE project. The view is created as a new tab in
the project. The available view types are a 3D view, Map view, Section view, and Google Map
View.
When creating a new section view, the section to be opened in the new view must be selected
using the Select Section operation before this operation is performed.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Deletes an object from the specified MOVE project. The object to be deleted is identified using
the object's unique Id. In MOVE, the Id of an object can be found by right-clicking on an object
and selecting Object Properties from the context menu. In the Object Properties window, the
Id attribute is shown to the immediate right of the object Name.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Creates a duplicate of all objects selected in the specified MOVE project. The objects to be
duplicated must be selected in an operation that precedes this operation (see selection
operations). If no objects are selected prior to this operation, no objects will be duplicated.
Inputs:
Outputs:
Return Value:
OPTIONAL: This operation returns the Ids of all new objects. The return values can be assigned
to a single integer variable or to an array of integer variables if multiple objects are created.
Example of window:
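Because the return value may be a single Id or an array of Ids, a workflow that consumes it typically has to handle both cases. A minimal sketch of that handling (plain Python, illustrative only — not RESOLVE workflow syntax):

```python
def normalise_ids(returned):
    """Return a list of object Ids, whether one Id or many were returned."""
    if isinstance(returned, (list, tuple)):
        return list(returned)
    return [returned]

# One object duplicated -> a single integer Id.
print(normalise_ids(42))        # [42]
# Several objects duplicated -> an array of Ids.
print(normalise_ids([42, 43]))  # [42, 43]
```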
Description:
Exports the selected objects in a specified file format. The objects to be exported must be
selected in an operation that precedes this operation (see selection functions). If no objects are
selected prior to this operation, no objects will be exported.
Inputs:
[MOVE]
format The type of the file to be exported (e.g. .move or .segy).
[String] This is input as a string by the user.
Outputs:
Return Value:
Example of window:
Description:
This operation returns the name of the active view in the current MOVE project.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Retrieves the value of one Object Attribute for a specified object. The object is defined using the
objectId and the attribute is defined by a user-defined string.
Inputs:
Outputs:
Return Value:
This operation returns the value of the specified attribute. The return value is assigned to a
workflow variable.
Example of window:
Description:
Inputs:
Outputs:
Return Value:
This operation returns the value of the specified attribute. The return value is assigned to a
workflow variable.
Example of window:
Description:
Retrieves the value or values of a vertex attribute for a specified object in MOVE. The retrieved
value(s) are transferred into a RESOLVE FlexDataStore, which must be created prior to defining
this operation.
Inputs:
objectId The unique Id of the object from which the attribute is to
[int32] be retrieved.
attrName The name of the Vertex Attribute that is to be retrieved.
[String] This should be defined by the user as a string that exactly
matches the name of the Vertex Attribute.
Outputs:
Return Value:
This operation returns the specified vertex attribute(s) from the specified object. The return value
or column of values are assigned to a FlexDataStore object.
Example of window:
Description:
Retrieves the X, Y, and Z coordinates of the vertices of a specified object in MOVE. The
retrieved values are transferred into a RESOLVE DataSet, which must be created prior to
defining this function.
Inputs:
objectId The unique Id of the object from which the vertices are to
[int32] be retrieved.
Outputs:
Return Value:
This operation returns the vertices (X, Y, and Z coordinates) from the specified object. The
return values are assigned to a DataSet object.
Example of window:
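As an illustration of the data this operation transfers, the DataSet receives one row of X, Y and Z values per vertex. A minimal sketch (plain Python; the coordinate values are hypothetical, not MOVE data):

```python
# Hypothetical vertex table as it might arrive in a DataSet:
# one (X, Y, Z) row per vertex of the object.
vertices = [
    (1250.0, 3400.0, -2100.0),
    (1260.0, 3410.0, -2095.0),
    (1270.0, 3425.0, -2090.0),
]

# Split the rows into X, Y and Z columns for further processing.
xs, ys, zs = (list(col) for col in zip(*vertices))
print(xs)  # [1250.0, 1260.0, 1270.0]
```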
Description:
This operation returns the names for all of the views in the current MOVE project.
Inputs:
Outputs:
Return Value:
The names returned by the operation are assigned to an array of string variables.
Example of window:
Description:
Retrieves the value of a property from a toolbox in MOVE. The function retrieves a single
value. Where a toolbox property comprises multiple values, a single value can be retrieved
by declaring the column number and row number (indexes) of the desired property. The
retrieved value is transferred into a RESOLVE Workflow Variable, which must be created
prior to defining this function.
Inputs:
Outputs:
Return Value:
This operation returns the value of the specified tool property. The return value is assigned to a
workflow variable.
Example of window:
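Where a toolbox property holds a table of values, the column/row indexing described above works as in this minimal sketch (plain Python; the table contents and 0-based indexing are illustrative assumptions, not MOVE data):

```python
# Hypothetical table of toolbox property values.
table = [
    [250.0, 10.0],   # row 0
    [300.0, 12.5],   # row 1
]

def get_property(table, column, row):
    """Return the single value at (column, row); indexes are 0-based here."""
    return table[row][column]

value = get_property(table, column=1, row=1)
print(value)  # 12.5
```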
Description:
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Opens a MOVE project using the path to the MOVE file. A MOVE object needs to exist in
RESOLVE. If the MOVE object is empty, it will be populated with the file path specified by
the user. If the MOVE object already contains a file path, this will be replaced (overwritten)
with the file path specified by the user.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
A MOVE object in RESOLVE can be linked to a MOVE project on disk. This function opens the
MOVE project linked to a MOVE object.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Redoes an undo operation. This operation must be used after at least one undo operation has
been performed.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Changes the name of an object in a MOVE project. The object is identified using the unique Id.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Retrieves the unique ID numbers for any objects that are selected in the current MOVE project.
Objects must be selected prior to this operation (see selection functions). If no objects are
selected, no IDs will be returned.
Inputs:
Outputs:
Return Value:
This operation returns the ID numbers for all objects selected in the current MOVE project. The
return values are assigned to a Workflow Variable object.
Example of window:
Description:
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Saves any objects selected in the current MOVE project to a new location. The selected objects
are saved as a MOVE file. Objects must be selected prior to this operation (see selection
functions). If no objects are selected, no objects will be saved.
Inputs:
Outputs:
Return Value:
Example of window:
Operation Name: Select all MOVE objects with IDs supplied in an array
Description:
Selects multiple objects in the current MOVE project. Objects to be selected are identified using
an array of unique ID numbers. The array is a Workflow Variable that needs to be created prior
to defining this function.
Inputs:
Outputs:
Return Value:
Example of window:
Operation Name: Select all MOVE objects with names supplied in array
Description:
Selects multiple objects in the current MOVE project. Objects to be selected are identified using
an array of object names. The array is a Workflow Variable that needs to be created prior to
defining this function.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Selects any objects in the current MOVE project with a supplied name. Since multiple
objects in MOVE can have the same name, this operation could result in multiple objects
being selected.
Inputs:
[MOVE]
objectName The name of the object to be selected.
[String]
If declaring the name of the object explicitly, the string
should be in quotation marks (e.g. "Fault1"). If referring
to a workflow variable that contains the name of the
object the string should not be in quotation marks (e.g.
VariableName).
Outputs:
Return Value:
Example of window:
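The quoting convention described above — a literal name in quotation marks, a workflow variable referenced without them — can be illustrated with a short sketch (plain Python, not RESOLVE workflow syntax; the names are hypothetical):

```python
# A workflow variable that happens to hold an object name.
VariableName = "Fault1"

def resolve_name(token, variables):
    """Return the object name: strip quotes from a literal token,
    otherwise look the token up as a workflow variable name."""
    if token.startswith('"') and token.endswith('"'):
        return token[1:-1]
    return variables[token]

print(resolve_name('"Fault1"', {"VariableName": VariableName}))      # literal form
print(resolve_name('VariableName', {"VariableName": VariableName}))  # variable form
```

Both forms resolve to the same object name, `Fault1`.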
Description:
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Selects the defined section. This operation often precedes one of the other view operations,
such as Create View when creating a new section view.
Inputs:
[MOVE]
name The name of the view to be selected. This is the name
[STRING] that appears in the tab at the top of the view in MOVE.
Outputs:
Return Value:
Example of window:
Description:
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Sets the active view in the current MOVE project. The view to be activated is declared using the
name variable. The declared view will become visible in the MOVE project.
Inputs:
[MOVE]
name The name of the view to be activated. This is the name
[STRING] that appears in the tab at the top of the view in MOVE.
Outputs:
Return Value:
Example of window:
Description:
Assigns a value to an attribute for a specified object. The object is defined using the object Id
and the attribute is defined by a user-defined string.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Inputs:
Outputs:
Return Value:
Example of window:
Description:
This operation assigns a user-defined value to a property in a MOVE toolbox (e.g. defines
displacement = 250m).
Inputs:
Sigma1Stress not
MOVE.StressAnalysis.Sigma1Stress). If using indices
to access a value in a table of properties, the toolbox
property name should be the index command (e.g.
DataTable_Indexed).
Outputs:
Return Value:
Example of window:
Description:
Defines Vertex Attribute values from a FlexDataStore object. The Vertex Attribute value(s) are
assigned from a RESOLVE FlexDataStore, which must be created prior to defining this function.
Inputs:
Outputs:
Return Value:
Example of window:
Description:
Inputs:
Outputs:
Return Value:
Example of window:
Before the driver can be used effectively, it must be configured for use with REVEAL.
The configuration screen can be accessed from the RESOLVE driver registration screen:
Drivers | Register Drivers on the main menu.
Click on the REVEAL driver in the list and press the "Configure" button. The following screen
will be displayed.
REVEAL In this field the directory in which the REVEAL executable is installed should
executable be entered.
path If this is left blank, RESOLVE will attempt to start REVEAL from the same
directory as the one in which the OpenServer executable (PXServer.exe) is
running.
For safety it is recommended to enter the directory from which the REVEAL
executable is to be started.
Application When attempting to start up REVEAL, RESOLVE will wait for a certain
timeouts period. If RESOLVE is unable to start REVEAL within this period, it will
raise an error after the timeout. For most cases a timeout of 30 seconds
should be appropriate.
If a fairly slow machine is used then the user may want to make this value
larger to avoid RESOLVE "timing out" unnecessarily.
Two different timeout durations can be specified, depending on whether the
model is run on a single machine or over a cluster setup
This can be done through the "Edit Case Details" screen that can be obtained by double-
clicking on the module icon.
Case Details
The filename should be visible to all the specified nodes of the cluster
(which, by default, is all the nodes). This means that the file name should
normally be a UNC name.
A subset of cluster nodes and the software executables can be selected
by clicking on "setup". By default, the IPM software is assumed to be
located on the remote nodes in the same path as the local machine, but
this can be overridden from this screen.
Whether LSF or PxCluster is used, RESOLVE will have to locate a node which
is not busy before starting REVEAL. If there are no free nodes, the startup of
REVEAL will eventually time out. The timeout period can be set in the
"Configuration screen"
IPR This enables the user to select which type of calculation is used in the simulator
Model to provide each well's inflow performance to the surface network model.
region (non- method. The block IPR may exhibit some curvature (e.g. due
linear) to non-Darcy effects). This IPR method scales the block IPR
such that it passes through the drainage region pressure and
the last known operating point, thus preserving the IPR
curvature.
Load all This option enables the user to load different input streams for each completion
completion defined in REVEAL
data for wells
Associated Data
The following screen will be displayed when selecting this tab:
Well When REVEAL is connected through RESOLVE, the connected wells can be
Control controlled by:
Fixed rate (total liquid rate, or gas rate for gas reservoirs),
Fixed bottom hole pressure
Fixed manifold pressure
System response
The third option is available only when REVEAL is connected to GAP, and is
generally to be preferred to the two previous ones as the lift curve will describe
some of the system response of the surface network and so will reduce the
explicitness of the system. If fixed manifold pressure is selected, it is essential to
include the same lift curves that are in the GAP model in the REVEAL model.
The system response option enables RESOLVE to automatically select which of the
three previous options is the most suitable to control the well in order to reduce the
impact of the system explicitness.
Once the type of well control has been selected, it is possible to define the
behaviour of the wells in case they are shut-in.
This can be done by using the "Edit well shut-in behaviour" option.
This screen enables the user to select between two different ways of closing a well:
If the well is SHUT, it is assumed that the well is closed at the perforation
level and that there is therefore no crossflow potential at reservoir level.
If the well is STOP, it is assumed that the well is closed at the wellhead level
and that there is therefore crossflow potential at reservoir level.
Using this screen, the user can define, depending on the reason for which the
well does not produce, whether the well has to be SHUT or STOP.
The "Set Global Flags" section enables this choice to be made for ALL wells, whereas
the table at the top lists all the wells and allows the choice to be made on a well-by-well
basis
Run By default, REVEAL simulations are started from t = 0. If a restart file has been
REVEAL saved in the simulation case, then this control can be used to start from the
from restart date chosen.
The "Load / Refresh" button enables the user to reload the REVEAL case, which can be
quite useful if the reservoir model has been modified. Please note that RESOLVE
will automatically reload all the client modules prior to starting a run UNLESS
the user has specified otherwise in the general RESOLVE options
This tab automatically becomes available when a REVEAL model with the Water Chemistry
option enabled is loaded into RESOLVE. Further information on the tag data can be found with
the description of the Water Chemistry objects.
This tab automatically becomes available when a REVEAL model with a detailed well with
multiple strings (e.g. secondary tubing or annulus paths) is loaded into RESOLVE. Data
Providers can be connected to 'inputs' of GAP (e.g. a source or producing well) and Data
Consumers can be connected to 'outputs' (e.g. a sink or injecting well).
These variables can then be used for "Event driven scheduling", or for building up different
"Scenarios", for example.
Select the Reservoir section to access the REVEAL variables and click on Edit Variables.
REVEAL allows variables associated with wells AND with completions to be published in
RESOLVE.
The "Well Data" section enables the user to automatically publish the well results.
The following procedure can be used to do so:
Step 1 In the well list on the left hand side of the screen, select the well to consider
Step 2 Click on the "+" sign: the list will expand and all the variables that can be
published for this specific well will be displayed
Step 3 Select the variables to import in RESOLVE and click on the red arrow: this will
add them to the right hand side list, which summarises the variables that have
been published into RESOLVE
Step 4 To publish a specific variable in RESOLVE for ALL the wells in the REVEAL
model, select the variable in the "All Wells" drop-down box and click the red
arrow next to it. This will update the right hand side list, which summarises
the variables that have been published into RESOLVE
The "Well Completions" section enables the user to publish completion results within RESOLVE.
The following procedure can be used to do so:
Step 1 In the Variable Name section, enter the name of the variable to import - This
name is user-defined
Step 2 In the Well section, a drop-down box allows the user to select which well
the variable to publish is associated with
Step 3 In the Completion section, a drop-down box allows the user to select which
completion the variable to publish is associated with
Step 4 In the Variable section, a drop-down box allows the user to select the variable
to be reported
Step 5 The variable Unit will then be automatically reported
In the example below, the oil rate of completion 1 of the Horizontal well defined in REVEAL will
be published in RESOLVE.
The following assorted functions are available in the REVEAL driver. These functions can be
accessed for instance by right-clicking on the REVEAL module icon in the graphical view.
Toggle Select This enables the user to select the item considered or all its "children" items
(This Item / All
Children)
Select All This enables the user to select / unselect the module considered and all its
Children / children items
Unselect All
Children
Save Case Saves the REVEAL case to the current file name
Reload Case Reloads the case from file: this can be useful if the file considered has been
edited.
It is important to note that RESOLVE will always reload the file before a
simulation run, unless told otherwise in the simulation options
Display This enables the user to select whether to display only the module icon OR the module
Child Icons / icon and all children icons OR the module icon and only the connected icons.
Do Not This is particularly useful when large models with numerous sources / sinks
Display are used: displaying all the items in these cases can alter the clarity of the
Child Icons / graphical view of the RESOLVE model.
Display
Only By default, RESOLVE will display the module icon and the child icons. If the
Connected number of child icons is too large, then RESOLVE will only display the module
Child Icons icon. A red cross will then be displayed on the module icon to indicate that it
has child icons associated but that they have not been displayed
Change Label / This enables the user to change the label or alias associated with the module
Alias considered
Do not send / This is only available when models with compositional data are considered.
receive EOS These options enable the user to block the flow of compositional data from or to the
property data module specified
Tabular well This option allows the display and editing of all the individual well data in a
data single table.
The control mode and IPR model can be changed from this table on a per-
well basis. It is sometimes appropriate to set different wells to use different
control modes or IPR models, but generally the global defaults are to be
preferred.
The drop down boxes at the bottom of the screen may be used to change the
settings of all the wells simultaneously
Test This simply tests whether PXServer.exe is registered with the operating
PXServer system and returns the full path of the registered application
In general terms, there are no special requirements that a REVEAL model has to fulfill in order
to be used by RESOLVE, but the following elements can nevertheless be considered:
Well RESOLVE controls the schedule of the combined run. This means that
Scheduling scheduling of wells that are connected in RESOLVE will be handled by the
top level schedule in RESOLVE. Wells that are not connected in RESOLVE
will have their REVEAL schedules honoured
Start Dates The RESOLVE start date and REVEAL start date may be different. If they are
the same RESOLVE will start REVEAL at the start of its run.
If the RESOLVE start is before the REVEAL start, then RESOLVE will shut the
reservoir in until it reaches the REVEAL start date. RESOLVE will
automatically put in a timestep to coincide with the start of the reservoir
schedule.
If the RESOLVE start is after the REVEAL start, then REVEAL will run a
simulation/history until the RESOLVE start date before RESOLVE takes
control
Lift Curves Lift curves in REVEAL will be ignored unless the wells are controlled with a
fixed manifold pressure
Well Type Wells will be set to production or injection wells depending on the item that
Setup they are connected to.
For example, if a REVEAL well is connected to a water injector in GAP, then
it will become a water injector in REVEAL
Fluid Re- Fluids that are injected into REVEAL may not be at the same temperature as
Injection the reservoir.
It is important in these cases to have a fully thermal PVT model defined in
REVEAL for these fluids
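The start-date handling described above reduces to three cases, depending on how the RESOLVE and REVEAL start dates compare. A sketch of that logic (plain Python; the dates are simplified to comparable values, and the returned strings merely paraphrase the manual's description):

```python
def reveal_startup(resolve_start, reveal_start):
    """Describe how RESOLVE handles the REVEAL start for each case."""
    if resolve_start == reveal_start:
        return "start REVEAL at the start of the RESOLVE run"
    if resolve_start < reveal_start:
        return "shut the reservoir in until the REVEAL start date"
    return "run a REVEAL simulation/history up to the RESOLVE start date"

print(reveal_startup(2015, 2015))
print(reveal_startup(2014, 2015))
print(reveal_startup(2016, 2015))
```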
The following list summarises the resource requirements for Eclipse to run under RESOLVE.
1. Eclipse licences are required as they would be if Eclipse was running standalone. For
example, an E300 model would need access to a compositional licence. Other models
may require parallel licences, multisegmented wells licences, and so on
2. Each Eclipse model under RESOLVE will require its own set of licences. In other words, 2
Eclipse models in a RESOLVE model will require 2 Eclipse licences
3. For RESOLVE, each Eclipse model will also lock an OpenEclipse licence. These are
available from Schlumberger. To avoid any doubt, it is the runtime licence, rather than
the development licence, that is required
4. Appropriate hardware must be available to run Eclipse.
While RESOLVE can run only under the Windows environment, it is possible to set up
the Eclipse model to run either under the Windows environment or under the Linux
environment.
For very large Eclipse models, it may not be possible to run them under the Windows
environment, in which case the Eclipse model can be set up on a Linux system.
5. The protocol used for communication between RESOLVE and Eclipse is MPI. Various
flavours of MPI can be used, as explained below. However, an appropriate installation of
MPI must be configured on the machine(s) in question in order for the connection to work.
Most, but not all, supported MPI flavours come free with the Eclipse software
When RESOLVE executes Eclipse, it does so by directly calling the Eclipse executable, and not
by calling the usual Eclipse macros. This means that the licence information for Eclipse may be
missing, leading to a licence error.
Starting from version 2009 Eclipse uses Intel MPI, which comes free with Eclipse installation.
Note that the Windows versions of MPI do not allow cross-platform connections: they connect to
Eclipse running on the same machine as RESOLVE, or to another Windows machine on the
network. To connect to Eclipse running on Linux, MPI must be installed and configured
on the Linux side.
RESOLVE (depending upon the licence) can be 32-bit or 64-bit. Even if 32-bit RESOLVE is
used, it can still connect to 64-bit Eclipse. This applies to Eclipse runs on both Windows
and Linux systems.
See also the section dealing with connections to Eclipse when running under Citrix.
2.5.5.2.1 Setting up Intel MPI
This section describes the steps to setup MPI on a Windows machine.
Alternatively, MPI can run under the appropriate account. The account is registered using
the wmpiregister utility, which should have been set up in the Intel MPI program group. An
encrypted password for the account is then stored.
The above path should be included in the PATH variable automatically during the MPI
installation. In some (very rare) cases it was noticed that the path was not included or was
incorrect; in that case a call to MPI will fail.
The PATH variable can be verified using the MPI configuration wizard (ECLCONFIG.EXE).
RESOLVE will highlight in the Eclipse configuration wizard if the PATH is incorrect, in which
case the PATH variable should be corrected manually.
2.5.5.2.2 MPI test wizard
RESOLVE includes a test wizard for MPI, which will check that the environment variables
and the path to MPI have been properly set up. It will also test whether Eclipse can be
started (note that this does not require an Eclipse licence).
The following screen will help to determine whether the relevant MPI service (smpd.exe)
is running or not.
The smpd.exe service should be running as part of the installation of MPI. If this is not the case,
then the service should be started manually.
The next screen enables the testing of whether Eclipse can be run or not. There is no need to
have an Eclipse license available on the computer to do so.
When doing the tests for the first time, it is useful to select the 'View Output of MPIExec
Command (notepad)' option, as this will ease the diagnosis should Eclipse fail to start.
Click on 'Test Eclipse startup' and this will test whether Eclipse can be started from
RESOLVE.
If a Windows firewall is set up on the machine that is used, it is possible that a firewall
message will be displayed, asking whether the start-up of the Eclipse software should be
blocked or not. The user has to confirm that the Eclipse software can be launched by
selecting the 'Unblock' option
When this setup is complete, clicking on 'Next' will bring the user to a screen similar to
the previous one, but specifically designed to configure remote connections with Eclipse
(i.e. Eclipse running on one PC and RESOLVE on another PC). The same procedure as
described above can be used to do so, with MPI being installed on both machines
considered
2.5.5.2.3 Setting up the Linux MPI software
Before the driver can be used, it must be configured for use with Eclipse. Separate settings are
held for Eclipse, E300, and DeadOil Eclipse.
This screen can be accessed from the RESOLVE driver registration screen (Drivers | Register
Drivers on the main menu) and allows the setup of various global options related to the Eclipse
link(s).
Input fields
Eclipse The Eclipse100 and Eclipse300 drivers can both be configured to loop
license while waiting for a license.
Essentially, RESOLVE will try to obtain an Eclipse licence and, if it fails,
will continue trying for a certain number of attempts with a fixed time delay
between each attempt.
In this part of the screen enter the maximum number of retries before
RESOLVE gives up and the time delay between each retry. Note that
there is no way of aborting the process and so one should be careful
about entering large values for the maximum number of retries.
Note also that the default behaviour of Eclipse is to check the license
after the grid file has been read. This can cause the loop to run rather
slowly. In addition, one should be careful about entering too short a time
delay between retries as Eclipse will flag an error if the log file is not
unlocked before the next attempt is made.
These options should not be used unless there is a very good reason to
do so
Communication Here the mode of communication between RESOLVE and Eclipse can
be specified. These settings are only relevant to the use of Eclipse
Local program If the path to eclipse_mpi.exe is in the $PATH environment variable then
executable path this version will be executed. If it is not in the path, then the path to
eclipse_mpi.exe/eclipse_ilmpi.exe (or e300_mpi.exe for E300 runs)
must be entered here
Linux options The timeout for the process start is supplied here. It applies to Linux
systems, and the default is 300 seconds.
Debugging The option Log timings at end of run generates a large amount
of performance data for the run, which can be helpful if benchmarking is
being performed
The following elements should be kept in mind when running Eclipse through RESOLVE under
Citrix.
Typically, RESOLVE will be an application that is published under the Citrix framework. Eclipse
does NOT need to be published under the Citrix framework to be used by RESOLVE.
Eclipse running on the This is equivalent to a standard local installation, where RESOLVE
Citrix server (but not and Eclipse run on the same machine. When RESOLVE is run,
published) Citrix effectively "logs on" to the server under the user account of
the person running RESOLVE, and so any log on environment
settings will take effect in the Citrix session.
July, 2021
RESOLVE Manual
User Guide 212
This document provides recommendations on how Eclipse models should be set up to achieve
good results when coupling to GAP models through RESOLVE.
In the case of Eclipse, there is another reason for this advice. RESOLVE
must control the wells based on the rates and pressures which are being
calculated by the surface network. Control keywords (such as
GCONPROD) in the Eclipse model can interfere with the control of wells
by RESOLVE. Although RESOLVE attempts to remove any superfluous
control by Eclipse, it is not possible to guarantee that there will never be
interference as it is not possible to test all possible combinations of
keywords against the OpenEclipse interface that RESOLVE uses
Setting up an The advised changes all affect the SCHEDULE section.
Eclipse deck
Ideally, the predictive (as opposed to history) part of a schedule section for
an Eclipse model to be coupled to GAP should consist of only the
following:
Any engineering, reporting or restart options that are required, e.g.
RPTSCHED, DRSDT.
Well definitions through WELSPECS and COMPDAT
WCONPROD and WCONINJE keywords to open a well with some rate.
The rate that is specified in the Eclipse deck will be overwritten by
RESOLVE. Since the WCONPROD and WCONINJE keywords are
replaced, if the well is defined as closed in the Eclipse deck, it will be
opened by RESOLVE.
A single DATE to define the end of the forecast, followed by the END
keyword.
The date defined should be such that the simulation period (in the
Eclipse Deck) is greater than 60 days. For example, if the Eclipse start
date (defined in the Eclipse data deck) is specified as "01 JAN 2012",
then the end date defined must be about "01 APR 2012" or later. This
is a prerequisite for performing the scaling operation.
The keyword NOSIM should not be included in the datadeck. (The
NOSIM keyword would tell Eclipse that there is no simulation that needs
to be performed and would terminate the Eclipse run)
In addition, it is advised that lift curves be removed from the Eclipse model
if the connection is to be made at the sandface. This is because Eclipse
will continue to calculate a THP for whatever rate is calculated by the
network, and Eclipse will not allow this pressure to become negative or
violate some upper limit (for an injector). Clearly, this should not happen if
the network and the Eclipse lift curves are consistent. Nevertheless, there
remains the possibility that an Eclipse model which does not have
consistent lift curves with the network will violate the limit and apply an
unwanted THP control to the well(s) in question.
--------------------------------------------------------------------------
SCHEDULE
--------------------------------------------------------------------------
TUNING
/
/
1* 1* 50 /
DRSDT
0 /
RPTRST
"BASIC=4" "NORST" /
RPTSCHED
"RESTART=2" "WELLS=2" "WELSPECS" "CPU=2" /
INCLUDE
"lift1.ecl" /
INCLUDE
"lift2.ecl" /
INCLUDE
"lift3.ecl" /
INCLUDE
"lift4.ecl" /
INCLUDE
"lift5.ecl" /
WELSPECS
-- wellname groupname I  J  ref. BHP  preferred phase
P1 GO1   14 64 1830 "OIL" /
P2 GO1   26 69 1788 "OIL" /
P3 GO2   32 23 1755 "OIL" /
P4 GO2   28 38 1755 "OIL" /
I1 GOINJ 11 90 1*   "WATER" /
I2 GOINJ 32 48 1*   "WATER" /
/
COMPORD
P4 INPUT /
/
COMPDAT
"P1" 14 64  5  5 "OPEN" 1*   0.624 0.124    27.542 2* "Y" 0.659 /
"P1" 14 64  6  6 "OPEN" 1*  21.921 0.124  1826.370 2* "Y" 5.383 /
"P1" 14 65  6  6 "OPEN" 1*   2.001 0.124   173.384 2* "Y" 6.440 /
"P1" 14 65  7  7 "OPEN" 1*  20.639 0.124  1365.221 2* "Y" 2.145 /
-- ------------------------------------------------
"P2" 34 21 12 12 "OPEN" 1*   0.157 0.124    27.136 2* "X" 4.288 /
"P2" 35 19 16 16 "OPEN" 1*  13.458 0.124  2520.052 2* "X" 9.507 /
"P2" 36 19 16 16 "OPEN" 1*   7.603 0.124  1411.776 2* "X" 8.735 /
"P2" 36 19 17 17 "OPEN" 1*  21.706 0.124  4053.938 2* "X" 9.262 /
"P2" 36 19 18 18 "OPEN" 1*   3.970 0.124   741.744 2* "X" 9.306 /
-- ------------------------------------------------
"P3" 28 38  2  2 "OPEN" 1*   4.317 0.124   687.251 2* "Y" 2.114 /
"P3" 29 37  5  5 "OPEN" 1*   0.001 0.124     0.222 2* "X" 7.818 /
"P3" 29 37  6  6 "OPEN" 1*   9.501 0.124  1706.844 2* "X" 6.323 /
"P3" 29 37  7  7 "OPEN" 1* 268.140 0.124 48033.410 2* "X" 6.153 /
"P3" 30 36  7  7 "OPEN" 1*   2.358 0.124   416.195 2* "X" 5.354 /
"P3" 30 36  8  8 "OPEN" 1* 235.539 0.124 42575.227 2* "X" 6.711 /
"P3" 32 35 12 12 "OPEN" 1*   0.139 0.124    25.960 2* "X" 9.019 /
"P3" 33 35 12 12 "OPEN" 1*   0.833 0.124   122.008 2* "X" 1.068 /
"P3" 33 35 14 14 "OPEN" 1*   6.852 0.124  1204.486 2* "X" 5.141 /
"P3" 34 35 14 14 "OPEN" 1*   0.175 0.124    32.524 2* "X" 8.836 /
-- ------------------------------------------------
"P4" 26 69  3  3 "OPEN" 1*   0.188 0.124    35.430 2* "X" 9.905 /
"P4" 27 68  8  8 "OPEN" 1*   9.168 0.124  1691.233 2* "X" 8.186 /
"P4" 28 68  8  8 "OPEN" 1*   1.146 0.124   212.869 2* "X" 8.769 /
"P4" 29 68  8  8 "OPEN" 1*   0.988 0.124   170.147 2* "X" 4.238 /
"P4" 29 67  8  8 "OPEN" 1*   1.130 0.124   180.188 2* "X" 2.146 /
/
WCONPROD
"P1" "OPEN" "ORAT" 4* 1* 1* 1* 2 100000 /
"P2" "OPEN" "ORAT" 4* 1* 1* 1* 4 100000 /
"P3" "OPEN" "ORAT" 4* 1* 1* 1* 6 100000 /
"P4" "OPEN" "ORAT" 4* 1* 1* 1* 8 100000 /
/
WCONINJE
"I1" "WAT" "OPEN" "RATE" 3975 1* 420. /
"I2" "WAT" "OPEN" "RATE" 3975 1* 400. /
/
DATES
1 "JAN" 2038 /
/
END
As explained in the Control Data section, it is possible for wells which are
not controlled by RESOLVE to be under control by Eclipse (group or well
control). In this case the scheduling of events for these wells would have to
be entered into the SCHEDULE section, as normal
Well Control: Constraints in the schedule data should be avoided as these can
interfere with the control by RESOLVE. Note that if Eclipse is attached to GAP,
it is best to put any constraints in the GAP file where they can be "seen" by the
optimiser.
Wells that are not controlled by RESOLVE (i.e. they are not connected to
data acceptors in the RESOLVE system) will be controlled purely by the
Eclipse schedule data (i.e. this allows, for example, voidage injection wells
to be set up). In addition, group controls can be overwritten by RESOLVE
from the interface
THP Control: THP control of wells is available from the Eclipse drivers. In this
case, the pressure at the "top" of the lift curve is passed between the applications.
For consistency, the GAP system and the Eclipse system must have the same lift
curves; note that GAP is able to import Eclipse lift curve formats.
July, 2021
RESOLVE Manual
User Guide 216
An artificially lifted well cannot be controlled by tubing head pressure (BHP
and rate controls are still available).
See also the notes below regarding IPR generation and scheduling.
IPR Generation: Note also that the IPR generated by Eclipse is generated at the
reference depth for the well in question. The well reference depth is set with the
WELSPECS keyword. It is important that the lift curve used by GAP, regardless
of the well control mode in Eclipse, is generated to this reference depth to ensure
the continuity of the model.
Drawdown constraints: As with the well control note above, drawdown constraints
should be placed in the GAP model where they can be seen by the optimiser.
Drawdown constraints in the Eclipse deck (i.e. entered using the WELDRAW
keyword) can interfere with the control of E300 runs from GAP. This should not
be a problem with E100, where the constraints can be overridden programmatically
from RESOLVE.
Scheduling: The Eclipse link attempts to honour the schedule in the Eclipse deck
for the wells.
The caveat here is that all wells must be "declared" at the start of the run
using the WELSPECS keyword in Eclipse. They must also be declared as
producers or injectors with the WCONPROD and/or WCONINJE keywords.
They do not need to be open from the start - it is acceptable for the wells to be
initially SHUT or STOPped before being opened up at a later time.
If a well is closed by the connected program (e.g. GAP) then there are two
possibilities. The first is that the well will be opened at the start of each
RESOLVE timestep to generate the inflows for GAP and then started or
stopped again based on the GAP calculation: in other words, RESOLVE
will keep trying to restart the well. Alternatively, the well will be abandoned
and either shut or stopped in the simulation. This is set on the
"Miscellaneous" tab of the "Edit case" screen.
and type ps. The reported Eclipse process can then be seen. If it is not present,
it will be necessary to kill the RESOLVE run by hand (e.g. from Windows Task
Manager).
Notes on OpenEclipse
RESOLVE communicates with Eclipse using OpenEclipse. This is the open architecture for
Eclipse, allowing an external application to control and interrupt Eclipse runs.
When an Eclipse instance is loaded into RESOLVE, OpenEclipse initialises the Eclipse run
according to the information in the data deck (i.e. it either equilibrates the reservoir or loads a
pre-saved restart file) and runs it to the start of the first timestep. The instance of Eclipse is then
queried by RESOLVE for the sources and sinks - these will be the wells in the simulation that
exist at this time. For this reason, it is important that the wells that are to be present in the
coupled run are declared at the start of the run.
If the Eclipse data deck is quite large, it may therefore take some time for the case to be loaded
as Eclipse is actually running the first part of the simulation.
Once the case has run to the end (or the user has terminated the run with the "stop" button from
RESOLVE), Eclipse will terminate. Note that this is different behaviour to GAP, REVEAL and,
for example, Hysys, as these applications stay in memory until RESOLVE is exited. When the
user starts a new run Eclipse will again have to perform the initialisation described above; thus
repeated runs may take longer to initialise than the original run.
To create an instance of Eclipse in the RESOLVE system, click on Edit System | Add Client
Program | Eclipse OR Eclipse300 OR DeadOil Eclipse from the RESOLVE main menu.
If the Eclipse entry does not exist in the menu, then the drivers need to be registered with the
RESOLVE application.
See the "Driver Registration" section to do so.
In order to open and/or run an Eclipse case, the Eclipse connection needs to be
configured. Refer to the "ECLCONFIG.EXE", "Setting up MPI" and "Running an Eclipse
instance on a remote computer" sections for further information on how to set up an
Eclipse connection on both local and remote machines.
Once the icon is created on the RESOLVE interface, double-click on the icon to enter the load/
edit case screen.
The following screen is displayed and is split into three tabbed sections:
Case Details: This tab has information on the Eclipse data file and the machine
on which Eclipse runs.
The Advanced section at the bottom of the screen contains some options that have been
introduced to cope with exceptional circumstances in the model. Normally they should not be
changed. More information regarding these settings can be found in the "Eclipse Driver:
Advanced Options" section.
The Start button will start the Eclipse simulation (or stop it if it is already running). It will be
necessary to start Eclipse in order to generate the well icons in RESOLVE, although the startup
may take some time. Once the wells have been created in RESOLVE, it will not be necessary to
start Eclipse unless a run is being performed.
The OK button will save the settings that have been made without running Eclipse. Care should
be taken if the data file contents have been changed compared to the well representation on
the RESOLVE screen. If changes have been made, or the data file itself has been changed,
Start should be used to refresh the RESOLVE screen.
File Name: This is the data file name. It should be an Eclipse100 or 300 case
depending on the driver being run.
Note that the data file must not contain any syntax errors, as these will
cause Eclipse to terminate prematurely.
If the run is a remote run, the file name specified should be the file as seen
by the remote computer (i.e. relative to the root of the remote computer file
system).
Windows shared path to data file: This is optional. If it is supplied, it should be a
Windows path to the data file given above, for the case where the path given is not
visible from the machine running RESOLVE (notably if the model is running on
Linux). It is used in the following ways:
1. To allow the IFM model catalogue to locate and store Eclipse models
located on other systems or architectures than the local system.
2. To allow the viewing of the data, log, and prt files from the local
workstation
Parallel run: Parallel Eclipse is supported by RESOLVE. If this box is checked
then the "Parallel options" button will be displayed.
Machine: This is the name of the host on which Eclipse100, Eclipse300 or DeadOil
Eclipse is to be run. This is required for remote runs.
A remote Windows machine must be "visible" to the MPI protocol (i.e. the
MPIStartupServer or similar Intel MPI service must be running on the remote
machine). A remote Linux machine does not have to be visible to MPI, but it
will be necessary to fill in the port number and location of the run factory
(controller) machine. See the "Running an Eclipse instance on a remote
computer" section for further information.
This is a remote Linux Run: If a remote Linux machine is used, this tick-box
should be selected, and the port number and controller machine name should be
specified.
Send data file to client: This may be used if the file specified under "File Name"
is not visible to the remote computer. This option is not recommended. The data
file will be sent to the remote instance of Eclipse by the driver. For most models
this can be quite slow to initialise (although the actual forecasting time is
unaffected).
Remote working folder: If sending the data to the client (above), this allows the
working folder on the remote machine to be specified. All Eclipse output files will
be written to this directory. The path must exist on the remote machine, otherwise
the Eclipse initialisation will fail.
Read "best This enables to display the best practice document that describes the best
practice" way to setup an Eclipse model in order to link it with a GAP model through
document RESOLVE.
See the "Best Practice" section for further details
PVT File: Eclipse300 only.
OpenEclipse does not allow the EOS properties to be read dynamically,
so this data is parsed from the data deck.
In this edit field the name of the file in which the EOS properties are held
should be entered; this may be the same as the file entered under "File
Name".
Note that INCLUDE files are not read. Note also that, if no properties are
read (e.g. if no file is supplied), black oil properties will still be passed from
the simulator to the receiving model.
To do this, the "Parallel run" check box should be clicked on the main "case details" screen.
This will enable the options to run parallel Eclipse on Windows. To run on Linux, the option "This
is a remote linux run" will need to be selected in addition to the "Parallel run" option. For either
option, the "Parallel options" button should then appear, which when pressed invokes this
screen.
Manual Machine Allocation: In this case, the machines on which the Eclipse
models are run are selected by the user; the "LSF will be used to allocate
suitable nodes" option is not selected.
The number of processors to be used in the parallel run, along with the
machines in the cluster, should be entered here.
Note that the number of processors MUST be the same as that set up in the
Eclipse model (using the "PARALLEL" keyword, for example), otherwise
the run will fail.
Also, the parallel run will use the Scali (SMC) version of Eclipse
(eclipse_scampi.exe or e300_scampi.exe), so it will be necessary to
ensure that the SMC options have been set up appropriately, with the
correct interconnect (e.g. InfiniBand) etc.
LSF Machine Allocation: In this case, the machines on which the Eclipse models
are run are selected by the LSF cluster; the "LSF will be used to allocate suitable
nodes" option is selected.
Control level: Choose from Explicit (start-of-timestep consistency; the default)
and Newton (end-of-timestep consistency).
Aside from the above limitation, there are other reasons why
Newton level control may not be advisable:
The run will be much slower than the equivalent explicit run.
Petroleum Experts' experiments indicate that the deviation
between fully implicit and explicit results is minimal provided
a suitable control method is chosen for the wells. In addition,
the adaptive timestepping in RESOLVE can be used to
speed up the forecasts.
GAP is capable of genuine (i.e. physically realistic) non-linear
behaviour which may make convergence with a Newton
method very slow or impossible. This is particularly true when
GAP has many degenerate solutions.
Maximum iterations / RMS tolerance: These settings govern the performance of
the Newton level control. The default values will be suitable for most cases and
should normally not be modified.
Inflow Performance Type
The IPR generation is a very important consideration when
setting up coupled models. More details can be found in the
"IPR Generation Options" section
Control production wells / Control injection wells: This is a global setting for all
production / injection wells determining how the wells will be controlled. Individual
well controls can be set by pressing the Set individual well controls button.
Care should be taken when setting the control mode in an explicitly
controlled system. For example, very productive wells operating under a
small drawdown should not be controlled with a fixed bottom hole pressure
as the rate can then vary enormously over the length of the RESOLVE
timestep.
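The sensitivity described above can be illustrated with a straight-line IPR, q = PI * (p_res - p_bhp); the numbers and function name below are purely illustrative, not taken from RESOLVE:

```python
# Illustrative arithmetic only: rate of a well held at a fixed BHP with a
# straight-line IPR, q = PI * (p_res - p_bhp). For a high-PI well under a
# small drawdown, a tiny reservoir-pressure decline halves the rate.
def rate_at_fixed_bhp(pi, p_res, p_bhp):
    return pi * (p_res - p_bhp)

q_start = rate_at_fixed_bhp(pi=100.0, p_res=3000.0, p_bhp=2990.0)  # 1000 STB/d
q_end = rate_at_fixed_bhp(pi=100.0, p_res=2995.0, p_bhp=2990.0)    # 500 STB/d
# A 0.2% change in reservoir pressure halved the rate: this is why rate
# control is more stable than fixed-BHP control for such wells.
```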
The default setup, which is to control the wells by their main phase rate, is
the most reliable way to control the wells for a wide range of circumstances
Auto-switch between gas and liquid control: In some cases wells which are
predominantly producing oil may, at some stage in later life, start to produce
large amounts of gas (or vice versa). A fixed oil or liquid rate may not be
appropriate for the well over its entire lifetime. This option allows RESOLVE to
switch automatically between a fixed liquid, oil, or water rate and a fixed gas
rate (or the reverse) in such circumstances.
RESOLVE will use a GLR threshold to determine the stage at which a well
can be considered to have switched between phases. This threshold is
15,000 scf/STB by default, but it can be changed on the "Advanced
options" screen
Fluid temperature: RESOLVE always passes temperature data between
applications. When Eclipse is isothermal this data does not exist, so it must
be entered manually in this field. Note that the data is only required for
production wells.
IPRs that are passed from the reservoir model to the surface network model can be generated
from the reservoir model in different ways, as described below. Some of these techniques
provide realistic inflow performance definitions, whereas others provide unrealistic
definitions, often leading to instabilities in the full-field model results.
The choice of the IPR generation option used is crucial to the behaviour and accuracy of the
model, and IPR generation techniques are a subject of constant development and testing.
Even though all the different IPR generation options are detailed below, it is important to note
that the most advanced, and recommended, option is the Calculated PI (Based on Drainage) |
Scaling option.
Before detailing the different IPR generation options that are available in RESOLVE, the
following describes the important elements when generating IPRs for dynamic coupling:
The true rate-pressure response of the well over a timestep (i.e. what should be passed
to the GAP model) is given by the green line in the diagram below, which is simply the line
connecting three different solution points at (Q1,P1), (Q2,P2), and (Q3,P3). The red lines
represent typical block-calculated IPRs for each of these points.
One way of effectively obtaining the true (green) response is to iterate between Eclipse and
GAP. This is what the "Newton level" coupling is trying to achieve. However, this is not an
efficient coupling as it is computationally expensive and is normally limited to the coupling of a
single Eclipse model to a GAP model.
Alternatively, if a zero flow pressure P0 is calculated then it will be possible to generate an IPR
by connecting the current solution to the pressure P0. This zero flow pressure is the drainage
region pressure for the well in question.
This describes why IPR generation methods based on drainage region reservoir pressure are
preferred to the ones based on block reservoir pressure.
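That construction can be sketched as follows (all names are illustrative assumptions): given the zero-flow drainage pressure p0 and the current solution point (q1, pwf1), the straight-line IPR and its productivity index follow directly.

```python
def drainage_ipr(q1, pwf1, p0):
    """Straight-line IPR through (0, p0) and (q1, pwf1); returns (PI, pwf(q))."""
    pi = q1 / (p0 - pwf1)               # productivity index: rate per unit drawdown
    return pi, (lambda q: p0 - q / pi)  # flowing BHP as a function of rate

# Current solution 1000 STB/d at 2500 psi, drainage pressure 3000 psi:
pi, pwf = drainage_ipr(q1=1000.0, pwf1=2500.0, p0=3000.0)
# pi is 2.0 STB/d/psi; pwf(0) recovers p0 and pwf(1000) recovers pwf1.
```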
Calculated PI (block): In this case an IPR is calculated by performing well
calculations for a set of different bottom hole pressures for each well at the first
Newton iteration of the Eclipse solver. This builds an IPR that is a representation
of the well performance at one specific point in time. The "reservoir pressure"
used to calculate this IPR is derived from the pressure of the blocks in which the
well is perforated. It will include non-Darcy effects (especially important, therefore,
for gas wells) and any potential cross-flow. However, the entire reservoir is not
solved for each bottom hole pressure, so effects such as coning will not be
included in the resulting IPR description.
Build-Up (E100 only): In this model the wells are closed for a fixed (though
configurable) time and the pressure transient is observed. The drainage pressure
is then calculated.
The elements below can be considered if issues are experienced with the diffusivity
calculations. The first thing to check is the way the Eclipse deck is set up: it is
recommended to follow the "Best Practice" notes when setting up an Eclipse model to be
used with RESOLVE.
If a calculation fails because of a failure to generate an IPR, this is normally
because RESOLVE is not able to bring the well in question online.
If the model is running from a restart, it is important to ensure that the well is "on" at the
point where the restart is generated (which can be specified by using a WCONPROD
or WCONINJE keyword).
Another frequently observed reason for the failure of the scaling calculations is the use
of the Well Economic Limit (WECON) keyword in the Eclipse deck. During the scaling
calculation, it is possible that this minimum rate limit is triggered and the wells are
shut off, preventing RESOLVE from controlling them. It is recommended that the
WECON keyword be completely removed from the data deck: in the presence of
a network model it is not required, as the fluid production rate is a function of the
wellhead pressures. If the wellhead pressures are high enough, the wells will stop
producing.
If Eclipse is sitting at a time before history data is applied, RESOLVE will not be able
to override history production or injection settings (WCONHIST, etc.) when it performs
the well calculations.
If Eclipse prevents RESOLVE from controlling the wells, then superfluous control
keywords should be removed.
One way to ensure that the scaling will be run is to remove ALL the scheduling
keywords from the Eclipse deck and simply place a WCONPROD keyword for each
well with the status set to open at a certain rate. (The rate itself is insignificant as it
will be overwritten by RESOLVE.)
If RESOLVE tries to generate IPR data for connected wells which are under
group control, an error will be reported and the run will terminate
This screen allows the user to respect or clear any or all of the group controls in the Eclipse
run, with the caveat that if RESOLVE wells are allowed to be under Eclipse group control then
this will cause problems in the run.
The screen displays all the groups that are present in the Eclipse model. Each group can be
checked or unchecked to tell RESOLVE whether any controls on that group are to be
respected. If a group is left unchecked then it will effectively be deleted from the Eclipse
run.
The action buttons can be used to check or uncheck all the groups. Note that the top level
"FIELD" group is always unchecked as it makes no sense to allow field production constraints
in Eclipse when any part of the model is controlled by RESOLVE.
This tab sets up some miscellaneous data that may be used in the Eclipse run, and uses the
following screen:
Well Scheduling
The well schedule can be controlled by Eclipse itself or by the connected application (i.e.
typically GAP). This refers to the way that wells can be scheduled to be brought online or shut
in.
We strongly recommend that this option is set to "GAP" and that all scheduling is
removed from the Eclipse model, as discussed in the "Best practice" document.
Eclipse: In this case, the well management scheduling will be carried out entirely by
the Eclipse schedule deck. Wells can be brought online with WCONPROD
or WCONINJE keywords or shut in with a WELOPEN keyword, for
example. Alternatively, economic limits can be set with WECON. In
essence, the well state (on or off) will be determined entirely by Eclipse.
If GAP shuts a well (for optimisation purposes or because it is unable to
flow) the rate for that well will be set to zero for the timestep, but the well will
not be closed unless this violates some limit in the Eclipse input data. The
well efficiencies set by Eclipse with the WEFAC keyword will be honoured.
Note that in this mode RESOLVE will put in extra timesteps to synchronise
with the Eclipse events. In fact, RESOLVE will synchronise with all Eclipse
report times: in other words, if TSTEP is used in the Eclipse schedule
RESOLVE will put in extra timesteps to synchronise with the times in the
TSTEP keyword
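The synchronisation behaviour described above amounts to merging the Eclipse report times into RESOLVE's own timestep list; a sketch of the idea (this is an illustration, not RESOLVE's implementation):

```python
def merged_timesteps(resolve_times, eclipse_report_times):
    """RESOLVE steps at its own times plus every Eclipse DATE/TSTEP report time."""
    return sorted(set(resolve_times) | set(eclipse_report_times))

# RESOLVE's 30-day steps merged with Eclipse TSTEP reports at days 45 and 75:
print(merged_timesteps([0, 30, 60, 90], [45, 75]))  # [0, 30, 45, 60, 75, 90]
```

Each extra report time becomes an additional synchronisation point, which is the small overhead the "Ignore Eclipse events" option (described later) can remove.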
Connected application (GAP): In this case, the well state is determined by GAP
alone and the Eclipse schedule is effectively overwritten. If GAP closes a well,
that well is closed for the timestep but is potentially allowed to flow at the next
timestep if this is possible (unless abandonment has been specified as indicated
below). Well efficiencies will be derived from the well downtime set in the GAP
(or connected) model.
Write a well scheduling debug file: If this button is selected, a well management
debug file will be written which gives the status of all the wells (including those
not controlled by RESOLVE) at the start of each timestep. It will also highlight
when a well has changed state (i.e. been closed having been open, etc.), and
will indicate if a well has been prevented from flowing by the connected
application (i.e. GAP).
The location of the debug file can also be specified. By default, RESOLVE
will attempt to write the file at the same location as the Eclipse input file. If
the model is running remotely then this might not be possible so a new
output file should be selected. This should point to a local directory and
be a fully qualified filename (e.g. c:\eclipse-data\dbg.txt)
Additional output variables (plot summary vectors in RESOLVE): Variable
keywords that appear in the SUMMARY section of the Eclipse model can be
added to the results that appear in the RESOLVE reporting. The box should be
checked to activate this feature.
Note that the RPTONLY keyword in Eclipse will suppress the passing of
this data to RESOLVE and so should not be used if this feature is
required.
Simulation debugging: From the drop-down list, select the level of reporting that
is needed on the RESOLVE calculation windows.
It is possible to choose which RESOLVE windows to send the messages to with
the check boxes below this.
This screen contains some options that have been introduced to cope with exceptional
circumstances in the model.
Normally they should not be changed.
If the entry is left blank, Eclipse will be started at its own internal start date
(i.e. with reference to its input file). This can be the start date of the history
or the date of a restart file.
If the RESOLVE start date is later than the Eclipse start date, Eclipse will
run its history (and/or forecast) up to the start date of RESOLVE. For this
reason there is no need to enter anything here if Eclipse has to run to the
RESOLVE start date, as this will be done automatically for the user.
If the date entered here is earlier than the date at which Eclipse starts (i.e.
the start of the history or restart) then it has no effect
Ignore Eclipse events: This should be used with care.
By default, RESOLVE will synchronise at every Eclipse DATE or TSTEP
entry (in addition to its own synchronisation times) to ensure that any
control keywords at these times are overwritten by the control data from the
network. There is a small overhead associated with this, as the Eclipse
timestep is cut after a synchronisation. However, if the keywords are only
for reporting or for generating restart steps, this flag can be set to force
RESOLVE to ignore these events.
GLR Threshold: This is the threshold above which RESOLVE considers a
producer to be a gas producer and hence a candidate for a fixed gas rate control
through the auto-switch option. If a well is controlled with a fixed liquid or oil rate
and the GLR exceeds this value, the well will automatically change to fixed gas
rate control if the auto-switch option is on (otherwise, a single warning message
will be generated).
Similarly, a well controlled with a fixed gas rate can revert automatically to a
fixed liquid rate control when the GLR falls below this threshold value.
IPR Generation: These are options introduced in IPM #5 to help with a particular
set of problems which concerned OpenEclipse.
IPR point grouping: It is very unlikely that this option will have to be changed. It
may be useful for cases where there is a BHP discrepancy between GAP and
Eclipse that can be traced back to interpolation errors in the IPR table (which can
be most apparent in cases with large non-Darcy factors).
IPR rescaling (IPR 'scaling' option): This option allows execution of scaling
calculations at intermediate timesteps during the forecast run.
Rescaling tolerance: The fraction of the PI change between two subsequent
timesteps that will trigger a rescaling calculation.
Maximum IPR rate multiplier: Defines the maximum rate multiplier that is used to
generate the IPR.
The IPR is extracted from the simulator as a table of BHP vs phase rates;
10 points are evenly distributed between the maximum value and zero. If the
well has low productivity, the default maximum may result in too coarse a
distribution of rates, and as such no IPR will be returned. Reducing the
'Maximum IPR rate multiplier' will ensure a finer rate distribution and allow
the IPR to be obtained from the simulator.
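The two settings above can be sketched as follows; the function names and structure are illustrative assumptions, not the actual implementation:

```python
def ipr_rate_points(current_rate, max_multiplier, n_points=10):
    """Evenly spaced rates from zero up to current_rate * max_multiplier."""
    q_max = current_rate * max_multiplier
    return [i * q_max / (n_points - 1) for i in range(n_points)]

def rescale_needed(pi_previous, pi_current, tolerance):
    """True when the fractional PI change between timesteps exceeds tolerance."""
    return abs(pi_current - pi_previous) / pi_previous > tolerance

# A lower multiplier gives a finer rate grid for a low-productivity well:
print(ipr_rate_points(100.0, 9.0)[:3])  # [0.0, 100.0, 200.0]
print(rescale_needed(2.0, 2.5, 0.2))    # True (25% change > 20% tolerance)
```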
These variables can then be used for "Event driven scheduling", or for building up different
"Scenarios", for example.
Select the Eclipse section to access the Eclipse variables and click on Edit Variables.
The following screen will be displayed, in which three sections are available: Groups, Wells
and Completions.
Groups Section: This section enables variables from the group control sections
of the Eclipse model to be published.
Wells Section: This section enables variables from the wells sections of the
Eclipse model to be published.
To select the variables to publish in these two sections, select in the left-hand
screen the group and the variable of that group that is to be published, and
click on the red arrow button. The variable to be published will be displayed
in the right-hand side of the window.
Completion Section: Eclipse allows variables associated with each of the well
completions to be published in RESOLVE.
In order to do so, the following procedure can be used:
In the Variable Name section, enter the name of the variable to import;
this name is user-defined.
In the Well section, a drop-down box allows selection of the well with which
the variable to be published is associated.
In the Completion section, a drop-down box allows selection of the
completion with which the variable to be published is associated.
In the Variable section, a drop-down box allows selection of the variable
to be reported.
The variable unit will then be reported automatically.
In the example below, the oil rate of completion 1 of the LU1 well defined in
Eclipse will be published in RESOLVE.
The additional functions menu of the Eclipse link can be invoked by right-clicking on the Eclipse
icon in the RESOLVE graphical view.
The commands at the bottom (outlined) are those that are specific to the Eclipse link.
Descriptions of the other functions can be found in the "Other functions" section of the GAP
driver description.
Well control table: This provides access to a table that summarises the well
control options used for the different wells of the model.
View data file: Provided the data file is local to the machine on which RESOLVE
is running, this allows the main Eclipse .data file to be viewed.
View log file: Provided the data file is local to the machine on which RESOLVE
is running, this allows the .log file generated by Eclipse during the forecast to be
viewed.
View prt file: Provided the data file is local to the machine on which RESOLVE
is running, this allows the .prt file generated by Eclipse during the forecast to be
viewed.
View well status file: If the "Write a well management debug file" option has
been selected (on the "Miscellaneous" tab of the data entry screen), this allows
that file to be viewed.
It can be viewed during the forecast, although it will not be updated
dynamically; to get updates, the screen must be closed and re-opened.
Once the data deck is accessed from the tokens editor, tokens can be created by highlighting
the value of the parameter and then clicking "Create new token". The value will be replaced by
the token name (e.g. ~MyToken~).
Tokens can be used in the VisualWorkflow via Intellisense OpenServer strings. In the example
below, the value of ~MyToken~ is defined to be 0.5 in the Assignment element (“MyToken”).
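As an illustration of the mechanism only (not RESOLVE's actual implementation), substituting a token amounts to replacing every occurrence of the ~TokenName~ placeholder in the data deck with its current value. The helper below is a hypothetical sketch of that idea:

```shell
# Hypothetical sketch: replace a RESOLVE-style ~Token~ placeholder in a
# data deck (read from stdin) with its current value.
substitute_token() {
  # usage: substitute_token <token_name> <value>  < deck  > resolved_deck
  sed "s/~$1~/$2/g"
}
```

For example, `substitute_token MyToken 0.5` would turn a deck line containing `~MyToken~` into one containing `0.5`, mirroring the Assignment-element behaviour described above.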
This section describes how to connect two computers and run Eclipse included in a RESOLVE
model on a remote machine (i.e. PC or Unix).
2.5.5.7.1 Remote Windows Run
On the remote PC, the same setup and configuration of MPI should be carried out. The major
difference between running Eclipse locally and running Eclipse on the remote machine is that
RESOLVE does not know where Eclipse is installed on the remote machine. This means that
the PATH environment variable must be modified on the remote computer to include the path to
the eclipse_mpi.exe, for example:
PATH=%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;C:\ecl
\home;C:\ecl\macros;C:\ProgramFiles\MPIPro\bin;C:\Program Files\Common Files
\AspenTech Shared\;c:\program files\mpich2\bin;C:\Program Files\SPT Group\FlexLm;C:\ecl
\2018.2\bin\pc
The remote connection can be tested under the "EclConfig" wizard; any problems can be
picked up from the MPI log file. Problems that can occur can often be traced back to Windows
security issues such as a running firewall or difficulties with MPI logging on to the remote server.
These issues can usually be addressed by I.T. staff, and Petroleum Experts can also be
contacted for further assistance.
The environment needs to be set up on the remote PC to allow Eclipse to obtain a license
when it loads a data file, as it would need to be on the local PC when running Eclipse locally. In
other words the variable LM_LICENSE_FILE needs to be set appropriately, e.g.
LM_LICENSE_FILE=1600@MYSERVER
Basic Principles
To allow the connection, a process runs on the head node of the cluster. This process may
either run as a simple console application (which is useful while testing), or as a full Windows
service. The application/process in question is called WinRunFactory.exe, and it sits on a port
waiting for requests to start simulation jobs. This port must be opened through any firewall.
When a user requests a simulation, Resolve broadcasts the details of the required run through
the port to the service.
Once the service has the required information, it starts the simulator by executing a batch
script.
An example script for Eclipse, ResolveEclipsePxClusterStarter.bat, is provided in the IPM 12
installation directory as a starting point to illustrate the principles of how it should work. When
used ‘as is’, this script will use PxSub.exe to submit the job to the cluster, which then distributes
the run across the cluster nodes.
WinRunFactory.exe accepts the following command-line arguments:
1. –p <port number>. Specify the listening port number for this executable to which Resolve will
broadcast the run information.
2. –o. Start port (o, above) for communication with the eventual simulation run. It defaults to
9001. The port used is incremented from this value for every run which takes place.
3. –n. Number of free ports to use from the base port (-o). After this number of runs it cycles
back to the base port. It defaults to 100.
4. –s <script>. This should be a fully qualified path to the script described above. If nothing is
supplied, it defaults to ResolveEclipsePxClusterStarter.bat.
5. –l. Path to the log file which logs requests made to the run factory.
6. –i. Install as a service with the current set of command-line arguments.
7. –u. Uninstall the service.
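Putting the arguments together, an illustrative invocation is shown below along with a sketch of the port-cycling behaviour described in (2) and (3). The paths, port numbers and log location are examples only, and the helper function is an illustration of the scheme, not the factory's actual code:

```shell
# Illustrative invocation only -- paths and ports are examples:
#   WinRunFactory.exe -p 8900 -o 9001 -n 100 ^
#       -s C:\scripts\ResolveEclipsePxClusterStarter.bat -l C:\logs\factory.log

# Sketch of the comms-port allocation: the port is incremented from the
# base port (-o) for each run, cycling back to the base after -n runs.
comms_port() {
  # usage: comms_port <run_index> [base_port] [n_free]
  echo $(( ${2:-9001} + $1 % ${3:-100} ))
}
```

With the defaults, run 0 uses port 9001, run 1 uses 9002, and run 100 cycles back to 9001.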
We then need to go to Resolve, open an Eclipse driver and insert the model path and cluster
details. We will use the RADIAL.DATA example provided in the Eclipse 2018.2 example folder.
Please ensure that both the PxCluster and the WinRunFactory command prompts are running
as Administrator.
Clicking Start will make Resolve communicate with the WinRunFactory, which will execute the
batch script “ResolveEclipsePxClusterStarter.bat”. This script then launches the Eclipse job
using PxSub.exe and reports the name of the running node to the run factory. The run factory
then reports the communication port of the running node to Resolve so that direct
communication can occur.
2.5.5.7.3 Remote Linux Run
The following section describes the characteristics and setup procedure for running remote
ECLIPSE runs on Linux platforms from RESOLVE. This is possible through three main
mechanisms: a direct run launched on a target Linux machine, or submission of the run to a
cluster via LSF or PBS (see the scripts described in the Overview below).
The minimum Linux version required is RHEL6 or equivalent – separate tar files are supplied for
both RHEL6 and RHEL7 and can be found in the IPM 11 installation directory in a folder called /
linux_executables/.
Before installing the IPM components on the Linux machine/cluster, the appropriate
Intel MPI installation should have been made and tested. Refer to the documentation
on these applications.
In addition, if LSF or PBS is to be used to distribute the Eclipse jobs, then this should
be installed and tested before the IPM components are installed.
2.5.5.7.3.1 Overview
To allow the connection, a single service, or daemon, must be running on a target Linux
computer. This service, eclrunfactory.exe, sits on a port (p) which must be opened through any
firewall. When a user requests a simulation, RESOLVE broadcasts the details of the required
run through the port to the service.
Once the service has the required information, it starts the simulator by executing a bash shell
script.
The script may be written by the user. However, three scripts are provided which should cover
most users’ requirements. The scripts are as follows:
1. eclmpi_dir.sh. This starts the simulations directly, i.e. on a given computer as specified in
the RESOLVE interface, through a call to mpirun.
2. eclmpi_lsf.sh. This performs an LSF bsub job submission to launch another script
(eclmpi_lsf_runner.sh), which in turn starts the simulations through the same mpirun
command as (1). In other words, this script is responsible for LSF distribution of the
simulation jobs.
3. eclmpi_pbs.sh. As for (2), except using PBS as the distribution mechanism rather than LSF.
The scripts can be edited by the user (e.g. if a different distribution mechanism were required),
but care would be needed.
The scripts are responsible for starting as many instances of the simulation executable as are
required by the model parallelisation, and then returning to RESOLVE the host on which the
models are running.
Communication to the simulation models from RESOLVE is then made via a thread spawned
from the eclcontroller.exe process. This communication is carried out through a comms port (o)
that will be referred to below.
2.5.5.7.3.2 Installation
The run factory should normally be run under the root account (although the exception to this
should be noted, below). Examples follow with a listing of possible arguments:
./eclrunfactory.exe –p 8900
1. –p <port number>. Specify the port number (p, above) to which RESOLVE will broadcast the
run information.
2. –u. Run under the current user account, rather than root. This will not take account of the
contents of the .ecl_resolve_users file, described below.
3. –o. Start port (o, above) for communication with the eventual simulation run. It defaults to
9001. The port used is incremented from this value for every run which takes place.
4. –n. Number of free ports to use from the base port (-o). After this number of runs it cycles
back to the base port. It defaults to 100.
5. –s <script>. This should be a fully qualified path to the bash script described above. If nothing
is supplied, it defaults to eclmpi_dir.sh.
6. –t <timeout>. The time (in seconds) that RESOLVE will wait after the processes have been
initialised for a connection to be made to the simulator. The default is 60 seconds, which
should be ample.
7. –l. Generate a communication log file in the simulation model data directory.
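For illustration, a fuller invocation combining the options above might look like the following. The script path, timeout and port values are examples only:

```shell
# Illustrative only: listen on port 8900, use the LSF submission script,
# allow 120 s for the simulator connection, and log requests.
./eclrunfactory.exe -p 8900 -o 9001 -n 100 \
    -s /home/developer/lx-ipm12/eclmpi_lsf.sh -t 120 -l
```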
Configuration Files
1. ecl_resolve_env. This allows environment variables to be set up which are then passed on to
the child simulation processes as they are started. It is, for example, a convenient location for
the LM_LICENSE_FILE definition. The file should consist of a number of lines representing
variable-value pairs, e.g.
LM_LICENSE_FILE 27000@PETEXLICENSE02
2. ecl_resolve_users. This file is not read if the –u option is selected above. It allows the
mapping from Windows user IDs to Linux user IDs, e.g.
dave developer 1
* * 0
Wildcards (‘*’) are allowed. The last number is 1 to allow a connection, 0 otherwise. The
example above maps the user ‘dave’ on Windows to ‘developer’ on Linux, and prevents any
other users from connecting.
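The lookup can be sketched as follows. This is a simple illustration of the mapping described above, not the run factory's actual code, and it assumes that the first matching line wins:

```shell
# Sketch of the ecl_resolve_users lookup. Each line of the file is
#   <windows_user> <linux_user> <allow>
# with '*' wildcards permitted. Prints the Linux user if the connection
# is allowed, and nothing otherwise.
map_user() {
  # usage: map_user <windows_user> <users_file>
  while read -r win linux allow; do
    case "$1" in
      $win)
        if [ "$allow" = "1" ]; then echo "$linux"; fi
        return 0 ;;
    esac
  done < "$2"
  return 0
}
```

With the example file above, `map_user dave` prints `developer`, while any other Windows user falls through to the `* * 0` line and is refused.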
Debug Logging
The run factory processes output their logs to /var/log (if they have write access) or /tmp (if they
do not).
Some variables can be used to force the processes to output logging information. For
convenience, they can be set up in the .ecl_resolve_env file described above.
ECL_MPI_RUN. The path to mpirun – MUST BE SPECIFIED either in the _env file or directly in
the scripts.
ECL_EXEC. The path to eclipse.exe – MUST BE SPECIFIED either in the _env file or directly
in the scripts.
The path to the boost shared libraries (included in the installation files) should be included in the
LD_LIBRARY_PATH environment variable.
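For example, the required entries might look like the following. All install paths are illustrative only; the variable names are those listed above:

```shell
# Illustrative .ecl_resolve_env entries (space-separated variable/value pairs):
#   ECL_MPI_RUN /opt/intel/impi/bin64/mpirun
#   ECL_EXEC    /opt/ecl/2018.2/bin/eclipse.exe

# And in the run factory's environment, point LD_LIBRARY_PATH at the boost
# shared libraries shipped with the installation (path illustrative):
export LD_LIBRARY_PATH=/home/developer/lx-ipm12/lib:$LD_LIBRARY_PATH
```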
When setting up Eclipse to run on Linux in IPM12.5, the port number and the head node can be
automatically populated in the simulator instance by selecting the "This is a cluster head node
run" option, as shown below.
This has already been configured for the Eclipse 100, Intersect, GEM, IMEX and TNavigator
simulators in the SimulatorConnections.xml file. In order to use this functionality, this file needs
to be placed at %AppData%\Petroleum Experts\IPM12\SimulatorConnections.xml.
If changes are required in the SimulatorConnections.xml file (e.g. a new simulator is added or
the port number/server name is changed), this can be done by adding a new line to the file or
editing the existing lines.
This message indicates that on the timestep where the error message occurs, no IPR is passed
to GAP. This can be verified by taking a look at the debug logging from Run | Debug File |
View.
It is always best practice to remove all targets, limits, constraints and controls from the data
deck (e.g. WCUTBACK and WECON keywords), as these can interfere with the control of the
simulator from GAP.
The ‘Error retrieving the completion status of the wells: well array size not consistent’ error is
caused by an inconsistency in the number of wells defined in the three keywords that are
required to define wells within Eclipse.
Further to the above, errors may be reported in the *.PRT file and may indicate wells which do
not have any active connections with the grid.
KPJSTR error
The ‘KPJSTR’ error means that the ECLIPSE job was started successfully (and RESOLVE
started to read the data file) but ECLIPSE crashed for some reason. This could be due to
hardware configuration or could be related to the data file itself, and it would be worth checking
the following:
i) Remove the PARALLEL keyword in the ECLIPSE *.DATA files and try running the model.
Section 2.5.4.5.3 in the IPM RESOLVE User Manual details how to set up Eclipse
models to run in parallel.
ii) Check the restart file version against the Eclipse version being used. It may happen that the
restart file was created with a certain version of Eclipse; if some other version of
Eclipse is used to perform the run, then the run may fail.
iii) The ECLIPSE model should also have generated a *.PRT file which will often give
details of the reason for the run being terminated if the cause is due to the model.
iv) Another reason for this is that the user running the Eclipse job does not have
permissions over the output files of Eclipse (e.g. *.PRT, *.LOG, etc.). Try deleting all the
output files and then try again. This situation may arise if another user has previously run
Eclipse, and the new user does not have permissions over the files generated. This can
be resolved at the system level, either by setting up user groups or umasks.
v) Copy and paste Example 2.2.1 (which can be found in the sample folder) into the
same location as the current model and then connect to it via RESOLVE. If this model
runs without a problem then the issue has been isolated to the specific file in
use. However, if the sample model cannot be run, this implies that the hardware
configuration should be reviewed.
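For point (iv), the clean-up might look like the following. The paths and group name are examples only, and the appropriate fix depends on the site's permission scheme:

```shell
# Remove stale Eclipse output files that the current user cannot overwrite
# (model directory path is illustrative):
rm -f /data/eclipse/model/*.PRT /data/eclipse/model/*.LOG

# Or fix at the system level: share the model directory via a group and
# ensure files are group-writable for subsequent runs.
chgrp -R eclusers /data/eclipse/model
chmod -R g+rw /data/eclipse/model
umask 0002
```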
This means that the local machine cannot 'see' the machine and/or port on which the run
factory is running.
In this case, it is because the port '8889' is not the port which was entered on the command line
of the Linux-side daemon. Alternatively, it may mean that the machine name is wrong, or cannot
be recognised in the DNS register.
With this error, it is unlikely that any Linux-side log files will be written.
The use of GPP for the inflow model of condensate wells in the Eclipse simulator has been
known to cause an issue when an operating point lies beyond the IPR. This appears to be a
consequence of the simulator working with RESOLVE. To fix it, access the ‘advanced’ settings
in the Eclipse module GUI in Resolve and, from the resulting screen, select ‘Perform Newton
solves before generating IPR’. Leave all the other options at their default values.
2.5.6 Connecting to IMEX/GEM
2.5.6.1 IMEX/GEM Overview
This help section documents the links to two reservoir simulators: IMEX and GEM.
These applications are developed by Computer Modelling Group (CMG).
The links are essentially the same in operation (aside from the compositional differences) and
have the following properties:
A remote machine can be of any architecture supported by the reservoir simulator, e.g.
Linux or Unix.
Execution of the simulators on the nodes of a cluster is supported. This uses either
LSF (developed by Platform Inc.) or PxCluster (developed by Petroleum Experts).
The original LSF connection assumed that the RESOLVE computer (running Windows)
and the target computer (potentially running Linux) were part of a single LSF cluster. A
third method, termed 'Head Node Daemon', is now supported which allows submission
of the IMEX/GEM job to a Linux cluster in which the task distribution is handled by the
head node of the cluster.
Integrated models that include IMEX or GEM can be checked into the
ModelCatalogue, so that integration with IFM is assured.
Full OpenServer access is available to all the input parameters of the IMEX/GEM
case (e.g. dataset, IPR model, remote execution, etc). This allows full automation of the
setting up of an integrated model from a third-party application or web-service.
Initial versions of the driver were developed by CMG to work with the 2005.10 (and later)
versions of the simulators. The first version of the driver developed by Petroleum Experts, and
built upon the CMG code, has been designed to work with the 2007.10 version of the
simulators.
2.5.6.2 IMEX/GEM driver configuration
The driver configuration screen can be invoked from the Drivers | Register drivers RESOLVE
menu item, by double-clicking on the IMEX or GEM entry or selecting the entry and clicking
"Configure".
The default executable path for this machine (or this user on this machine depending on user
permissions) can be set here. This means that the "executable" entry on the main data entry
screen can be left blank.
As the executable path on the data entry screen is saved with the RESOLVE file, a file which is
transferred between different machines can fail to run because the executable path is not
correct for the new machine. If the executable fields are all left blank and default executables
are set with this system, then this problem is overcome.
2.5.6.3 IMEX/GEM case setup guidelines
The following sections concern the setup of IMEX and GEM cases to work with GAP and
RESOLVE. There are certain recommendations that ideally should be followed, although the
coupling should work with minimal or no changes to the data deck. In the following descriptions,
IMEX and GEM are interchangeable unless mentioned explicitly.
Preparation of     There are no specific keywords required for IMEX to couple with GAP.
the IMEX           The only requirement is that the reservoir temperature must be passed
Data Set           in the link. This is ensured by putting the reservoir temperature
                   (*TRES) into the IMEX Component Properties section.
Some restrictions and differences in how well recurrent data is interpreted are imposed. The
following well data restrictions are applicable for wells that are linked between GAP and
IMEX / GEM, and are not applicable for wells that are not linked.
Recurrent Data     In recurrent data, for linked wells, any operation that changes the
                   value of a well rate constraint is ignored. Rates are exclusively
                   determined by GAP.
ON-TIME Fraction   For linked wells, at each RESOLVE date, the GAP down-time percentage
                   is passed to IMEX as an on-time fraction. IMEX uses the received
                   on-time fraction as if the same value was input through the IMEX
                   recurrent data. Essentially the definition / handling of on-time
                   fraction for coupled wells is moved to the GAP network.
                   For unlinked wells, the on-time fraction should be given in the IMEX
                   recurrent data as usual.
                   IMEX passes back the on-time fraction for all wells to RESOLVE, so
                   that it is accessible via a RESOLVE script and readable in the
                   RESOLVE reporting window.
Opening /          IMEX passes the open / shut-in status for linked wells to RESOLVE at
Closing Wells      each RESOLVE date. After receiving this well status information from
                   IMEX, GAP honours those wells IMEX shuts in and may add other shut-in
                   wells based on its schedule or optimisation procedure.
                   RESOLVE then runs a new timestep and passes the updated well status
                   back to IMEX. IMEX keeps the received well status for the IMEX
                   timesteps required to reach the new RESOLVE date.
                   IMEX can attempt to open a well (i.e. using a monitor or well control
                   in a trigger), but if the GAP schedule or the GAP network optimisation
                   overrides this, the well remains shut-in until the next RESOLVE date,
                   when it will be checked again by GAP; if the network schedule or
                   optimisation allows the well to open, it will open. This check will
                   occur at every RESOLVE date.
                   All this means that either IMEX or GAP can shut in a well, but both
                   IMEX and GAP must agree that a well should be open before it actually
                   opens.
Monitors and       Generally, scheduling of events that affect the wells should be placed
Triggers           in GAP or, preferably, RESOLVE (via the "Event driven scheduling"). In
                   the current version of this link there is no access from RESOLVE to
                   well layer information, so events which affect layer productivity
                   (e.g. workovers) must be scheduled in IMEX in the usual way.
                   IMEX well monitors, group monitors and triggers are allowed to operate
                   on wells or well layers only. Details are described below. The allowed
                   operations are restricted to those which open or shut in wells or well
                   layers, or that modify the productivity / injectivity indices of the
                   wells or well layers.
*DATE 2000 1 1
*TRIGGER "trg_well2" *ON_ELAPSED "TIME"
treltd > 0.0
*GEOMETRY *K 0.07 0.37 1.0
0.0
*PERFV *GEO 2
** KF FF
2:3 1.0
*SETPI *MULTO 2
5.0
*END_TRIGGER
The use of this “elapsed time” trigger ensures that the keywords
within the trigger will be read at the first possible RESOLVE date
following “2000 1 1”.
The use of the *ON_ELAPSED "TIME" treltd > 0.0 trigger would
normally force the trigger to fire immediately, but as this is a
linked model, the trigger is checked at the next RESOLVE date.
The trigger above will be fired at the RESOLVE date at which the
bottom-hole pressure of PRODUCER-2 is lower than 1000 psia.
Well Constraint    For coupled producers, a non-zero surface rate constraint should be
Definitions /      given just to define the target stream phase for IMEX to use with the
Use /              coupled well.
Restrictions       For example,
                   *OPERATE *MAX *STO 500
indicates the target stream of the producer is oil: the rate itself is ignored.
The target stream phase is used to determine the IMEX constraint from the
three phase rates received from GAP. Within the recurrent data, the keyword
*TARGET with a non-zero rate can be used to shift the target stream phase
from one phase to another. *TARGET for linked wells should also be placed
within a trigger.
Similarly, non-zero STG and STW can be used to indicate a gas and a water
producer (if applicable) respectively. If more than one of the STO, STG and
STW constraints are defined, the first will be used.
For example,
*OPERATE *MIN *BHP 1000
*SHUTIN
for a producer may act at any IMEX timestep and cause a rate mismatch
between IMEX and GAP. Such conditional operations may be performed
instead by the monitor keyword or within a trigger.
The following screen is used to enter the input data required to run IMEX or GEM on a given
data set.
Use specified      Select "Use specified computer" to execute the simulation on a given
computer /         computer, which may be the local computer. Select "Use cluster" to
Use cluster        execute the simulation on a node of a cluster (see below).
Computer           If the option is set to "Use specified computer", this allows the
                   selection of the computer on which the simulation should run. The
                   drop-down list contains a list of recently selected computers as well
                   as an entry "More", which invokes a computer selector, as described
                   below.
IMEX/GEM           The executable used for the run can be entered in this field.
executable         For local runs, if it is left blank, then the default entered in the
                   "Configuration screen" for the computer considered will be used.
                   For remote runs, the remote executable will have to form part of the
                   SSH command which is entered in this screen.
Dataset This is the IMEX or GEM dataset for the run.
This may be specified with a full UNC definition: indeed, a cluster run in which
the target node is not known would have to have the dataset defined in this
way so that the same path is "seen" from all nodes. A remote run should
have a dataset defined for the remote (target) computer; again, it may be
convenient to use a UNC path which can be browsed to from the button to the
right
SSH command        These are only active when a remote computer is selected.
(update) /
Local path to      The SSH command button is used to display/update the ssh command,
work folder        based upon the information supplied in the boxes above. The displayed
                   SSH command is also editable, for example if the user has a different
                   login name on the client and server computers. (In this case it would
                   be necessary to alter ssh to ssh –l username_on_server.)
                   "Local path to work folder" is the local path that points to the
                   folder/directory where the IMEX dataset exists. Clearly, the path
                   should be visible from both local and remote machines, and should have
                   a UNC path definition. The exception to this would be when the work
                   folder is on a local disk and the target CPU is a Windows machine. In
                   this case, a mapped drive for the entry (e.g. “D:\CMG\Dataset\”) can
                   be specified. However, a UNC path is recommended as it should always
                   work, even if the mapping has not been set up.
IPR model          Choose from "Block" or "Corrected".
                   If "Corrected" is chosen, a preliminary calculation must be performed
                   before the run. See the "IPR models" section for more information.
Advanced           These options allow greater control over the simulation job and its
simulator          output, such as creating a simulation log file or running the
options            simulations in multi-processor mode.
IPRs that are passed from the reservoir model to the surface network model can be generated
from the reservoir model in different ways: the IMEX / GEM links to GAP use two models for the
generation of the IPR which is passed on to GAP, as described below. Some of these
techniques provide realistic inflow performance definitions, whereas others provide
unrealistic inflow performance definitions, often leading to instabilities in the full-field model
results.
The choice of the IPR generation option used is crucial to the behaviour and accuracy of the
model, and IPR generation techniques are a subject of constant development and testing.
Although further details on IPR generation are provided in the "IPR Generation" section, it is
important to note that the most advanced, and recommended, option is the Corrected option.
The IPR generation options available when connecting IMEX / GEM to GAP are the following:
Block This is the original model in which the IPR is the inflow referred to the well
block pressure.
The block IPR is that generated by the simulator based on the block pressure
of the well and the mobility of the phases. It is the same IPR as would be
"seen" by the simulator at one iteration of its solver; a reservoir simulator
would then iterate on the inflow when it solves its timestep. Use of the block
IPR, therefore, leads to a fully explicit formulation which can be unstable
Corrected This is an improved model which is proprietary to Petroleum Experts.
The corrected IPR attempts to reduce the explicitness of the system by
calculating an IPR which is more representative of the performance of the
well over the timestep. To use this model, preliminary well test calculations
need to be performed.
When this option is selected on the main data entry screen, a button appears
which indicates whether the pre-calculation has been performed.
The "Calculate" button will perform the calculation for all wells. To choose a
subset of the wells (for example, if many of the wells are not connected) then
the "Wells" button invokes a screen in which wells can be selected:
The wells to be included in the calculation should be selected in the left hand
pane, and the arrow button will add the selected wells to the calculation set.
The "Tuning" button displays some parameters which can be used to tune
the calculations. These parameters should only be changed on the advice of
Petroleum Experts.
Normally, the system will be equilibrated by setting all well rates to zero for a
fixed period. This can clearly take some time. If the system is already
equilibrated at startup, then this step can be skipped by using the check
button at the top of the screen
The following information describes how the remote execution between the RESOLVE computer
and a remote computer (Windows or Unix/Linux) can be enabled.
This section specifically refers to the case where RESOLVE is connecting directly to the
remote computer. It is also possible to send the job to a Linux cluster, where it may be
distributed using LSF or similar load balancing software from the head node of that cluster.
The IMEX-GAP link DLL (IMEX-2007-xx and later) is able to submit simulation jobs to network
computers via Secure SHELL (SSH). Here the network refers to the intranet/LAN.
The targeted remote computer should be an SSH server (running an SSH service) and the
local computer should have an SSH client program installed. The SSH client issues an
ssh command to execute the simulation on the SSH server.
Both local and remote computers should be able to create/read/write files in the folder/
directory where the simulation dataset exists. IMEX and the link DLL exchange data in
that “work folder”.
SSH Server         Linux and IBM AIX computers are equipped with OpenSSH, which provides
                   the SSH Daemon (SSHD) as the SSH service.
                   Windows computers need third-party software installed to run an SSH
                   service. For all our tests, WinSSHD (http://www.bitvise.com/) was used
                   to provide the SSH service for Windows. Tested Windows platforms were
                   Windows Server 2000, Server 2003 and XP 64-bit.
SSH Client         The SSH client must be on Windows, because the local machine needs to
                   run RESOLVE, a Windows-based program. Quite a few software vendors
                   provide SSH clients on Windows at a low cost. We developed/tested
                   using CopSSH (http://www.itefix.no/phpws/index.php), which is an
                   OpenSSH-derived Windows version. The client program has been tested on
                   Windows 2000, XP, XP 64-bit and Vista.
                   After the installation of CopSSH, the user may need to edit the "path"
                   Windows environment variable to add the SSH bin directory, for
                   example: C:\Program Files\copSSH\bin.
                   The installation of CopSSH will install both the server and client
                   software on the user's PC. The user may want to disable the CopSSH
                   server service, as only the client software is required. This service
                   is called Openssh SSHD Service.
SSH Settings       There are various approaches that can be used to enable SSH
                   communication. The following descriptions are intended to give an
                   example of one approach which was employed in our testing process.
                   For technical details, the user should refer to the SSH software
                   manual.
                   The ultimate objective of the SSH settings is to allow the user to
                   invoke a remote simulation by issuing a one-line "ssh" command at a
                   local command console.
For example, the following command line should directly start the IMEX with
the dataset "dataset1.dat".
ssh –l username_on_server simsvr d:\cmgexe\imex.exe –
f d:\cmgdata\dataset1.dat
where the “simsvr” is a remote Windows computer.
The above settings ensure that $HOME/.kshrc is read when SSH logs in.
If $HOME is a directory shared among UNIX machines and those hosts need
different CMG variable settings, the shell startup file can include a case
command to account for the different hosts.
For example,
case $(hostname) in
simsvr | simsvr1 ) LD_LIBRARY_PATH=/usr/cmg/imex/Linux_x64/lib:/opt/intel/fce/9.1.036/lib;
                   LSFORCEHOST=lserv;
                   export LD_LIBRARY_PATH LSFORCEHOST;;
aixsvr | aixsvr1 ) LSFORCEHOST=lserv; export LSFORCEHOST;;
esac
Samba Mount        If the work folder/directory is located on a drive under a UNIX
                   platform, and the user notices an unreasonable delay or even a hang
                   in the alternating text-file communication between IMEX and
                   RESOLVE / GAP, there can be file-sharing problems caused by Samba
                   “opportunistic file locking”. The workaround is to switch off the
                   file locking for all CMG - RESOLVE communication files.
                   That is, in the Samba configuration file (e.g. /etc/smb.conf), add
                   the line:
                   veto oplock files = /*.LSImex/*.LSResolve/*.LSGem/*.LDImex/*.LDResolve/*.LDGem/
                   Restart the Samba mount. The above switch should be applied for those
                   mount points where the work folder may be located.
Working folder     An issue has been identified in which RESOLVE is not able to read
on a remote        data from IMEX or GEM under Linux when the working folder (i.e. used
disk               for communication between RESOLVE and the simulator) is physically
                   located and mounted on a different machine to the machine which is
                   performing the calculations.
It has been found that in some situations the problem does not need the
workaround if the "local path to work folder" is mounted on the calculation
machine (linux_cpu), even if this in turn is mounted on the file server. In
other words, in the above example the local path would have the form
"//linux_cpu/usr/data/cmg/simdata".
When running the CMG simulators (IMEX and GEM), RESOLVE is able to communicate directly
to the head node of a Linux cluster, which is then able to distribute the simulation jobs to the
nodes of that cluster. The distribution can be performed by LSF or some other distribution tool
of the user's choosing. The default is LSF.
The following section describes the setup of this scheme. For simplicity, IMEX and GEM are
referred to collectively as the CMG simulators, or CMG for short.
Contents:
Overview
Installation
Administering the lxresolve.exe daemon
2.5.6.6.1 Overview
lxresolve.exe. This is the daemon that runs on the head node of the cluster.
lxresolve_client_cmg.exe. This is the client program distributed from the head node.
Currently, these components are built only for the Red Hat Linux operating system,
versions 4 and 5.
These programs are installed on a central ia32 or x86_64 server. Simulation runs can then be
spawned on the same server or any other appropriately configured Linux computer on the
cluster.
The simulation models communicate with Resolve using the standard protocol for that simulator.
This is 'file transfer' for the CMG simulators.
2.5.6.6.2 Installation
This section assumes that the person performing the installation has a working knowledge of
the Linux environment. The following steps require the user to be logged in as ‘root’ on the Linux
machine.
To do this, the command ‘mkdir lx-ipm12’ can be used under the </home/
developer> directory.
Copy this file across to the target directory on the Linux machine.
cd lx-ipm12
gunzip lx_resolve.tar.gz
This will unpack the stated file and place the file lx_resolve.tar in the installation
directory. The archive file is then extracted with the tar command.
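The gunzip/tar sequence can be exercised end-to-end in a scratch directory; a stand-in archive is created below so the commands actually run, whereas a real installation uses the lx_resolve.tar.gz copied across in the earlier step:

```shell
# Scratch directory standing in for /home/developer/lx-ipm12:
rm -rf /tmp/lx-ipm12-demo && mkdir -p /tmp/lx-ipm12-demo && cd /tmp/lx-ipm12-demo
# Stand-in archive so the commands below can be exercised here:
echo demo > install.sh
tar -czf lx_resolve.tar.gz install.sh && rm install.sh
# The sequence from the installation step:
gunzip lx_resolve.tar.gz     # leaves lx_resolve.tar in the directory
tar -xf lx_resolve.tar       # extracts the installation files (install.sh etc.)
```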
STEP 5 INSTALLATION
As ‘root’ run the installation by entering the command:
./install.sh
The installation will prompt the user for various configuration settings. These
settings are described below:
OPTION 0: EXIT
OPTION 1: INSTALL SOFTWARE ONLY
OPTION 2: SETUP CONFIGURATION
OPTION 3: DO EVERYTHING
If the configuration files are to be freshly written by the installer, select OPTION 2.
Select OPTION 3 if the objective is to do both.
If the installation of the Linux executables is being performed for the first time, it is
strongly advised that Option 3 be selected. This will guide the person performing
the installation towards setting up the Linux environment as expected.
For the sake of this example we shall ask the installer to install the software AND
set up the configuration files. Thus the option to select will be <Do Everything>,
which is option number 3.
The lx_resolve.tar.gz file that was installed in STEP 2 above is a tape
archive (tar) file (like a zip file) containing the components that are
required to be installed on the Linux machine. In this tar file, there are
two versions of the installation files available.
Depending upon the option selected in Step 11, the installer will
unpack the correct version of the files and place them in the installation
directory. The procedure to install the components is the same for both
architectures.
There are three configuration files that are created during the installation process:
a) .lxresolve_users --> sets up queues or other command line options for the
distribution software (e.g. LSF), as well as setting up individual user permissions
and mappings between Windows user names and Linux user names. This is
described in more detail here.
LSF can be set up for the distribution of simulation jobs. This is described in more
detail here. If this is to be used, it should be renamed '.lxresolve_cmd' (without the
trailing '_').
Before continuing to the next step it is important that the configuration files are
configured correctly. If LSF is to be used, it will be necessary to configure the
.lxresolve_users file.
There are many ways to configure the file, but this should be sufficient to get
started.
STEP 10 LAUNCH THE PROCESS.
The user must be in the appropriate directory where the installation has been
done. For this example it is </home/developer/lx-ipm12>.
./lxresolve.exe 7777 0
In this case, any processes that the daemon launches (such as the simulation job)
will be run under the user account that was obtained from the contents of the
.lxresolve_users file described above.
If the above command line is run from a non-root account, the daemon will
terminate with an error. However, it is possible to run under a user account by
appending the -nosuid option to the command line:
In this case, jobs spawned from the daemon will be run under the account under
which the daemon was run; in other words, the daemon will not attempt to perform
a setuid.
The first argument is the port number on which RESOLVE will communicate with
the daemon.
The second argument is an LSF timeout in minutes. If LSF is not being used, this
will be ignored and the timeout will be fixed at 20 seconds. If it is set to '0' (as
above) then the software will wait indefinitely for LSF to start the job.
The above-mentioned command line will start the lxresolve.exe process.
Alternatively, this file may be included as a service on the Linux machine.
Under the 'cluster' section, LSF (head node daemon) should be selected. The
head node (on which lxresolve.exe is running) and the port number (7777 in the
above example) should be entered.
Should you require any further information, please send an email to edinburgh@petex.com.
The first line should be '0' if there are no special, user-independent options. Otherwise, LSF
command line options can be entered as per the examples below.
The following lines consist of pairs of users, to allow lxresolve.exe to map a Windows user
(from RESOLVE) to a Linux user (which will run the simulation job).
The third argument of each line should be '1' (to allow access to the Windows user) or '0' (to
deny access).
The fourth argument consists of further LSF options for the user in question. If none are given
('*') then those options given in the first line will be applied, if present.
Although LSF is discussed here, the options supplied in this file will also be applied to any
other software used to distribute the tasks, as supplied in the .lxresolve_cmd file (see below).
Examples:
-q resolve
winuser1 linuxuser1 1 -m linux1 linux2 -q priority
winuser2 linuxuser2 1 *
The RESOLVE user ‘winuser1’ will submit the simulation as an LSF job under the
‘linuxuser1’ account. When this user submits a job, the parameters ‘-m linux1 linux2 -q
priority’ will be appended to the bsub command.
The RESOLVE user ‘winuser2’ will run as ‘linuxuser2’ on Linux. On submission, default
parameters (-q resolve, for submission to a ‘resolve’ queue) will be appended.
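The actual parsing of this file is internal to lxresolve.exe; as a sketch of the mapping semantics described above (the file contents match the examples, while the lookup function itself is illustrative only):

```shell
# A .lxresolve_users-style file: first line holds default LSF options,
# following lines map <Windows user> <Linux user> <access flag> <options>.
cat > /tmp/lxresolve_users.demo <<'EOF'
-q resolve
winuser1 linuxuser1 1 -m linux1 linux2 -q priority
winuser2 linuxuser2 1 *
EOF

lookup_linux_user() {
  # $1 = Windows user; prints the mapped Linux user when the access flag is 1
  awk -v u="$1" 'NR > 1 && $1 == u && $3 == 1 { print $2 }' /tmp/lxresolve_users.demo
}

lookup_linux_user winuser1   # prints: linuxuser1
```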
0
* * 1 *
This is the file that is generated by the installer. It allows all users to submit jobs under
their Linux accounts, with no special arguments to the bsub command. If the Windows
account does not have the same name as the Linux account, the run will fail.
If present, lxresolve.exe will read the contents of the file and use the command line specified
when starting a simulation job (instead of the LSF 'bsub' command). The dummy file generated
consists of the line:
/usr/bin/ssh
Clearly, ssh requires a specific computer to be named, which defeats the object of load
balancing. However, as an illustration, the command line arguments for ssh can be appended in
this file, e.g.
Alternatively and more logically, the command line arguments can be added to the
.lxresolve_users file:
0
WinDave LinuxDave 1 -l LinuxDave linux64
and so on.
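As a sketch of how the daemon would assemble the final command (the composition below is an assumption consistent with the description, not the daemon's actual internals):

```shell
# Command taken from .lxresolve_cmd, per-user options from .lxresolve_users:
CMD="/usr/bin/ssh"                 # contents of .lxresolve_cmd
USER_OPTS="-l LinuxDave linux64"   # fourth-argument options for WinDave
# The daemon would run something of the form:
echo "$CMD $USER_OPTS <simulation command>"
```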
2.5.7 Connecting to Intersect
This help section documents the links to the Intersect (IX) reservoir simulator, which
is developed by Schlumberger.
Communication with Intersect is done via the MPI protocol, which should be installed on
the computer or cluster where the simulator is running.
The table below explains the main functions used in this window.
Head node Name of the Head node on Linux side where the Intersect simulation job will
be submitted.
Port Linux daemon communication port number.
Configuration The button displays Intersect driver configuration window which can be
configured/reconfigured without going to Drivers section.
Intersect executable The executable used for the run can be entered in this field.
For local runs, if it is left blank, then the default path entered in the Driver
Configuration will be used.
DataSet This is the Intersect dataset for the run.
This may be specified with a full UNC definition: indeed, a cluster run in
which the target node is not known would have to have the dataset defined in
this way so that the same path is "seen" from all nodes. A remote run should
have a dataset defined for the remote (target) computer; again, it may be
convenient to use a UNC path which can be browsed to from the button to
the right.
IPR model The following options are available for selection:
EclRun Launch options When running Intersect via RESOLVE, it will soon be possible to use the
Schlumberger-developed ECLRun. This option will be available in the 2020.3
release.
Use IPM tokens Allows the user to select a parameter in the input data deck and replace it
with user-defined values. This gives the user flexibility to perform sensitivity
analysis via CaseManager, SensitivityTool, and Probabilistic data objects.
Control mode The following options are available for selection:
Combined GAP sets both gas and liquid rate constraints when a
well has significant amounts of each phase present, but
automatically reverts to single phase control when one
phase is clearly dominant (such that we do not impose a
constraint for a trivial phase).
Gas GAP sets a gas rate constraint.
Liquid GAP sets a liquid rate constraint.
Rate (dominant phase) Uses GAP’s definition of the well to set the rate control
type: typically single phase control.
Close unconnected wells The check box will make sure that non-connected wells of the simulators are
shut.
If the box is not checked, then non-connected wells will follow the simulator
control and schedule.
The only requirement is that 'EXTENSION "ExternalController" ' should be added as part of the
SIMULATION section of the deck.
Click on Edit variables. The following screen is displayed; note that the well names will appear
only when the model has been loaded in RESOLVE. Select the well name and desired property
and click on Add to publish the variable. On the right hand side, use the check boxes to delete
selected variables.
The properties listed above are included by default. It is possible to include additional IX well
properties for reporting. In order to do this, go to the IX driver configuration screen. Properties
can be added under 'Additional exposed properties'. The properties which can be added are
documented in the Intersect User Guide in section 5.3: Field, Group, Well and Connection
Properties.
The property added will appear under the Variables | Import Application Variables dialog
and can then be imported in RESOLVE.
Note: The Intersect variables available in RESOLVE, as with all third party applications, are
those which are exposed by the API.
2.5.7.4 Tokens for Intersect
A new functionality, Tokens for reservoir simulators, was introduced in IPM 12. It allows the
user to select a parameter of a generic reservoir model and replace it with user-defined
values. This is done from the RESOLVE interface by replacing the value of the parameter in
the input data deck. Once the value is entered, it is used for running the model; this gives the
user the flexibility to perform sensitivity analysis via CaseManager, SensitivityTool, Probabilistic
Data Objects, @Risk, Sibyl, and Optimisation.
In the case of Intersect, once the model is loaded, the use of Tokens can be enabled by clicking
the “Use IPM tokens” functionality, as shown below. After this, the tokens can be set up in the
data deck by using either "Setup tokens" functionality or “IPM Tokens Editor”. The latter
appears on the screen when you right click on the Intersect icon.
Once the data deck is accessed from the tokens editor, Tokens can be created by highlighting
the value of the parameter and then clicking “Create new token”. This value will be replaced by
~MyToken~ which can then be used in the Visual Workflow.
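The substitution RESOLVE performs on the deck can be illustrated as follows; the parameter name and value here are made up for the example:

```shell
# Stand-in deck line containing a token created as described above:
echo 'WellPI ~MyToken~' > /tmp/deck.demo
# At run time, RESOLVE substitutes the user-defined value for the token:
sed 's/~MyToken~/100/' /tmp/deck.demo    # prints: WellPI 100
```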
Tokens can be used in the VisualWorkflow via Intellisense OpenServer strings. This is
demonstrated in the example below where the value of ~MyToken~ is defined to be 100 in the
Assignment element (“MyToken”).
Before installing the IPM components on the Linux machine/cluster, the appropriate
Intel MPI installation should have been made and tested. Refer to the documentation
on these applications.
In addition, if LSF or PBS is to be used to distribute the Eclipse jobs, then this should
be installed and tested before the IPM components are installed.
2.5.7.5.1 Overview
To allow the connection, a single service, or daemon, must be running on a target Linux
computer. This service, ixrunfactory.exe, sits on a port (p) which must be opened through any
firewall. When a user requests a simulation, Resolve broadcasts the details of the required run
through the port to the service.
Once the service has the required information, it starts the simulator by executing a bash shell
script.
The script may be written by the user. However, three scripts are provided which should cover
most users’ requirements. The scripts are as follows:
1. ixmpi_dir.sh. These start the simulations directly, i.e. on a given computer as specified in the
Resolve interface, through a call to mpirun.
2. ixmpi_lsf.sh. These perform an LSF bsub job submission to launch another script
(ixmpi_lsf_runner.sh), which in turn starts the simulations through the same mpirun command
as (1). In other words, these scripts are responsible for LSF distribution of the simulation
jobs.
3. ixmpi_pbs.sh. As for (2), except using PBS as the distribution mechanism rather than LSF.
The scripts can be edited by the user (e.g. if a different distribution mechanism was required)
but obviously care would be needed.
The scripts are responsible for starting as many instances of the simulation executable as are
required by the model parallelisation, and then returning to Resolve the host on which the
models are running.
Communication to the simulation models from Resolve is then made via a thread spawned from
the ixrunfactory.exe process. This communication is carried out through a comms port (o) that
will be referred to below.
2.5.7.5.2 Installation
The run factory should normally be run under the root account (although the exception to this
should be noted, below). Examples follow with a listing of possible arguments:
./ixrunfactory.exe -p 8900
1. -p <port number>. Specify the port number (p, above) to which Resolve will broadcast the run
information.
2. -u. Run under the current user account, rather than root. This will not take account of the
contents of the .ix_resolve_users file, described below.
3. -o. Start port (o, above) for communication with the eventual simulation run. It defaults to
9001. The port used is incremented from this value for every run which takes place.
4. -n. Number of free ports to use from the base port (-o). After this number of runs it cycles
back to the base port. It defaults to 100.
5. -s <script>. This should be a fully qualified path to the bash script described above. If nothing
is supplied, it defaults to ixmpi_dir.sh.
6. -t <timeout>. The time (in seconds) that Resolve will wait after the processes have been
initialised for a connection to be made to the simulator. The default is 60 seconds, which
should be ample.
7. -l. Generate a communication log file in the simulation model data directory.
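The port cycling described by the -o and -n arguments can be sketched as below; the modular arithmetic is an assumption consistent with the description, not the factory's actual implementation:

```shell
# Base comms port (-o default) and the number of free ports (-n default):
BASE=9001; NPORTS=100
# Port used for the Nth run (0-based); cycles back to the base port:
port_for_run() { echo $(( BASE + $1 % NPORTS )); }

port_for_run 0      # prints: 9001
port_for_run 99     # prints: 9100
port_for_run 100    # prints: 9001 (back to the base port)
```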
Configuration Files
There are two configuration files:
1. ix_resolve_env. This allows environment variables to be set up which are then passed on to
the child simulation processes as they are started. It is, for example, a convenient location for
the LM_LICENSE_FILE definition. The file should consist of a number of lines representing
variable-value pairs, e.g.
LM_LICENSE_FILE 27000@PETEXLICENSE02
2. ix_resolve_users. This file is not read if the -u option is selected above. It allows the mapping
from Windows user IDs to Linux user IDs, e.g.
dave developer 1
* * 0
Wildcards (‘*’) are allowed. The last number is 1 to allow a connection, 0 otherwise. The
example above maps the user ‘dave’ on Windows to ‘developer’ on Linux, and prevents any
other users from connecting.
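How the run factory consumes the ix_resolve_env pairs is internal to ixrunfactory.exe; a minimal sketch of the described behaviour (the file path below is a stand-in), where each variable-value pair becomes an environment variable for the child simulation processes:

```shell
# Stand-in for the ix_resolve_env file, one variable-value pair per line:
cat > /tmp/ix_resolve_env.demo <<'EOF'
LM_LICENSE_FILE 27000@PETEXLICENSE02
EOF
# Export each pair into the environment, as the factory would for its
# child processes:
while read -r name value; do
  [ -n "$name" ] && export "$name=$value"
done < /tmp/ix_resolve_env.demo
```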
Debug Logging
The run factory processes output their logs to /var/log (if they have write access) or /tmp (if they
do not).
IX_MPI_RUN. The path to mpirun. This MUST BE SPECIFIED either in the _env file or directly in
the scripts.
IX_EXEC. The path to ix.exe. This MUST BE SPECIFIED either in the _env file or directly in the
scripts.
The path to the boost shared libraries (included in the installation files) should be included in the
LD_LIBRARY_PATH environment variable.
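For example (the unpack location /opt/resolve/boost/lib is an assumption for illustration; use wherever the installation files were placed):

```shell
# Prepend the boost library directory to LD_LIBRARY_PATH, preserving any
# existing value:
export LD_LIBRARY_PATH=/opt/resolve/boost/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```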
2.5.7.6 Integrate Intersect on Windows cluster with PxSub
Intersect can be run on a Windows cluster using the PxCluster. To illustrate this feature we will
use an example from the Intersect release 2018.2 called Cartesian partition. For this illustration,
let us make a copy of this example file in a shared directory and map the PxCluster to the
network location (please see the section on how to map your PxCluster to network drives).
Please ensure that the PxCluster service is running.
The first step is to set up an instance of WinRunFactory.exe from the IPM 12 release. This is
done as follows:
The WinRunFactory is now listening on port 9884 (an arbitrary choice) for incoming
communication from Resolve. When this communication occurs, the WinRunFactory.exe will run
ResolveIntersectPxClusterStarter.bat that is contained within the IPM 12 release.
On double-clicking the Intersect icon, the path to the model and the port number should be
supplied to the instance, along with the cluster head node.
Clicking ‘start’ will send a TCP communication from Resolve [1] to the WinRunFactory on port
9883 (in this case); the batch script is then run on the current node, which is usually (but not
required to be) the head node of the PxCluster. The script uses PxSub.exe to submit the
Intersect job to the PxCluster. The job is contained within another batch script called
ResolveIntersectPxClusterRunner.bat which sets up the Intersect executable using Intel MPI on a
cluster node [2]. This script also populates a ‘.hosts’ file with the name of the running node,
which is reported back to the WinRunFactory.exe on the head node [3] along with the
communication port to the Intersect job running on the cluster node. The WinRunFactory then
reports the communication socket of the running job back to Resolve [4] so that the application
can interface with the job running on the remote node [5].
The WinRunfactory.exe will print out the command line that it is sending to the PxCluster head
node and will wait until the .hosts file is populated with the name of the running node. This could
take time if the PxCluster is running other jobs in the job queue.
We can see the job is running on a remote machine using the PxCluster management console.
After a short period, we should see Resolve has opened the Intersect job successfully. In our
example, the Resolve client, WinRunFactory.exe and the PxCluster head node are all running on
the same node <EDI-ENG-NL2>.
To terminate the communication, double click on the Intersect icon and click Stop. The job will
then finish on the cluster.
The batch files in the IPM 12 release can be edited (right-click edit) and modified to target
different versions of Intersect if necessary. The files contain a walk-through commentary of what
they do and can be supplemented with information in this document.
The driver configuration screen can be invoked from the Drivers | Register drivers RESOLVE
menu item, by double-clicking on the Nexus entry or selecting the entry and clicking
"Configure".
The driver’s configuration holds the setup data required to run any case file. Note that "mpiexec"
is optional, as the path to the executable is generally detected automatically.
The input sections are self-explanatory and in most cases can be left at their default values. The
Nexus model is defined by inputting the path to the Case file and the name of the Study file,
which should be located in the same folder.
Please note that the Nexus Case file path should be defined without spaces in folder or file
names; otherwise the file will not be detected.
Combined GAP sets both gas and liquid rate constraints when a
well has significant amounts of each phase present, but
automatically reverts to single phase control when one
phase is clearly dominant (such that we do not impose a
constraint for a trivial phase).
Rate (dominant phase) Uses GAP’s definition of the well to set the rate control
type: typically single phase control.
Grid Items This defines those grid items that will be available to Resolve during the
simulation run.
Clicking the Set Grid Items button displays the following screen:
To allow the connection, a single service, or daemon, must be running on a target Linux
computer. This service, nexusrunfactory.exe, sits on a port (p) which must be opened through
any firewall. When a user requests a simulation, Resolve broadcasts the details of the required
run through the port to the service.
Once the service has the required information, it starts the simulator by executing a bash shell
script.
The script may be written by the user. However, three scripts are provided which should cover
most users’ requirements. The scripts are as follows:
1. nexusmpi_dir.sh. These start the simulations directly, i.e. on a given computer as specified in
the Resolve interface, through a call to mpirun.
2. nexusmpi_lsf.sh. These perform an LSF bsub job submission to launch another script
(nexusmpi_lsf_runner.sh), which in turn starts the simulations through the same mpirun
command as (1). In other words, these scripts are responsible for LSF distribution of the
simulation jobs.
3. nexusmpi_pbs.sh. As for (2), except using PBS as the distribution mechanism rather than
LSF.
The scripts can be edited by the user (e.g. if a different distribution mechanism was required)
but obviously care would be needed.
The scripts are responsible for starting as many instances of the simulation executable as are
required by the model parallelisation, and then returning to Resolve the host on which the
models are running.
Communication to the simulation models from Resolve is then made via a thread spawned from
the nexusrunfactory.exe process. This communication is carried out through a comms port (o)
that will be referred to below.
2.5.8.3.2 Installation
The run factory should normally be run under the root account (although the exception to this
should be noted, below). Examples follow with a listing of possible arguments:
./nexusrunfactory.exe -p 8900
./nexusrunfactory.exe -p 8900 -u -m /opt/resolve/nexusmpi_pbs.sh
PSim is the proprietary reservoir simulator of ConocoPhillips (COP). The following pages are
therefore of interest only to COP users and third parties licensed by COP.
PSim can run in black-oil or fully compositional modes. In addition, it has an EXTEND capability
so that the simulation can run with a smaller subset of components than that passed to GAP.
The communication mechanism between PSim and RESOLVE is through mpich, which is the
MPI implementation of the Argonne National Laboratory. Details on its setup can be found
under the "Setup and Configuration of mpich" section.
The PSim driver was originally developed by COP, but is now developed by Petroleum
Experts in collaboration with the PSim development team. Support requests should come
initially to Petroleum Experts, who will forward the information and request assistance from
COP as it is required.
The PSim driver is included (with permission from COP) in the IPM distribution, and as such
should be registered by RESOLVE by default.
If it is not, then the Drivers | Register drivers menu item can be used to register the DLL. The
DLL name will be PSimLinkxx.dll, where "xx" is the current IPM version number.
The target of the current coupling between PSim and RESOLVE is the use of GAP as a
surface network optimisation model as well as an additional well management tool for PSim.
The assumption underlying this work is that users embarking on this endeavour wish to use
GAP as the sole well management tool and will specify any well management instruction in
GAP. Hence, it is assumed that all well management instructions are only communicated to
PSim rather than being placed in the simulation input deck directly. Although the above is not
enforced and deviations are possible, the documentation provided in this section needs to be
read from this perspective.
It needs to be emphasised that the coupling of PSim to other tools through RESOLVE is fully
explicit in nature, i.e. at synchronisation time the decision on flow rates for the coming time
period, which may be larger than the next PSim timestep, is purely based on an IPR which is
exclusively constructed from data available at the end of the previous time period. Therefore,
changes in the reservoir behaviour occurring between two synchronisation times will not be
reflected in the IPR until the next update. The implications of this assumption need to be well
understood, as they may have several effects which can easily be misinterpreted as “bugs” or
errors made by any one of the tools involved.
Explicit coupling, and therefore its limitations, is at the very centre of the assumptions
underlying the overall IPM approach; this is not limited to PSim but applies to any reservoir
simulator.
The overall logic outlined below aims at minimizing the effort required to adjust or modify
existing PSim simulation decks. The only non-standard keyword requirement is the presence of
the NETWORK keyword.
Psim simulation decks must be prepared / modified before they are loaded into RESOLVE and
the RESOLVE project is started.
Keywords in Table 1 will be ignored or overridden by the operating point data received
from GAP. It is assumed that limits (e.g. QGLMAX) which can be specified through
these keywords will be set in GAP by the user. If no limits for bottom hole pressure and
tubing head pressure are received from GAP, PSim will default these values to 14.6 psi
for producers and 20,000 psi for injectors respectively, regardless of previous
definitions in the PSim deck.
Wells under GAP / RESOLVE control will automatically be detached from platforms
defined in PSim decks. In consequence, these wells will be unaffected by keywords
specified in Table 2. Note that wells which are not controlled by GAP / RESOLVE will
still be under platform control. The detachment logic does not change the limits
specified through platform related keywords. However, detaching wells will reduce the
number of wells producing against these limits. Note that the automatic detachment of
wells also applies to “cosmetic platforms”.
Keywords in Table 3 specify global limits of different fluid rates for production and
injection. Global control is disabled selectively. E.g., if any of the wells under GAP /
RESOLVE control is a producer all global limit(s) affecting production will be
deactivated. A similar logic holds for gas and water injection separately.
FIELDLM
Table 3: Keywords selectively deactivated
A very limited number of keywords (Table 4) cannot be used in runs coupled to GAP /
RESOLVE.
PLATPRS
Table 4: Non-permissible keywords
Other keywords which do not directly interfere with control through GAP / RESOLVE
such as SHUTGLR, SHUTGOR or SHUTWC are unaffected by the coupling. Still, they
should be used with care and optimally all of them should be transferred into the GAP
equivalent control.
Combining PSim Well Management Keywords and Control through GAP / RESOLVE
In general this is not a recommended practice.
However, in certain cases it may be desirable to allow control of certain wells by PSim and
other wells by GAP / RESOLVE, e.g., cases where water injection is handled by PSim while
production wells are controlled by GAP / RESOLVE.
Any attempt to combine PSim and GAP well management in one model needs to be developed
considering the keyword disabling logic outlined in section “PSim Well Management Keywords
and Control through GAP / RESOLVE” above.
Special care needs to be taken with respect to keywords presented in Table 2.
The THP limits provided with the THP card will be overridden by the operating point data sent
by GAP / RESOLVE. The presence of RATE cards is not required.
New decks prepared specifically for IPM runs should not contain additional keywords. As
outlined above, all additional well control keywords for coupled wells will be ignored.
… Static Data
ENDINIT
C ---------------------------------------------------------
C WELL CONTROL
C ---------------------------------------------------------
WELL
I J K PI
PROD1
10 10 3 10.614
PROD2
10 1 1 10.614
10 1 2 10.614
10 1 3 10.614
PROD3
1 10 3 10.614
PROD4
3 3 3 10.614
GINJ1
1 1 1 10.614
GINJ2
6 7 1 10.614
WELLTYPE
PROD1 STBOIL
PROD2 STBOIL
PROD3 STBOIL
PROD4 STBOIL
GINJ1 MCFINJ
GINJ2 MCFINJ
THP
PROD1 250 1
PROD2 250 1
PROD3 250 1
PROD4 250 1
Compositional Simulation
Coupling between PSim and RESOLVE is not limited to black oil cases. If the number of
components specified in the PSim deck does not match the number of components in GAP
model the EXTEND keyword can be used to lump / delump production and injection streams.
Please refer to the NETWORK and EXTEND keywords in the PSim manual.
Restart Runs
The same guidelines – including the use of the NETWORK keyword - presented in the “General
Guidelines” section still hold for runs starting from a restart. Note that if extended compositions
are to be used the EXTEND keyword must already be present in the initial run used to generate
the restart.
2.5.9.2.3 Driver configuration
This screen governs global settings which affect how all PSim - RESOLVE models run.
It is invoked from the "Register drivers" screen, either by double-clicking on the driver entry in
the list, or highlighting the entry and clicking on Configure.
Local executable / Run executable These are settings that are applicable to either Windows or Linux runs.
The "Load executable" is the location of the submodel shell script.
The "Run executable" is the location of the subrun shell script.
Default PSim version This is the default version of PSim that will be run if no version number is
entered in the "Data entry screen".
MPI debug modes By default these should both be switched off, as if activated they will
compromise the performance of the run.
The "MPI server" is an application that sits between RESOLVE and PSim,
and acts as a conduit for the MPI messages passed between the two.
Normally it is hidden, but it can be displayed along with the contents of the
messages. In addition, the messages that are passed can be dumped to a log file, the
path of which must be supplied in the field at the bottom of the screen.
The mpich software can be obtained directly from Petroleum Experts or COP upon request.
1. WINDOWS installation/configuration
MPICH2 Installation
STEP 1 The installation of MPICH2 (version 1.0.5p2) on PCs is facilitated by an msi
installer.
The file required for the installation of MPICH2 is distributed with the PSim
installation files.
2. Linux installation/configuration
The .cshrc file must have the paths set properly to identify the gcc compiler,
e.g.
setenv CC gcc
setenv CFLAGS "-m32"
This will complete the installation. To start the smpd process manager on the node, the following command can be used:
smpd -start
Before it is possible to set up a PSim model in RESOLVE, it will be necessary to have the driver
"Registered" with the application. This should normally have been carried out automatically.
To set up a PSim model in RESOLVE, select Edit System | Add Client program | PSim and click somewhere on the RESOLVE screen. A PSim icon should appear, which can be given an appropriate label for the field in question.
Double-clicking on this icon yields the main data entry screen:
PSim case details: These settings enable the user to specify the model that is to be run under RESOLVE.

File name: This is the PSim model to be run.

Host method: This can be set to 'specified computer', in which case the machine name (below) will have to be specified, or 'cluster head node' to allow the link to connect to a head node daemon running on a Linux cluster. More information on these options can be found in the section on remote runs of PSim.

Head node / port: If 'cluster head node' is specified above, then these fields should contain the name of the Linux head node (running the lxresolve daemon) and the port on which the daemon is waiting.

Machine: If 'specified computer' is set under 'host method', then this is the machine on which PSim is to run. If the machine is Linux, then the "Linux node" box should be checked.

Directory name / Automatically generate directory names: When PSim runs it writes temporary and post-processing files to a sub-directory of the run directory. There are two options to define this sub-directory:
- The sub-directory name can be declared here explicitly. If "myruns" were entered into the "directory name" field in the above example, PSim would write files to the "c:\projects\examples\PSim\gaslift\myruns" directory. The directory is created automatically at the start of the run if it does not already exist.
- In the above case, every run that is made will write the output data to the same location, and existing data will be overwritten. If this is not appropriate, it is possible to force RESOLVE to automatically generate a new directory for each run that is performed. In this case, the directory will have a name of the form "<reservoir-label>_<date>_<time>".

Local path to directory name: If the path to the data file is set for a remote machine (e.g. a Linux cluster), then this can be used (optionally) to map the remote path to the corresponding Windows path which is visible from the local workstation. This allows the model to be checked in and integrated with the IFM tool and the model catalogue.

PSim version: The version number of PSim must be entered here. This is the postfix of the PSim executable name (e.g. model_2006.00.01.14.exe for the 2006.00.01.14 version). The version number can be entered globally for all runs in the "Configuration screen"; if something has been entered there, it will be displayed next to the version field.

Restart information: This is the name of the restart file, if required. It is not obligatory to enter anything here for a non-restarted run.

Action buttons:
Well controls: Invokes the "Well controls" screen.
Advanced: Invokes the screen for "Advanced options".
Executable paths: Invokes a screen that enables the user to specify the paths to the PSim executable. Executable paths defined in this section will override those set in the "Configuration screen".
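The automatically generated run-directory naming described above can be sketched as follows. This is a minimal illustration only: the label "GasField" and the date/time formats are assumptions, since the exact format RESOLVE uses is not specified in this manual.

```shell
# Build a directory name of the form <reservoir-label>_<date>_<time>.
# The label and the %Y%m%d / %H%M%S formats are illustrative assumptions.
label="GasField"
run_dir="${label}_$(date +%Y%m%d)_$(date +%H%M%S)"
mkdir -p "$run_dir"   # created automatically if it does not already exist
echo "$run_dir"
```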
IPRs passed from the reservoir model to the surface network model can be generated in different ways: the PSim links to GAP use two models for the generation of the IPR which is passed on to GAP, as described below. Some of these techniques provide realistic inflow performance definitions, whereas others provide unrealistic ones, often leading to instabilities in the full-field model results.
The choice of the IPR generation option used is crucial to the behaviour and accuracy of the model, and IPR generation techniques are a subject of constant development and testing. Although further details on IPR generation are provided in the "IPR Generation" section, it is important to note that the most advanced, and recommended, option is the Corrected option.
The IPR generation options available when connecting PSim to GAP are the following:
The IPR generation options available when connecting PSim to GAP are the following:
Block: This is the original model in which the IPR is the inflow referred to the well block pressure.
The block IPR is that generated by the simulator based on the block pressure of the well and the mobility of the phases. It is the same IPR as would be "seen" by the simulator at one iteration of its solver; a reservoir simulator would then iterate on the inflow when it solves its timestep. Use of the block IPR, therefore, leads to a fully explicit formulation which can be unstable.

Corrected: This is an improved model which is proprietary to Petroleum Experts.
The corrected IPR attempts to reduce the explicitness of the system by calculating an IPR which is more representative of the performance of the well over the timestep. To use this model, preliminary well test calculations need to be performed.
When this option is selected on the main data entry screen, a button appears which indicates whether the pre-calculation has been performed.
The "Calculate" button will perform the calculation for all wells. To choose a subset of the wells (for example, if many of the wells are not connected), the "Wells" button invokes a screen in which wells can be selected:
The wells to be included in the calculation should be selected in the left hand
pane, and the arrow button will add the selected wells to the calculation set.
The "Tuning" button displays some parameters which can be used to tune
the calculations. These parameters should only be changed on the advice of
Petroleum Experts.
Normally, the system will be equilibrated by setting all well rates to zero for a fixed period. This can clearly take some time. If the system is already equilibrated at startup, then this step can be skipped by using the check button at the top of the screen.
This screen allows the user to change the control modes of individual wells from the global
setting of the "Main data entry screen".
It consists of a list of the wells, which can be filtered to select only those wells that are connected in RESOLVE.
To change the control mode of a well (or group of wells), it/they should be selected from the list. A selection can then be made from the drop-down list at the top of the screen, and the "Select" button will change the control modes of the requested wells.
This screen can be used to make advanced settings to influence how PSim is run.
Add small timestep...: This forces PSim to make a small timestep immediately after rate data has been sent to the simulator (i.e. immediately after GAP has solved/optimised). This forces a synchronisation between the results of PSim and GAP, i.e. the pressures and rates will be consistent regardless of the explicitness of the coupling. This can have an adverse effect on the performance of the model, as PSim has to build its timestep up again from the small value sent from RESOLVE after every synchronisation.
Other functions for the PSim link can be accessed by right-clicking on the PSim icon, or by
navigating to the "Program functions" item of the main menu.
View data file: This will display the PSim data file (.dek file) in its own window. The file can be searched by pressing <Ctrl-F> to invoke a search panel. Clearly, the data file can only be displayed if it is "visible" to the local machine, i.e. if the run is local or the data file is mounted on a shared drive.

View log file: Similarly, the log file generated as a result of the run can be displayed and searched. Once again, this is only possible if the log file is visible to the RESOLVE machine.
GAP places the extraction point at the top of the top perforation while
PSim places it at the middle of the first perforation (where “first” refers to
the sequence in which perforations have been defined). This difference
can be corrected through the WELLHEADZ keyword.
The error tends to increase as PSim moves further away from the last
synchronisation time.
For each timestep a reservoir simulator takes it uses a new IPR to generate
results. Reservoir simulation timesteps typically occur at a much higher
frequency than the synchronisation (update) of data with RESOLVE.
RESOLVE passes updated IPRs to GAP only at synchronisation times and
GAP bases its decision on flow rates for the coming time period on this
“snapshot” data. As the IPR in the reservoir simulator is repeatedly changed
with every timestep the wells in the reservoir simulator will not be able to
produce at exactly the operating point prescribed by GAP / RESOLVE, i.e.,
they will not be able to honor oil, gas and water flow rates or bottom hole
pressures simultaneously over the period between synchronisation times.
E.g., if gas breaks through between synchronisation times GAP / RESOLVE
will remain unaware of the event until the next synchronisation. Although the
reservoir simulator now produces gas, the fact that gas is now being
produced will not be accounted for in any downstream application, e.g.
HYSYS until the next update. This behaviour leads to a “loss” of volumes, i.e.,
the reservoir simulator produces volumes (which are correctly reported in its
output) but these volumes are never communicated to the tools sitting
downstream in the workflow.
The main purpose of this error table is to help users manage the error from explicit coupling, to help make better decisions about the required frequency of synchronisation, and to support the decision process for whether the advanced, welltest-based IPR calculation implemented in RESOLVE should be used, and for which wells.
Post-processing of IPM Results: Results from the current RESOLVE project can be imported into CView and compared to actual PSim results reported in the *.pltdat file. All CView tools of the line plot functionality (e.g. VCM) can be used for imported IPM data.
Multi-PC Runs: RESOLVE projects containing multiple PSim models can be run distributed over several PCs in the same network. The following requirements need to be satisfied:
- The user launching the IPM run needs to have "administrator" or "power user" privileges on each machine designated for running a PSim module.
- MPICH2 needs to be installed and running on each PC designated for running a PSim module.
- PSim needs to be installed on each PC designated for running a PSim module.
- The directory containing the PSim data needs to be located on the remote PC and mapped as a drive on the master PC (the PC running RESOLVE). After the run finishes, the result data will reside on the remote PC.

Linux Runs: PSim models in a RESOLVE project can be run on Linux clusters. The only requirement that needs to be satisfied is that the MPICH2 daemon must be running on each node (i.e. smpd -start must have been run). The connection can either be made directly (using MPICH2 to go across the Windows-Linux platforms) or by connecting to a daemon on the head node of the cluster, which can then distribute the job using LSF or some other load-balancing software.
PSim can be run on remote machines connected to the RESOLVE Windows PC on a network.
Linux and Windows architectures for the remote machine are supported.
The communication protocol used between RESOLVE and PSim is mpich. This must be
configured appropriately on the RESOLVE machine and all machines running PSim (whether
Windows or Linux).
More information on the installation and configuration of mpich for Windows or Linux can be found in the installation/configuration sections above.
The connection to Linux can be made directly, in which case the setup of MPICH is the only necessary task. Alternatively, it is possible to communicate directly with the head node of a cluster, which can then distribute the PSim tasks across the cluster through LSF or some other load-balancing software. The setup of this more versatile configuration is described below.
When running PSim, RESOLVE is able to communicate directly with the head node of a Linux cluster, which is then able to distribute the simulation jobs to the nodes of that cluster. The distribution can be performed by LSF or some other distribution tool of the user's choosing. The default is LSF.
Contents:
Overview
Installation
Administering the lxresolve.exe daemon
2.5.9.6.1 Overview_3
lxresolve.exe.
lxresolve_client_psim.exe. This is the client program distributed from the head node.
Currently, these components are built only for the Red Hat Linux operating system, versions 4 and 5.
These programs are installed on a central ia32 or x86_64 server. Simulation runs can then be
spawned on the same server or any other appropriately configured Linux computer on the
cluster.
The simulation models communicate with Resolve using the standard protocol for that simulator:
MPICH2 for PSim.
2.5.9.6.2 Installation_3
This section assumes that the person performing the installation has a working knowledge of
the Linux environment. The following steps require the user to be logged in as ‘root’ on the Linux
machine.
An installation directory must first be created. To do this, the command ‘mkdir lx-ipm10’ can be used under the </home/developer> directory.
Copy this file across to the target directory on the Linux machine
cd lx-ipm10
gunzip lx_resolve.tar.gz
This will unpack the stated file and place the file lx_resolve.tar in the installation directory. The archive can then be extracted using the command:
tar -xvf lx_resolve.tar
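The unpack sequence in this step can be demonstrated end-to-end with a stand-in archive. This is a sketch only: the real lx_resolve.tar.gz is distributed by Petroleum Experts, so a dummy archive with the same name is created here purely for illustration.

```shell
# Work in a scratch directory standing in for /home/developer/lx-ipm10.
demo=$(mktemp -d) && cd "$demo"
# Create a stand-in lx_resolve.tar.gz containing a dummy install.sh:
echo '#!/bin/sh' > install.sh
tar -cf lx_resolve.tar install.sh && gzip lx_resolve.tar && rm install.sh
# The unpack commands from the manual:
gunzip lx_resolve.tar.gz      # leaves lx_resolve.tar in the directory
tar -xvf lx_resolve.tar       # extracts the installation files
ls install.sh                 # the extracted file is now present
```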
STEP 5 INSTALLATION
As ‘root’ run the installation by entering the command:
./install.sh
The installation will require entering the various configuration settings from the
user. These configuration settings are described as follows
OPTION 0: EXIT
OPTION 1: INSTALL SOFTWARE ONLY
OPTION 2: SETUP CONFIGURATION
OPTION 3: DO EVERYTHING
The configuration files may already exist (e.g. from a previously installed version). If this is the case, the objective would be to install the software only, which is Option 1. The config files will not be written with this option.
If the configuration files are to be freshly written by the installer, Select OPTION 2.
Select OPTION 3 if the objective is to do both.
If the installation of the Linux executables is being performed for the first time, it is
strongly advised that Option 3 be selected. This will guide the person performing
the installation towards setting up the Linux environment as expected.
For the sake of this example we shall ask the installer to install the software AND set up the configuration files. Thus the option to select will be <Do Everything>, which is option number 3.
The lx_resolve.tar.gz file that was installed in STEP 2 above, is basically a tape
archive (tar) file (like a zip file) which has some files stored in it. These files are
the components that are required to be installed on the Linux Machine. In this
tar file, there are two versions of the installation files available.
Depending upon the option that is selected in this step, the installer will unpack the correct version of the files and place them in the installation directory. The procedure to install the components is the same for both types of architecture.
There are three configuration files that are created during the installation process:
a) .lxresolve_users --> sets up queues or other command line options for the
distribution software (e.g. LSF), as well as setting up individual user permissions
and mappings between Windows user names and Linux user names. This is described in more detail below.
b) .lxresolve_cmd --> optionally specifies the command line to be used to start a simulation job instead of the LSF 'bsub' command (see below).
c) .psim_client_ports --> sets up the tcp/ip ports that are to be used in connecting
the client program 'lxresolve_client_psim.exe' with Resolve on the Windows PC.
More information can be found under the section which deals specifically with
PSim configuration.
Before continuing to the next step it is important that the configuration files are
configured correctly. If LSF is to be used, it will be necessary to configure the
.lxresolve_users file and the .psim_client_ports file.
There are many ways to configure the file, but this should be sufficient to get started.
STEP 10 LAUNCH THE PROCESS.
The user must be in the appropriate directory where the installation has been
done. For this example it is </home/developer/lx-ipm10>
./lxresolve.exe 7777 0
In this case, any processes that the daemon launches (such as the simulation job)
will be run under the user account that was obtained from the contents of the
.lxresolve_users file described above.
If the above command line is run from a non-root account, the daemon will terminate with an error. However, it is possible to run under a user account by appending the -nosuid option to the command line:
./lxresolve.exe 7777 0 -nosuid
In this case, jobs spawned from the daemon will be run under the account under
which the daemon was run; in other words, the daemon will not attempt to perform
a setuid.
The first argument is the port number with which Resolve will communicate with
lxresolve. This must be opened through any firewall.
The second argument is an LSF timeout in minutes. If LSF is not being used, this
will be ignored and the timeout will be fixed at 20 seconds. If it is set to '0' (as
above) then the software will wait indefinitely for LSF to start the job.
The above-mentioned command line will start the lxresolve.exe process. Alternatively, this executable may be included as a service on the Linux machine.
The host method should be set to 'Cluster head node'. The head node (on which
lxresolve.exe is running) and the port number (7777 in the above example) should
be entered.
In the case of PSim, the location of the executable on the Linux file system can be
entered through the 'Executable paths' button on this screen, or defaults can be
entered by configuring the PSim driver.
The first line should be '0' if there are no special, user-independent options. Otherwise, LSF
command line options can be entered as per the examples below.
The following lines consist of pairs of users, to allow lxresolve.exe to map a Windows user
(from RESOLVE) to a Linux user (which will run the simulation job).
The third argument of each line should be '1' (to allow access to the Windows user) or '0' (to
deny access).
The fourth argument consists of further LSF options for the user in question. If none are given
('*') then those options given in the first line will be applied, if present.
Although LSF is discussed here, the options supplied in this file will also be applied to other software used to distribute the tasks, as supplied in the .lxresolve_cmd file (see below).
Examples:
-q resolve
winuser1 linuxuser1 1 -m linux1 linux2 -q priority
winuser2 linuxuser2 1 *
The RESOLVE user ‘winuser1’ will submit the simulation as an LSF job under the ‘linuxuser1’ account. When this user submits a job, the parameters ‘-m linux1 linux2 -q priority’ will be appended to the bsub command.
The RESOLVE user ‘winuser2’ will run as ‘linuxuser2’ on Linux. On submission, the default parameters (-q resolve, for submission to a ‘resolve’ queue) will be appended.
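The mapping that lxresolve.exe performs over this file can be sketched as a simple lookup. This is a non-authoritative illustration of the format described above (first line: default LSF options; subsequent lines: Windows user, Linux user, allow flag, extra options); the daemon's actual parsing may differ.

```shell
# Recreate the example .lxresolve_users file from above:
cat > .lxresolve_users <<'EOF'
-q resolve
winuser1 linuxuser1 1 -m linux1 linux2 -q priority
winuser2 linuxuser2 1 *
EOF
# Look up the Linux account and access flag for a given Windows user name.
lookup() {
  awk -v u="$1" 'NR > 1 && $1 == u { print $2, $3 }' .lxresolve_users
}
lookup winuser2    # prints: linuxuser2 1
```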
0
* * 1 *
This is the file that is generated by the installer. It allows all users to submit jobs under
their Linux accounts, with no special arguments to the bsub command. If the Windows
account does not have the same name as the Linux account, the run will fail.
In this case there are five possible ports to be used by the controller: 13900 - 13904. Each
instance of PSim that is spawned by the run factory will use one of the ports named here.
Note that the lxresolve_client_psim program cannot start unless it manages to read this file.
Note also that the ports specified in the above file, and the listening port that is used by lxresolve.exe, should be opened through any firewall.
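The contents of the .psim_client_ports file are not reproduced in this manual. Assuming one port per line (an assumption, not confirmed here), a file covering the range above might look like:

```
13900
13901
13902
13903
13904
```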
If present, lxresolve.exe will read the contents of the file and use the command line specified
when starting a simulation job (instead of the LSF 'bsub' command). The dummy file generated
consists of the line:
/usr/bin/ssh
Clearly, ssh requires a specific computer to be named, which defeats the object of load
balancing. However, as an illustration, the command line arguments for ssh can be appended in
this file, e.g.
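A hypothetical example line for the .lxresolve_cmd file is shown below; the user and host arguments mirror the .lxresolve_users example nearby and are placeholders only:

```
/usr/bin/ssh -l LinuxDave linux64
```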
Alternatively and more logically, the command line arguments can be added to the
.lxresolve_users file:
0
WinDave LinuxDave 1 -l LinuxDave linux64
and so on.
2.5.10.1 Overview
The TempestLink driver allows RESOLVE to add a Tempest simulator into its overall system.
The driver provides the means for RESOLVE to query and control the simulator and to store/
restore its data.
This driver follows the approach of other RESOLVE simulator drivers. The major data items (and a limited number of run-time controls) also support OpenServer, and the driver provides support for file streaming to allow for storing/restoring simulator data.
In order to communicate with Tempest, RESOLVE talks through the Rex TCP/IP interface
provided by Roxar. Thus Resolve only needs to know the machine/IP address Rex is running on
and a specific port number (agreed with that particular Rex instance) for communication to/from
Rex (and thus to/from Tempest).
The following control options are available to RESOLVE for any Tempest case loaded:
The driver configuration screen can be invoked from the Drivers | Register drivers RESOLVE
menu item, by double-clicking on the Tempest entry or selecting the entry and clicking
"Configure".
The driver’s configuration holds the setup data required to run any case file. Note that the
second section is only required when Tempest is run on the Local Machine; if Tempest is run on
a Linux cluster, the data in this second section will have no effect on that run.
When the Tempest version in use is 8.4 or above, additional environment variables, PYTHONHOME and REX_83_TYPES, have to be created in Windows under the User variables section. Examples of these variables and the values assigned to them are shown below.
In addition, update the Path variable under the User variables section by entering an additional value, %PYTHONHOME%.
If the version of Tempest is prior to 8.4, then the “Pre-8.4 compatibility mode” option has to be enabled.
2.5.10.3 Loading and editing Tempest case details
Below is an example of the Tempest case details dialog. Note that there are some obvious
spaces present in the layout: this is intentional, as these spaces hold hidden fields that become
visible only when necessary, but do not cause layout changes when doing so.
Some of the text shown is dynamically assigned to better guide users in the use of this dialog,
as will be described below.
Combined when multiple phases present: GAP sets both gas and liquid rate constraints when a well has significant amounts of each phase present, but automatically reverts to single phase control when one phase is clearly dominant (such that we do not impose a constraint for a trivial phase).

Driven by GAP well type: Uses GAP’s definition of the well to set the rate control type: typically single phase control.

Driven by Gas-to-Oil Ratio: Uses single phase rate control, either gas or liquid, according to their ratio compared to a user threshold. When this option is selected, the dialog provides an additional field for the user to set their desired threshold.
Grid Items: This defines those grid items that will be available to Resolve during the simulation run. Selecting the Set Grid Items button will prompt the following screen:
Configure / Reconfigure: The bottom left button “Configure” indicates that the current setup still requires to be configured; pressing this button will open the Configuration dialog to do so (as will pressing Start or OK, since without a valid configuration the case file cannot be used).
To allow the connection, a single service, or daemon, must be running on a target Linux
computer. This service, tempestrunfactory_lsf.exe , sits on a port (p) which must be opened
through any firewall. When a user requests a simulation, RESOLVE broadcasts the details of
the required run through the port to the service.
Once the service has the required information, it starts the simulator by executing a bash shell
script.
The script may be written by the user. However, scripts are provided and they should cover most users’ requirements.
tempestmpi_direct.sh. This starts the simulations directly, i.e. on a given computer as specified
in the RESOLVE interface, through a call to mpirun.
tempestmpi_lsf.sh. This script is responsible for the LSF distribution of the simulation jobs.
The scripts can be edited by the user (e.g. if a different distribution mechanism was required)
but obviously care would be needed.
2.5.10.4.2 Installation
The run factory should normally be run under the root account (although the exception to this
should be noted, below). Examples follow with a listing of possible arguments:
./tempestrunfactory.exe -p 8900
./tempestrunfactory.exe -u -p 8900 -d
2. -p. <listen_port> . Specify the port number (p, above) to which RESOLVE will broadcast the
run information.
3. -t. <lsf-timeout (mins)>. The time (in seconds) that RESOLVE will wait after the processes
have been initialised for a connection to be made to the simulator: -1 for an infinite wait
(default).
7. -lsf. Additional LSF command line options (if present, must be last).
Configuration File
There is a single configuration file which contains all the required environmental variables. It can
be accessed via vim .tempest_resolve_users.
This allows environment variables to be set up which are then passed on to the child simulation
processes as they are started. It is, for example, a convenient location for the
LM_LICENSE_FILE definition. The file should consist of a number of lines representing
variable-value pairs, e.g.
LM_LICENSE_FILE 27004@PETEXLICENSE01
The configuration file must contain the variables mentioned below; if any of them are missing, they have to be added to the file.
LD_LIBRARY_PATH - The path to the boost shared libraries (included in the installation files).
The other two variables, PYTHONHOME and REX_83_TYPES, have to be provided in the configuration file when the Tempest version in use is 8.4 or above, e.g.
REX_83_TYPES 1
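Putting these together, a complete .tempest_resolve_users file for Tempest 8.4 or above might look like the following sketch. All paths are placeholders to be adapted to the actual installation; only the LM_LICENSE_FILE and REX_83_TYPES values are taken from this manual:

```
LM_LICENSE_FILE 27004@PETEXLICENSE01
LD_LIBRARY_PATH /opt/petex/boost/lib
PYTHONHOME /opt/roxar/tempest/python
REX_83_TYPES 1
```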
The Tempest case in RESOLVE should be configured as shown below. On this screen, the
location of the Tempest model is on a Linux machine.
This driver follows the approach of other RESOLVE simulator drivers. The major data items (and a limited number of run-time controls) also support OpenServer, and the driver provides support for file streaming to allow for storing/restoring simulator data.
The following control options are available to RESOLVE for any tNavigator case loaded:
Full OpenServer access is available to all the input parameters (e.g. target rates,
schedule, execution, etc.)
tNavigator OutPut results (e.g. phase rates, bottom hole pressure, etc.) are
automatically published by RESOLVE for the given file
The driver’s configuration holds the setup data required to run any case file. For local runs it is only required to provide the path to the "tNavigator-con.exe" file. Other parameters will be filled in automatically or can be left blank.
Below is an example of the tNavigator case details dialog. Note that there are some obvious
spaces present in the layout: this is intentional, as these spaces hold hidden fields that become
visible only when necessary, but do not cause layout changes when doing so.
Some of the text shown is dynamically assigned to better guide users in the use of this dialog,
as will be described below.
Wells
Combined when multiple phases present: GAP sets both gas and liquid rate constraints when a well has significant amounts of each phase present, but automatically reverts to single phase control when one phase is clearly dominant (such that we do not impose a constraint for a trivial phase).

Driven by GAP well type: Uses GAP’s definition of the well to set the rate control type: typically single phase control.

Driven by Gas-to-Oil Ratio: Uses single phase rate control, either gas or liquid, according to their ratio compared to a user threshold. When this option is selected, the dialog provides an additional field for the user to set their desired threshold.
IPR table spacing: This parameter determines how rate values in the IPR table will be spaced. Select from the following options:
The default option ("Bias to low rates") will provide a table where points will be concentrated at low rates.
Configure / Reconfigure: The bottom left button “Configure” indicates that the current setup still requires configuration. Selecting the "Start" or "OK" buttons in this state will prompt a message and the tNavigator driver configuration window, asking the user to configure the driver.
Once the data deck is accessed from the tokens editor, tokens can be created by highlighting the value of the parameter and then clicking “Create new token”. This value will be replaced by ~MyToken~, which can then be used in the Visual Workflow.
Tokens can be used in the VisualWorkflow via Intellisense OpenServer strings. This is
demonstrated in the example below where the value of ~MyToken~ is defined to be 500 in the
Assignment element (“MyToken”).
The following information is required for RESOLVE to successfully communicate with tNavigator
on Linux:
For more information on installation and configuration of tNavigator and dispatcher on Linux
please refer to the tNavigator user guide or RFD technical support.
This driver follows the common approach of other RESOLVE simulator drivers. The major data items also support OpenServer, and the driver provides support for file streaming to allow for storing/restoring simulator data.
The following control options are available to RESOLVE for any Echelon case loaded:
The input sections are self-explanatory and in most cases would be left at their defaults. The Echelon model is defined by inputting the path to the Case file (Dataset). This path should be defined without spaces in folder or file names; otherwise the file will not be detected.
The table below explains the main functions used in this window.
Combined: GAP sets both gas and liquid rate constraints when a well has significant amounts of each phase present, but automatically reverts to single phase control when one phase is clearly dominant (such that we do not impose a constraint for a trivial phase).

Rate (dominant phase): Uses GAP’s definition of the well to set the rate control type: typically single phase control.

Close unconnected wells: The check box will make sure that non-connected wells of the simulators are shut. If the box is not checked, then non-connected wells will follow the simulator control and schedule.
In the Echelon data deck, make sure that the INTERFAC keyword is used in the Schedule
section. This will initiate the interface of the simulation and will allow RESOLVE to control the
simulation process. An example of how this key word is used in a data deck is shown below.
SCHEDULE
...
INTERFAC
'SLAVE' 'SOCKET' 27184 transcript.bin interfac.log /
...
END
The SLAVE keyword refers to the mode and indicates that RESOLVE will drive the simulation by setting well controls, advancing to the specified time step, etc.
The port number "27184" gives the port for the socket which controls the message transfer
between applications.
The name "transcript.bin" refers to the file which is used for writing a binary transcript of
received messages for debugging purposes.
Another file named "interfac.log" is used to record received requests.
Further information on the keywords used in the Echelon data deck can be found in the Echelon User Guide.
2.5.12.4 Remote Linux Run
2.5.12.4.1 Overview
To allow the connection, a single service, or daemon, must be running on a target Linux
computer. This service, echelonrunfactory.exe, sits on a port (p) which must be opened through
any firewall. When a user requests a simulation, RESOLVE broadcasts the details of the
required run through the port to the service.
Once the service has the required information, it starts the simulator by executing a bash shell
script.
The script may be written by the user; however, the following script is provided and should cover most users' requirements: echelon_dir.sh. This starts the simulation directly, i.e. on a given computer as specified in the RESOLVE interface, through a call to mpirun.
The script can be edited by the user (e.g. if a different distribution mechanism is required), but care should be taken when doing so.
The script is responsible for starting as many instances of the simulation executable as are
required by the model parallelisation, and then returning to RESOLVE the host on which the
models are running.
Communication to the simulation models from RESOLVE is then made via a thread spawned from the echelonrunfactory.exe process. This communication is carried out through a communication port that is referred to below.
2.5.12.4.2 Installation
The run factory should normally be run under the root account (although note the exception to this, described below). An example follows, with a listing of the possible arguments:
./echelonrunfactory.exe -p 8900 -o 12000 -s
1. -p <port number>. Specify the port number (p, above) to which RESOLVE will broadcast the run information.
2. -u. Run under the current user account, rather than root. This will not take account of the contents of the .echelon_resolve_users file, described below.
3. -o. Start port (o, above) for communication with the eventual simulation run. It defaults to 9001. The port used is incremented from this value for every run which takes place.
4. -n. Number of free ports to use from the base port (-o). After this number of runs it cycles back to the base port. It defaults to 100.
5. -s <script>. This should be a fully qualified path to the bash script described above. If nothing is supplied, it defaults to echelonmpi_dir.sh.
6. -t <timeout>. The time (in seconds) that RESOLVE will wait after the processes have been initialised for a connection to be made to the simulator. The default is 60 seconds, which should be ample.
7. -l. Generate a communication log file in the simulation model data directory.
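The behaviour of the -o and -n options can be sketched as follows. This is an illustrative assumption based on the description above, not code from the run factory itself:

```python
# Sketch of the assumed port-allocation rule for -o (base port) and
# -n (number of free ports): each run takes the next port up from the
# base, cycling back to the base after -n runs.
def next_port(base: int, free: int, run_number: int) -> int:
    """Port assigned to the run_number-th run (0-based)."""
    return base + (run_number % free)
```

With the defaults (-o 9001 -n 100), run 0 uses port 9001, run 99 uses port 9100, and run 100 cycles back to 9001.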
Configuration Files
There are two configuration files:
1. echelon_resolve_env. This allows environment variables to be set up which are then passed
on to the child simulation processes as they are started. It is, for example, a convenient
location for the LM_LICENSE_FILE definition. The file should consist of a number of lines
representing variable-value pairs, e.g.
LM_LICENSE_FILE 27001@PETEXLICENSE01
2. echelon_resolve_users. This file is not read if the -u option is selected above. It allows the mapping from Windows user IDs to Linux user IDs, e.g.
dave developer 1
*    *         0
Wildcards ('*') are allowed. The last number is 1 to allow a connection and 0 to deny it. The example above maps the user 'dave' on Windows to 'developer' on Linux, and prevents any other users from connecting.
Debug Logging
The run factory processes output their logs to /var/log (if they have write access) or /tmp (if they
do not).
Environment Variables
ECHELON_MPI_RUN. The path to mpirun. MUST BE SPECIFIED either in the _env file or directly in the scripts.
ECHELON_EXEC. The path to echelon.exe. MUST BE SPECIFIED either in the _env file or directly in the scripts.
The path to the boost shared libraries (included in the installation files) should be included in the
LD_LIBRARY_PATH environment variable.
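As a sketch, the same definitions could be made directly in the bash script rather than in the _env file. All paths below are hypothetical and must be adjusted for the local installation:

```shell
# Hypothetical paths - adjust for the local installation.
export ECHELON_MPI_RUN=/opt/openmpi/bin/mpirun   # path to mpirun
export ECHELON_EXEC=/opt/echelon/echelon.exe     # path to echelon.exe
# Boost shared libraries shipped with the installation files:
export LD_LIBRARY_PATH=/opt/echelon/boost/lib:$LD_LIBRARY_PATH
```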
2.5.13 Connecting to RN-KIM
2.5.13.1 RN-KIM overview
The RN-KIM link driver has been developed by Petroleum Experts. It establishes communication between RESOLVE and the RN-KIM reservoir simulator, which is produced by RN-BASHNIPINEFT. The syntax of the RN-KIM input file is based on Eclipse syntax. The driver allows RESOLVE to query and control the simulator and to store/restore its data.
This driver follows the common approach of other RESOLVE simulator drivers. The major data items also support OpenServer, and the driver provides file streaming to allow simulator data to be stored and restored.
The following control options are available to RESOLVE for any RN-KIM case loaded:
"Configure". The configuration screen contains three main fields: the executable path for RN-KIM, additional command-line arguments, and the communication port range, as shown below.
The input sections are self-explanatory and in most cases can be left at their default values. The RN-KIM model is defined by entering the path to the case file (Dataset). The path must not contain spaces in folder or file names; otherwise the file will not be detected.
The table below explains the main functions used in this window.
Combined - GAP sets both gas and liquid rate constraints when a well has significant amounts of each phase present, but automatically reverts to single-phase control when one phase is clearly dominant (so that no constraint is imposed for a trivial phase).
Gas - GAP sets a gas rate constraint.
Liquid - GAP sets a liquid rate constraint.
Rate (dominant phase) - Uses GAP's definition of the well to set the rate control type: typically single-phase control.
The coupling between GAP and RN-KIM is based on lock files which, when present, signal GAP that a new task is available. Two python scripts are responsible for the RN-KIM side of the synchronisation:
petex_utils.py contains the definition of the class responsible for the synchronisation;
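As an illustration only (the actual implementation lives in petex_utils.py and is not reproduced here), a minimal lock-file handshake might look like this; the file name and timings are hypothetical:

```python
import os
import time

# Illustrative sketch of a lock-file handshake: the peer creates the
# lock file to signal a new task; this side waits for it and removes
# it as an acknowledgement. Not the actual petex_utils.py code.
def wait_for_task(lock_path: str, poll: float = 0.5, timeout: float = 60.0) -> None:
    """Block until the lock file appears, then remove it."""
    deadline = time.time() + timeout
    while not os.path.exists(lock_path):
        if time.time() > deadline:
            raise TimeoutError(f"no task signalled via {lock_path}")
        time.sleep(poll)
    os.remove(lock_path)
```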
The python script which manages the synchronisation process must be registered in the input
file in the PYTHON section of the data deck. A specific entry point for the synchronisation with
GAP, INTEGRATOR_SSTEP_BEGIN, is provided. An example of the PYTHON section in the
data deck is shown below.
PYTHON
INTEGRATOR_SSTEP_BEGIN 'N:\Testing\RN-KIM\RN-KIM-DRIVER-13-08-2020\solver\test_scripts\DriverTests\Prod-Inj\rn_kim_petex_sync.py' /
/
2.5.14 Connecting to LedaFlow
2.5.14.1 Overview
The LedaFlow module allows the user to connect to the dynamic flow modeling package
LedaFlow. This section describes the connectivity and functions of LedaFlow that can be
utilised via RESOLVE.
The figure below shows a comparison between the LedaFlow module (left) and LedaFlow data
object (right):
It is also possible to register the LedaFlow driver manually by going to Drivers | Register Drivers, clicking the "Register" button in the displayed window and then browsing for the "LedaFlowLink90.dll" file, which should be located in the IPM installation folder.
Once registered, the LedaFlow driver does not require any further adjustments.
2.5.14.3 Loading and editing LedaFlow case
Double-clicking on the LedaFlow module or data object in RESOLVE will display a dialog window which consists of three tabs:
Model
Execution
Results
These are described below.
Model file definition - LedaFlow dump file with the *.ldm extension containing the case in question.
Load - This button loads the *.ldm file into the RESOLVE model and reads its pipes, nodes and other components, which are then listed in the window.
The Execution tab defines controls for the LedaFlow case and has the following inputs:
The Results tab allows the user to display the results in a 3D plot (Length-Time-Parameter),
table form or as a slice in time or location.
All the visual workflow variables are prefixed with the ModuleName of the LedaFlow case (as given in the RESOLVE model); e.g. if the LedaFlow case is named "Riser_1", then the file name for this module is referred to as:
Riser_1.FileName
ModuleName.Simulation.CurrentSimTime
ModuleName.Model. ...
ModuleName.Model.Controllers[name/index]. ...
Most of the parameters below are self-explanatory. For details on how they are used as part of the model, please refer to the LedaFlow manual.
ModuleName.Model.Nodes[name/index]. ...
ModuleName.Model.Pipe[name/index]. ...
ModuleName.Model.Pipe[name/index].Pumps[name/index].Results. ...
ModuleName.Resolve. ...
ModuleName.Model.Pipe[name/index].Results. ...
ModuleName.Model.Pipe[name/index].Wells[name/index].Results.
The Results keyword is then followed by additional keywords referring to specific parameters, with an index indicating the timestep, e.g.
... .Results.Density.GasDensity[timestep_index].
The last layer of the results always consists of the following 4 keywords:
.Mesh[n] - Mesh result; [n] is the index of the mesh cell. In some cases the mesh will consist of a single component only, e.g. controller results.
.MeshSize - Size of the mesh cell.
.Time - Time in seconds to which the selected [timestep_index] refers.
.Val - Value of the parameter at the first mesh cell.
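As a convenience, strings of this form can be assembled programmatically. The helper below is purely illustrative; the module name, pipe index and parameter path are assumptions following the pattern described above:

```python
# Hypothetical helper assembling a LedaFlow OpenServer result string
# following the pattern described above.
def leda_result_string(module: str, pipe_index: int, timestep_index: int) -> str:
    """Gas-density result string for a pipe, ending with the .Val keyword."""
    return (f"{module}.Model.Pipe[{pipe_index}].Results."
            f"Density.GasDensity[{timestep_index}].Val")
```

For example, leda_result_string("Riser_1", 0, 10) gives "Riser_1.Model.Pipe[0].Results.Density.GasDensity[10].Val".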
Operation Name:
Load a given LedaFlow model into the object
Unload a LedaFlow model
Description:
These operations are designed to load the LedaFlow dump file into the LedaFlow simulator for
further communication with it or unload the model if all operations with it are finished.
Inputs:
Outputs:
Return Value:
Description:
This operation initialises the LedaFlow model and sets boundary conditions for the dynamic simulation.
Inputs:
Outputs:
Return Value:
Description:
This operation allows running the LedaFlow simulation to a specified point in time.
Inputs:
LedaFlow.Simulation.CurrentSimTime + 100
Outputs:
Return Value:
Description:
These operations allow saving a restart file at the end of the simulation, or using an existing restart file to initialise the model.
Inputs:
Outputs:
Return Value:
Description:
These operations are designed to export LedaFlow results into a DataSet. "Export time data" outputs time-dependent results for a particular equipment element; "Export profile data" outputs results along the pipeline at the last timestep of the simulation.
Inputs:
Outputs:
Return Value:
The return is a [DataSet] object with results. A variable with User Type [DataSet] should be created before exporting the data.
Description:
This operation allows dynamic input of PVT properties into the loaded LedaFlow model.
Inputs:
Composition only:
EOS [PxFlashCom] - Name of the PxFlash object storing the compositional data.
PvtMethod [LedaFlowCompPVTType] - Sets the PVT input method. Choose between "Table" and "Guts". See the LedaFlowCompPVTType enumeration.
PctWater [Double] - Water-cut
Outputs:
Return Value:
Description:
This operation allows dynamic input of IPR data for a well in the LedaFlow model.
Inputs:
(psig)
massrates - field Mass rate values (lbm/day)
Outputs:
Return Value:
The Hysys driver allows instances of Hysys to be opened and controlled from RESOLVE.
Once the driver is registered, it will be possible to create instances of Hysys in RESOLVE, as
per the workflow described below.
Create an instance of Hysys from the "Create Instance" menu on the RESOLVE main
screen. Click on the RESOLVE main display screen to create a Hysys icon.
Load a pre-modeled case of Hysys by double-clicking on the created icon and then
browsing to the case in question. Note that it is possible to run Hysys on a remote
computer over a network. To do this, DCOM security must be set up on the registered
Hysys component on the remote computer.
Click on the OK button. An instance of Hysys will be created in the background that will
load the required case.
The driver will now determine the "sources" and "sinks" of the Hysys model for display
on the RESOLVE screen.
These correspond to the input streams and output streams of the model.
The Hysys streams can now be connected in RESOLVE to other sources and sinks
from other applications.
Note that the streams in Hysys are "uni-directional". This means that a single data point is passed by the program that supplies the data to Hysys, and Hysys returns no data back. The streams can only be connected to other uni-directional sources/sinks.
RESOLVE only extracts the basic black-oil results from each connection.
If it is necessary to set up additional variables from Hysys to monitor in the RESOLVE
results, right click on the Hysys icon in RESOLVE, and click on "Output variables".
As many cases of Hysys as the user wishes can be loaded into RESOLVE at one time.
When the case is run in RESOLVE, data will be passed to the connected input stream(s) in
Hysys. This data will consist of a single point of pressure, temperature, mass flow rates, and
black oil and compositional data. This data will be set at the input stream.
The compositional data is set in Hysys by poking the data into the fluid package. This means
that all streams that access the same fluid package will be affected. If more than one input
stream is connected through RESOLVE, and these share a fluid package, then the
compositional data of that fluid package will effectively be written twice, once for each stream.
This is only a problem if the input streams arise from different sources with different properties.
It is possible to set up different fluid packages for different streams in Hysys, but then a stream
cutter (or some mechanism for mapping components between fluids) will be necessary in the
Hysys model.
Once the data has been passed, Hysys is allowed to solve the system. Data for the monitored
variables are extracted. Also, the data for any output streams that are connected in RESOLVE
are generated. Black oil data (that may be required for connection to black oil applications) is
obtained from a series of flash calculations.
The Hysys driver can be configured to run a particular instance of Hysys if more than one
installation of Hysys is present on the user machine.
To reach the configuration screen, first invoke Drivers | Register Drivers from the main
RESOLVE menu. It is then possible to highlight the entry for Hysys in the driver list and click on
the configure button.
The version of Hysys (pre- or post- 2004) must be selected. This is because the programming
interface to Hysys was changed between these versions.
For pre-2004 versions of Hysys it may be necessary to define the path to the Hysys.exe file. For more recent versions the path to the executable of the selected Hysys version will be retrieved from the registry. If the installed version of Hysys is not available in the drop-down list, it can be typed manually as shown in the snapshot above.
Transfer data - This defines the way PVT data is passed to Hysys by the connected compositional application (i.e. generally GAP). Three options are available:
Advanced Options - If this section is selected, the following will be displayed, allowing access to these settings:
Force solver timeout - In some cases Hysys may fail to solve and can enter a loop from which it never returns. In these cases RESOLVE never regains control of the application, and the only way to finish the run is to kill the processes from the Windows Task Manager. If this is a possibility, it may be reasonable to enter a "timeout" time in this field to force Hysys to return after a certain time interval.
Leave interface open during solves - This keeps the interface open during the solves.
Debug Logging - This creates a debug logging file to help troubleshoot the run if issues arise.
Add Output Streams - This invokes a screen that allows additional output (source) icons to be added to the RESOLVE interface, representing internal plant streams. This can be useful for exporting results from internal streams to an Excel spreadsheet for reporting or other purposes. This button is disabled until a model has been loaded into RESOLVE.
RESOLVE is able to build a list of input and output variables for every operation, stream, sub-
flowsheet, and column in a Hysys model. This list is used in several different ways:
To export variables into RESOLVE for "Event driven scheduling" and / or "Scenario
management".
For output of additional variables from Hysys in the RESOLVE "Reporting" section.
The RESOLVE "OpenServer" uses the list to allow access to Hysys variables from a
third-party application (e.g. an Excel spreadsheet)
Select the Hysys section to access the Hysys variables and click on Edit Variables.
The list down the left hand side is common to several screens in the Hysys link. It consists of a
list of the streams and operations (and sub-flowsheets and column flowsheets, if present).
When one of these items is expanded, it displays a list of the variables that are supported by the
item in question.
This list is generated directly from the programming interface of Hysys. As items are added to this interface by Aspentech, they will automatically appear when the list is generated; no changes to the RESOLVE software are necessary.
In order to publish one of these variables, select the variable to consider in the left hand list and
click on the red arrow. The variable will automatically be passed into the list of published
variables on the right hand side of the screen, as displayed above.
These are accessible by right-clicking on the Hysys icon in the RESOLVE graphical view, or
using the "Program Functions" item of the main menu.
Most of these functions are described in the "GAP other functions" section.
Add Hysys correlation property - See the "Add Hysys Correlation Property" section below.
Rebuild variable list - This option reloads the list of variables that can be exported from the Hysys model into RESOLVE. This makes it possible, for instance, to publish a variable that has just been added in Hysys without having to re-open the entire RESOLVE model.
2.5.15.5.1 Optimisation
The link to Hysys supports the optimisation functionality implemented in RESOLVE. These
screens illustrate how to set up an objective function, control variables, and constraint equations
in the Hysys model.
An example of this usage would be if the user had a GAP (surface network) model connected to
a Hysys model. It is possible to have control variables and constraints in the GAP model (i.e.
these could be well choke settings or lift gas injection quantities) and constraints and an
objective function in the Hysys model (i.e. compressor duty and maximisation of molar flow at a
stream). The RESOLVE optimiser allows these to be distributed between different models.
The screens to set up control variables, constraints, and an objective function can all be accessed by right-clicking on the Hysys icon in RESOLVE and selecting the "Optimiser Setup" section.
To set an objective function, check the box at the top of the screen. A list of
variables will be displayed for each stream and equipment item for each
flowsheet in the model (as for the control variable setup). The variable to
optimise on can be selected in this section. When a variable is selected, the
display panel below the list will display the name of the variable and the
Hysys unit of that variable.
In a similar way, constraints on plant variables can be set with the second
tab. Again the variable list can be browsed for the variable to select as a
constraint. The variable label and unit will be displayed in the panel below
the variable list.
The list on the left hand side is a hierarchical list of all the spreadsheets,
streams, and equipment in the model. If an item is expanded (e.g. Feed1 in
the above screenshot) a list of all the variables that can be changed for that
particular item will be displayed.
In the above example, a single optimiser control variable has been selected.
This is the pressure of a stream ("To reboiler") which flows into a
fractionating column in the plant model. The units (kPa) have been picked up
automatically from Hysys. The user has imposed a range on the variable of
1000 kPa to 2000 kPa.
To delete a variable from the list, click the delete button on the right hand
side of the grid next to the variable in question
This screen is invoked by right-clicking on the Hysys icon and selecting "Output variables".
Variables can be selected from the list on the left hand side. Once selected, they can be added
to the list of additional (reported) variables by clicking on the "Add" button.
Items can be deleted from the reported list by highlighting them and clicking the "Remove"
button.
If a parent item is deleted, then all children of that item will also be deleted.
The reported variable list can be cleared completely by clicking the "Clear" button.
2.5.15.5.3 Scheduling
This functionality is redundant now that the "Event driven scheduling" has been implemented in
the RESOLVE application itself. This is retained for backwards compatibility only.
The hierarchical list at the top of the screen contains the variables that can be changed for each item in the model.
To schedule a variable change - Browse to the variable that has to change in the schedule. In the above example, the variable "PercentVapourValveOpening" of the equipment "V-101" has been selected. The equipment name, variable name, and unit for this variable ("%") are automatically displayed. It is now possible to enter the date at which the variable has to be changed and the value to change it to. Click Add to add the variable change to the schedule. It will appear in the list at the bottom of the screen, as shown.
To schedule an enabling/disabling of equipment - Click on a piece of equipment in the list at the top of the screen. The screen will present a checkbox to allow disabling or enabling the equipment. Enter the date at which the enabling/disabling has to take place. Click Add to add the event to the schedule. It will appear in the list at the bottom of the screen, as shown.
Other functions:
Delete - Removes any highlighted events in the list at the bottom of the screen.
Edit entry - Allows any existing schedule entries in the list at the bottom of the screen to be altered. A new "Edit schedule entry" screen will be presented.
Sort by date - Sorts the schedule event list from the earliest event to the latest.
This screen can be invoked by right-clicking on the Hysys icon and selecting "OpenServer /
Object browser".
The list on the left is a hierarchical list of all the items in the Hysys model with all their supported
variables. By expanding an item and selecting a variable, the corresponding OpenServer
strings for the variable value and unit can be viewed. As a convenience, the current value of that
variable and its read / write status are also displayed.
This list is generated directly from the programming interface of Hysys. As items are added to this interface by Aspentech, they will automatically appear when the list is generated; no changes to the RESOLVE software are necessary.
To add additional property correlations, right-click on the Hysys icon and select "Add Hysys
Correlation Property".
Those properties that are highlighted in blue are fixed and cannot be removed. Additional properties can be added, as shown. These changes are stored in the Windows registry, so when RESOLVE is opened subsequently it should not be necessary to add the properties again.
If properties are added to the list, then any variable list that has already been built in the model will have to be rebuilt for the new variable(s) to be seen. This can be done by right-clicking on the icon and selecting "Rebuild variable list".
Note that properties are not checked for validity as they are added. If a property that is not valid
is detected when the variable list is built, the property will simply be ignored. In this case, it is
important to check that the wording / spelling of the property is exactly the same as appears on
the Hysys interface. If there are any doubts, contact Petroleum Experts.
Here are some other points that should be considered when creating the Hysys model.
The driver has been developed for use with steady state (non-dynamic) Hysys.
Dynamic (time-dependent) behaviour is assumed to come from the connected models.
When the model is created, the fluid package should contain all the components that
are expected from the connected model (e.g. GAP). The components can be pure or
pseudo (hypo). They do not have to have the same names as in the GAP model as
RESOLVE will map GAP components to Hysys components as part of the initialisation
process. If the set of components is different between the connected modules then the
compositions will be re-normalised by RESOLVE when the data is passed across to
exclude the components that do not map.
When the compositional properties (critical temperature, pressure, etc) are passed,
only pseudo-component properties can be changed in Hysys. Pure component
properties are left unchanged. If the user would like to pass the critical properties for a
pure component, then this would have to be set up as a hypothetical component in
Hysys. This would, however, mean that Hysys does not have any knowledge of the pure
component that it is supposed to represent (for the purposes of enthalpy calculations,
etc).
When compositional properties are set in Hysys, they are applied to the fluid package
that is accessed by a stream. If the fluid package is shared across several input
streams all streams will be affected by the changes. This may also affect cases where
more than one input stream is connected in RESOLVE. If the streams access the same
fluid package, then the fluid package will be updated twice, potentially with different
fluid properties. It is possible in Hysys for the input streams to access different fluid
packages, but then a stream cutter or some other mapping will be required to map the
different packages together in the Hysys model.
Data will be passed by RESOLVE to the input feed of Hysys as indicated on the
RESOLVE graphical view. This input feed will therefore be the source of user entered
(i.e. fixed) variables to the Hysys model. On some occasions Hysys models are built
which are based on different fixed variables; if such models are linked without changes, consistency errors may occur.
Consider, for example, the following system. A Hysys model consists of a separator
followed by a compressor. The user who built the model requires a fixed pressure at
the output stream, which has been entered directly in Hysys. Hysys is then allowed to
calculate the inlet pressure to the system. If this model is connected to GAP it is the
inlet condition that will be fixed. If the model is connected with no modification a
consistency error on the output stream pressure will be flagged by Hysys.
If, in a system such as this, a constraint on discharge pressure is to be met then it is
necessary to iterate on the inlet pressure to the system.
This can be done through the GAP optimiser.
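The component-mapping re-normalisation described in the bullet points above can be sketched as follows. The exact behaviour is an assumption based on the description, and the component names are illustrative:

```python
# Sketch of the assumed re-normalisation: components of the source fluid
# (e.g. from GAP) that do not map to a Hysys component are dropped and
# the remaining mole fractions are rescaled to sum to 1.
def renormalise(source_comp: dict, target_names: set) -> dict:
    mapped = {n: z for n, z in source_comp.items() if n in target_names}
    total = sum(mapped.values())
    return {n: z / total for n, z in mapped.items()}
```

For example, with a source fluid of 25% C1, 25% C2 and 50% H2S and a target fluid package containing only C1 and C2, the mapped composition becomes 50% C1 and 50% C2.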
To enable these applications to be launched remotely, the respective COM objects must be configured on the target (remote) machines. The COM object to be configured is called "HYSYS simulator" (under the Identity tab, do not select "Interactive User").
Please contact Petroleum Experts if further help is required to setup a remote Hysys run.
2.5.16 Connecting to ProII
2.5.16.1 Use of ProII driver
The ProII driver allows instances of ProII process modeling software to be opened and
controlled from RESOLVE.
Once the driver is registered, it will be possible to create instances of ProII in RESOLVE, as per
the workflow described below.
Create an instance of ProII from the "Create Instance" menu on the RESOLVE main
screen. Click on the RESOLVE main display screen to create a ProII icon.
Load a pre-modelled case of ProII by double-clicking on the created icon and then browsing to the case in question. Note that the ProII case should be successfully solved before loading it into RESOLVE.
Click on the OK button. This will start ProII in the background and load the required case.
The driver will now determine the "sinks" of the ProII model for display on the
RESOLVE screen.
These correspond to the input streams of the model.
Note that, unlike UniSim and Hysys, no "sources" (i.e. output streams) from the ProII model are displayed; this is explained in more detail below.
The input streams of the ProII model can now be connected in RESOLVE to sources from other applications.
Note that the streams in ProII are "uni-directional". This means that a single data point is passed by the program that supplies the data to ProII, and ProII returns no data back. The streams can only be connected to other uni-directional sources/sinks.
When the case is run in RESOLVE, data will be passed to the connected input stream(s) in ProII.
This data will consist of a single point of pressure, temperature, mass flow rates, and black oil
and compositional data. This data will be set at the input stream.
The compositional data is set in ProII by poking the data into the fluid package. Depending on the settings of the ProII module, RESOLVE will modify the properties of pseudo-components or of all components in ProII.
Once the data has been passed, ProII is allowed to solve the system. Data for the monitored
variables are extracted. If it is required to pass the output data from ProII to other applications, it
can be done via Visual Workflow.
To reach the configuration screen, first invoke Drivers | Register Drivers from the main
RESOLVE menu. It is then possible to highlight the entry for ProII in the driver list and click on
the Configure button.
Case File - Path to the ProII file, which should be pre-created before integration.
Solution solved with error(s) - Radio button controlling the action taken by RESOLVE if the ProII module solves the case with errors.
NOTE: This option corresponds to cases where a solution is obtained in ProII but some errors are reported. It is also possible that ProII will not solve the case (i.e. no solution is found for some equipment elements). In that case the run will always be terminated.
Component properties - The option controls which component properties (Tc, Pc, AF, etc.) will be modified when passing compositional data to ProII. By default RESOLVE will only modify Non-Pure components (Petroleum Components and User-Defined Components), while leaving Pure Components (selected from the internal library) unchanged.
RESOLVE is able to build a list of input and output variables for every operation, stream and equipment unit in the ProII model.
The RESOLVE "OpenServer" uses the list to allow access to ProII variables from a
third-party application (e.g. an Excel spreadsheet)
Select the ProII tab (the name of the tab is identical to the label of the ProII module) and click on Edit Variables.
The list on the left hand side consists of the streams and equipment units present in the ProII
model.
When an item is expanded, it shows the list of variables supported by that item. In order to publish one of these variables, select the variable in the left-hand list and click on the Add-> button. The variable will automatically be passed into the list of published variables on the right-hand side of the screen, as displayed below.
NOTE:
ProII internally stores variables under different internal classes, so variables related to the same
equipment unit or stream may be spread across several different classes. A user working with the
ProII GUI may not be aware of the classes and variable names under which ProII stores parameters
and their values, which would make it very challenging to import ProII variables "as is" into RESOLVE.
To avoid this, an OpenServer interface for ProII was created in RESOLVE. OpenServer provides
ProII variables with self-explanatory names (similar to the ones in the GUI) and structures them
by splitting them into two categories - Input (writable variables defined before solving the ProII case) and
Output (calculation results) - independent of their internal storage structure.
However, the OpenServer interface only covers variables for streams and the most widely used items
(Flash, Mixer, Splitter, Compressor, Expander, Valve, Pipe, Column, Simple Heat Exchanger,
Rigorous Heat Exchanger and Controller). The variables selected for OpenServer are the
parameters engineers most commonly focus on.
Variables that are not covered by the OpenServer interface can be imported into RESOLVE using
their internal ProII names via Visual Workflow or an internal RESOLVE script. For example, the
calculated temperature of the flash called "F2" can be imported using the following string:
"ProII.F2(Flash):TempCalc"
The names of the parameters and their classes can be determined using the "P2View" utility provided
with ProII.
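As an illustration of this naming scheme, the following sketch parses an internal variable string of the "Module.Item(Class):Variable" form shown above. The layout is inferred from the example string; this parser is not part of RESOLVE or ProII:

```python
import re

# Hypothetical parser for internal ProII variable strings of the form
# "Module.Item(Class):Variable", e.g. "ProII.F2(Flash):TempCalc".
# The layout is inferred from the example in the text above.
PATTERN = re.compile(r"^(?P<module>\w+)\.(?P<item>\w+)\((?P<cls>\w+)\):(?P<var>\w+)$")

def parse_internal_name(s: str) -> dict:
    """Split an internal variable string into its module, item, class and variable parts."""
    m = PATTERN.match(s)
    if m is None:
        raise ValueError(f"not a recognised internal variable string: {s!r}")
    return m.groupdict()

print(parse_internal_name("ProII.F2(Flash):TempCalc"))
# {'module': 'ProII', 'item': 'F2', 'cls': 'Flash', 'var': 'TempCalc'}
```

A string that does not follow the expected layout raises an error rather than guessing at the intended item.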
The following assorted functions are available in the ProII driver. These are accessible by
right-clicking on the ProII icon in the RESOLVE graphical view, or using the "Program
Functions" item of the main menu.
Most of these functions are described in the "Other GAP functions" section.
Optimiser Setup
Allows the RESOLVE optimisation problem to be set up.
View in ProII
Unlike the UniSim and Hysys drivers, the ProII driver does not open the ProII interface with the model in
question and runs all calculations in the background. The "View in ProII" function allows the
user to open the case file in the ProII interface, e.g. to review results.
2.5.16.5.1 Optimisation
The link to ProII supports the optimisation functionality implemented in RESOLVE. These
screens illustrate how to set up an objective function, control variables, and constraint equations
in the ProII model.
An example of this usage would be if the user had a GAP (surface network) model connected to
a ProII model. It is possible to have control variables and constraints in the GAP model (i.e.
these could be well choke settings or lift gas injection quantities) and constraints and an
objective function in the ProII model (i.e. compressor duty and maximisation of molar flow at a
stream). The RESOLVE optimiser allows these to be distributed between different models.
The screens to set up control variables, constraints, and an objective function can all be
accessed by clicking the right mouse button over the ProII icon in RESOLVE and selecting the
"Optimiser Setup" section.
Objective Function: To set an objective function, check the box at the top of the screen. A list
of variables will be displayed for each stream and equipment item for
each flowsheet in the model (as for the control variable setup). The
variable to optimise on can be selected in this section. When a variable
is selected, the display panel below the list will display the name of the
variable and the ProII unit of that variable.
Constraints: Constraints on plant variables can be set with the second tab.
Again the variable list can be browsed for the variable to select as a
constraint. The variable label and unit will be displayed in the panel below
the variable list.
Controls: This tab allows control variables to be set up in the ProII model.
The list on the left hand side is a hierarchical list of all the spreadsheets,
streams, and equipment in the model. If an item is expanded (e.g. C1 in
the above screenshot) a list of all the variables that can be changed for
that particular item will be displayed.
The UniSim Design driver allows instances of UniSim Design to be opened and controlled from
RESOLVE.
Once the driver is registered, it will be possible to create instances of UniSim Design in
RESOLVE, as per the workflow described below.
Create an instance of UniSim Design from the "Create Instance" menu on the
RESOLVE main screen. Click on the RESOLVE main display screen to create a
UniSim Design icon.
The driver will now determine the "sources" and "sinks" of the UniSim Design model
for display on the RESOLVE screen.
These correspond to the input streams and output streams of the model.
The UniSim Design streams can now be connected in RESOLVE to other sources and
sinks from other applications.
Note that the streams in UniSim Design are "uni-directional". This means that the program
supplying the data passes a single point to UniSim Design, and UniSim Design returns no data
back. These streams can only be connected to other uni-directional sources/sinks.
RESOLVE only extracts the basic black-oil results from each connection.
If it is necessary to set up additional variables from UniSim Design to monitor in the
RESOLVE results, right click on the UniSim Design icon in RESOLVE, and click on
"Output variables".
As many cases of UniSim Design as the user wishes can be loaded into RESOLVE at
one time.
When the case is run in RESOLVE, data will be passed to the connected input stream(s) in
UniSim Design. This data will consist of a single point of pressure, temperature, mass flow
rates, and black oil and compositional data. This data will be set at the input stream.
The compositional data is set in UniSim Design by poking the data into the fluid package. This
means that all streams that access the same fluid package will be affected. If more than one
input stream is connected through RESOLVE, and these share a fluid package, then the
compositional data of that fluid package will effectively be written twice, once for each stream.
This is only a problem if the input streams arise from different sources with different properties.
It is possible to set up different fluid packages for different streams in UniSim Design, but then a
stream cutter (or some mechanism for mapping components between fluids) will be necessary
in the UniSim Design model.
Once the data has been passed, UniSim Design is allowed to solve the system. Data for the
monitored variables are extracted. Also, the data for any output streams that are connected in
RESOLVE are generated. Black oil data (that may be required for connection to black oil
applications) is obtained from a series of flash calculations.
The UniSim Design driver does not require configuration. The path to the executable will be
automatically detected from the Windows registry.
File name: This is the UniSim Design case name (extension *.usc).
Machine: UniSim Design cases run from RESOLVE can be distributed over a network.
Enter in this field the name of the machine on the network on which the user
would like the UniSim Design case to run. The machine name can be given
as an IP address or a name in the DNS register (e.g. "dave-8200").
Leave the field blank to run UniSim Design on the local machine.
When entering file (case) names for remote machines, the file name
entered should be relative to the local machine.
Start date: By default data will be passed to or from UniSim Design as soon as the
RESOLVE forecast starts. This option can be used to schedule a plant
model coming on line at a point in time midway through a RESOLVE
prediction. If the field is left blank then the plant model will start at the same
time as the forecast.
Advanced Options: If this section is selected, the following will be displayed, allowing access to
these settings:
Force solver timeout: In some cases UniSim Design may fail to solve and can
enter a loop from which it never returns. In these cases
RESOLVE never regains control of the application, and
the only way to finish the run is to kill the processes
from the Windows task manager.
If this is a possibility then it might be reasonable to enter
a "timeout" time in this field to force UniSim Design to
return after a certain time interval.
Leave interface open during solves: This keeps the UniSim Design interface open during the
solves.
Debug Logging: This creates a debug logging file that helps to troubleshoot
the run if issues arise.
Add Output Streams: This invokes a screen that allows additional output (source) icons to be
added to the RESOLVE interface to represent internal plant streams. This can be
useful for exporting results from internal streams to an Excel spreadsheet for
reporting or other purposes. This button is disabled until a model has been
loaded into RESOLVE.
RESOLVE is able to build a list of input and output variables for every operation, stream, sub-
flowsheet, and column in a UniSim model.
Select the UniSim Design section to access the UniSim Design variables and click on Edit
Variables.
The list down the left hand side is common to several screens in the UniSim Design link. It
consists of a list of the streams and operations (and sub-flowsheets and column flowsheets, if
present).
When one of these items is expanded, it displays a list of the variables that are supported by the
item in question.
Directly from the programming interface of UniSim Design. As items are added to this
interface by Honeywell they will automatically appear when this list is generated; no
changes are necessary to the RESOLVE software.
Some variables do not appear directly in the UniSim Design programming interface.
Specifically, these are process stream property correlations, e.g. Wobbe Index, which
appears under the "Gas" correlations. A default set of these properties has been
included in RESOLVE (comprising most of the "Standard" and "Gas" correlations) but
this set can be appended to - see the "Object Browser" section to do so.
In order to publish one of these variables, select the variable to consider in the left hand list and
click on the red arrow. The variable will automatically be passed into the list of published
variables on the right hand side of the screen, as displayed above.
The following assorted functions are available in the UniSim Design driver.
These are accessible by right-clicking on the UniSim Design icon in the RESOLVE graphical
view, or using the "Program Functions" item of the main menu.
Most of these functions are described in the "Other GAP functions" section.
Optimiser Setup
Allows the RESOLVE optimisation problem to be set up.
Output Variables
Allows variables from the UniSim Design model to be added to the RESOLVE
"Reporting" section.
Show Case
This simply makes the case visible to the user.
It is important that UniSim Design is not exited while RESOLVE is controlling it, as the
connection cannot be remade once it is lost without re-opening the RESOLVE file. Note
that, to save UniSim Design licenses, RESOLVE will only open a single UniSim Design
application for all the UniSim Design models in the RESOLVE system. This means that it
is not possible to view all the UniSim Design models at once; they can only be switched
between using this function.
2.5.17.5.1 Optimisation
The link to UniSim Design supports the optimisation functionality implemented in RESOLVE.
These screens illustrate how to set up an objective function, control variables, and constraint
equations in the UniSim Design model.
An example of this usage would be if the user had a GAP (surface network) model connected to
a UniSim Design model. It is possible to have control variables and constraints in the GAP
model (i.e. these could be well choke settings or lift gas injection quantities) and constraints and
an objective function in the UniSim Design model (i.e. compressor duty and maximisation of
molar flow at a stream). The RESOLVE optimiser allows these to be distributed between
different models.
The screens to set up control variables, constraints, and an objective function can all be
accessed by clicking the right mouse button over the UniSim Design icon in RESOLVE and
selecting the "Optimiser Setup" section.
Objective Function: To set an objective function, check the box at the top of the screen. A list of
variables will be displayed for each stream and equipment item for each
flowsheet in the model (as for the control variable setup). The variable to
optimise on can be selected in this section. When a variable is selected, the
display panel below the list will display the name of the variable and the
UniSim Design unit of that variable.
Constraints: In a similar way, constraints on plant variables can be set with the second
tab. Again the variable list can be browsed for the variable to select as a
constraint. The variable label and unit will be displayed in the panel below
the variable list.
Controls: The list on the left hand side is a hierarchical list of all the spreadsheets,
streams, and equipment in the model. If an item is expanded (e.g. Feed1 in
the above screenshot) a list of all the variables that can be changed for that
particular item will be displayed.
In the above example, a single optimiser control variable has been selected.
This is the pressure of a stream ("To reboiler") which flows into a
fractionating column in the plant model. The units (kPa) have been picked
up automatically from UniSim Design. The user has imposed a range on the variable.
To delete a variable from the list, click the delete button on the right hand
side of the grid next to the variable in question.
This screen is invoked by right-clicking on the UniSim Design icon and selecting "Output
variables".
Variables can be selected from the list on the left hand side. Once selected, they can be added
to the list of additional (reported) variables by clicking on the "Add" button.
Items can be deleted from the reported list by highlighting them and clicking the "Remove"
button.
If a parent item is deleted, then all children of that item will also be deleted.
The reported variable list can be cleared completely by clicking the "Clear" button.
2.5.17.5.3 Scheduling
This functionality is redundant now that "Event driven scheduling" has been implemented in
the RESOLVE application itself. It is retained for backwards compatibility only.
The hierarchical list at the top of the screen contains the variables that can be changed for each item of equipment.
To schedule a variable change: Browse to the variable that has to change in
the schedule.
In the above example, the variable "PercentVapourValveOpening" of the
equipment "V-101" has been selected.
The equipment name, variable name, and unit for this variable ("%") are
automatically displayed.
It is now possible to enter the date at which the variable has to be changed
and the value that the user would like it changed to. Click Add to add the
variable change to the schedule. It will appear in the list at the bottom of the
screen, as shown.
To schedule an enabling/disabling of equipment: Click on a piece of equipment in the list at the top of the screen.
The screen will present a checkbox to allow disabling or enabling the
equipment.
Enter the date at which the enabling/disabling has to take place.
Click Add to add the event to the schedule. It will appear in the list at the
bottom of the screen, as shown.
Other functions:
Delete: This removes any highlighted events in the list at the
bottom of the screen.
Edit entry: Allows any existing schedule entries in the list at the
bottom of the screen to be altered.
A new "Edit schedule entry" screen will be presented.
Sort by date: This simply sorts the schedule event list from the earliest
event to the latest.
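The schedule behaviour described above (adding dated entries, sorting them, and applying those that fall due) can be sketched as follows. The class and method names are illustrative only and are not RESOLVE's actual API:

```python
from datetime import date

# Minimal sketch of a schedule event list; names are illustrative,
# not part of RESOLVE's actual interface.
class Schedule:
    def __init__(self):
        self.events = []  # tuples of (date, equipment, variable, value)

    def add(self, when: date, equipment: str, variable: str, value):
        self.events.append((when, equipment, variable, value))

    def sort_by_date(self):
        # Mirrors the "Sort by date" function: earliest event first.
        self.events.sort(key=lambda e: e[0])

    def due(self, step_date: date):
        """Events that fall on or before this timestep's date."""
        return [e for e in self.events if e[0] <= step_date]

sched = Schedule()
sched.add(date(2022, 6, 1), "V-101", "PercentVapourValveOpening", 50.0)
sched.add(date(2021, 9, 1), "V-101", "PercentVapourValveOpening", 25.0)
sched.sort_by_date()
print([e[3] for e in sched.due(date(2021, 12, 31))])  # [25.0]
```

Only the earlier change is due by the end of 2021; the 2022 entry would be applied at a later timestep.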
This screen can be invoked by right-clicking on the UniSim Design icon and selecting "OpenServer /
Object browser".
The list on the left is a hierarchical list of all the items in the UniSim Design model with all their
supported variables. By expanding an item and selecting a variable, the corresponding
OpenServer strings for the variable's value and unit can be viewed. As a convenience, the
current value of that variable and its read / write status are also displayed.
Directly from the programming interface of UniSim Design. As items are added to this
interface by Honeywell they will automatically appear when this list is generated; no
changes are necessary to the RESOLVE software.
Some variables do not appear directly in the UniSim Design programming interface.
Specifically, these are process stream property correlations, e.g. Wobbe Index, which
appears under the "Gas" correlations. A default set of these properties has been
included in RESOLVE (comprising most of the "Standard" and "Gas" correlations) but
this set can be appended to.
To add additional property correlations, right-click on the UniSim Design icon and select "Add
UniSim Design Correlation Property".
Those properties that are highlighted in blue are fixed and cannot be removed. Additional
properties can be added, as shown. These changes are stored in the Windows registry, so
when RESOLVE is subsequently opened it should not be necessary to add the properties again.
If properties are added to the list, then any variable list that has already been built in the model
will have to be rebuilt if the new variable(s) are to be seen. This can be done by right-clicking
on the icon and going to "Rebuild variable list".
Note that properties are not checked for validity as they are added. If a property that is not valid
is detected when the variable list is built, the property will simply be ignored. In this case, it is
important to check that the wording / spelling of the property is exactly the same as appears on
the UniSim Design interface. If there are any doubts, contact Petroleum Experts.
Here are some other points that should be considered when creating the UniSim Design model.
The driver has been developed for use with steady state (non-dynamic) UniSim
Design. Dynamic (time-dependent) behaviour is assumed to come from the connected
models.
When the model is created, the fluid package should contain all the components that
are expected from the connected model (e.g. GAP). The components can be pure or
pseudo (hypo). They do not have to have the same names as in the GAP model as
RESOLVE will map GAP components to UniSim Design components as part of the
initialisation process. If the set of components is different between the connected
modules then the compositions will be re-normalised by RESOLVE when the data is
passed across to exclude the components that do not map.
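The renormalisation step can be illustrated with a short sketch. The component names and the function itself are hypothetical; only the drop-and-rescale behaviour described above is shown:

```python
def renormalise(composition: dict, target_components: set) -> dict:
    """Drop components not present in the target fluid package and
    rescale the remaining mole fractions so they sum to 1."""
    kept = {name: z for name, z in composition.items() if name in target_components}
    total = sum(kept.values())
    if total <= 0.0:
        raise ValueError("no components map between the two models")
    return {name: z / total for name, z in kept.items()}

gap_feed = {"C1": 0.7, "C2": 0.2, "H2S": 0.1}   # illustrative GAP composition
unisim_pkg = {"C1", "C2", "C3"}                  # components in the target fluid package
print(renormalise(gap_feed, unisim_pkg))         # C1 and C2 rescaled so they sum to 1
```

Here H2S does not map, so it is excluded and the remaining fractions are scaled up by 1/0.9.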
When the compositional properties (critical temperature, pressure, etc) are passed,
only pseudo-component properties can be changed in UniSim Design. Pure
component properties are left unchanged. If the user would like to pass the critical
properties for a pure component, then this would have to be set up as a hypothetical
component in UniSim Design. This would, however, mean that UniSim Design does
not have any knowledge of the pure component that it is supposed to represent (for the
purposes of enthalpy calculations, etc).
When compositional properties are set in UniSim Design, they are applied to the fluid
package that is accessed by a stream. If the fluid package is shared across several
input streams all streams will be affected by the changes. This may also affect cases
where more than one input stream is connected in RESOLVE. If the streams access the
same fluid package, then the fluid package will be updated twice, potentially with
different fluid properties. It is possible in UniSim Design for the input streams to
access different fluid packages, but then a stream cutter or some other mapping will be
required to map the different packages together in the UniSim Design model.
Data will be passed by RESOLVE to the input feed of UniSim Design as indicated on
the RESOLVE graphical view. This input feed will therefore be the source of user-entered
(i.e. fixed) variables to the UniSim Design model. On some occasions UniSim
Design models are built which are based on different fixed variables: if these models
are linked with no changes, consistency errors may occur.
Consider, for example, the following system. A UniSim Design model consists of a
separator followed by a compressor. The user who built the model requires a fixed
pressure at the output stream which he has entered directly in this screen. UniSim
Design is then allowed to calculate the inlet pressure to the system. If this model is
connected to GAP it is the inlet condition that will be fixed. If the model is connected
with no modification a consistency error on the output stream pressure will be flagged
by UniSim Design.
If, in a system such as this, a constraint on discharge pressure is to be met then it is
To enable these applications to be launched remotely, the respective COM objects must be
configured on the target (remote) machines. The COM object to be configured is called "UniSim
Design Simulator" (under the Identity tab, do not select "Interactive User").
UniSim can also be run on PxCluster. This is useful if sensitivities are being performed on the
UniSim model, or if UniSim is part of an integrated model on which sensitivities are being run (via
Scenarios, Case Manager, Sensitivity Tool, Crystal Ball or @Risk, for instance).
Please contact Petroleum Experts if further help is required to set up a remote UniSim run.
2.5.17.8 Troubleshooting typical UniSim errors
‘Failed connection to UniSim Server: …’.
This error indicates that RESOLVE has been unable to communicate with the UniSim COM
object. No configuration is needed for this to work; it should work automatically
(based on RESOLVE reading information about UniSim from the registry).
The Excel driver is a generic link to the functionality provided by Excel. Data can be passed to,
or from, an Excel spreadsheet.
The user just has to tell RESOLVE which worksheets and cells provide the source or destination
for the data.
Reporting: In the simplest case, Excel can be used just to provide a repository for the
data that comes from a forecast. The spreadsheet can be tailored to format
the data that comes into Excel, plotting the results or performing calculations.
Data sources: Excel can be used to provide a source of data into a RESOLVE system. For
example, lift gas to a GAP model can be calculated by a spreadsheet and
entered as a source into GAP.
Arithmetic operations (splitting, mixing, tees, etc): Excel can be used, for example, to copy a stream. A single input stream can
be copied to several output streams which can then be fed into different
modules in parallel.
Data manipulation: An Excel spreadsheet can be set up to perform calculations on input data
and provide resulting output data. An example of this would be if the user had
a plant model in Excel.
There is no limit to the number of input or output streams that the link can support. In addition to
writing input data to the worksheet and extracting output data from the worksheet, Visual Basic
macros can be run to perform calculations at each timestep. There is also no limit to the number
of Excel instances that can be added to a RESOLVE system.
2.5.18.2 Loading and Editing a Case
2.5.18.2.1 Loading and Editing an Excel case : Overview
The Excel driver edit screen is divided into four different sections.
Excel Details: This includes the number of input and output streams to set up and the Excel
file name.
Input Data: Specifies the destination cells for incoming data (optional).
Output Data: Specifies the source cells for outgoing data (optional).
Macros: Allows the user to enter a VBA macro to run at each timestep (optional).
The Excel details screen allows the general properties of the Excel module to be entered.
Excel spreadsheet file: This is the name of an Excel file (.xls) that will be loaded when the OK button
is pressed.
This is optional: if no file is entered then a blank spreadsheet will be used.
Number of inputs / Number of outputs: These numbers give the number of inputs and outputs to the Excel module.
When OK is pressed, icons will be generated for each input and output
stream.
There is no limit to the numbers of inputs and outputs.
EOS data:
Pass compositional data: This has to be checked to enable the passing of
compositional data to Excel.
Setup Compositions: This allows the component names which will be
passed to or from this module to be set up.
Hide Interface when RESOLVE is calculating: This option hides the Excel interface when a RESOLVE calculation is
performed. It prevents accidental interference from the user, who may click
inside Excel while the calculation is being performed.
This screen allows the destination Excel cells for each input stream to be specified.
Input fields
The rest of the screen is split between Solver and Forecast sections.
Solver: The cells specified in the "solver" part will be overwritten at each RESOLVE
timestep.
Forecast: The cells in the "forecast" part represent the cells written to at the first
timestep. The data required here is equivalent to the data required in the "solver"
section, the difference being that the data is arranged either vertically below
these cells or horizontally to the right of these cells: this section therefore
makes it possible to keep track of the data through time in the Excel spreadsheet,
whereas the data passed using the solver data section is overwritten at
each timestep.
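The difference between the two sections can be sketched with a stand-in spreadsheet (a plain dict mapping cell references to values). The cell names and helper functions are illustrative, and for brevity only single-letter columns are handled:

```python
# Sketch of the "solver" vs "forecast" cell behaviour described above,
# using a dict as a stand-in spreadsheet (cell reference -> value).
sheet = {}

def write_solver(cell: str, value: float):
    # Solver cells are overwritten at every timestep.
    sheet[cell] = value

def write_forecast(anchor: str, step: int, value: float):
    # Forecast cells are arranged vertically below the anchor cell,
    # one row per timestep, so the history is preserved.
    col, row = anchor[0], int(anchor[1:])
    sheet[f"{col}{row + step}"] = value

for step, rate in enumerate([1000.0, 950.0, 910.0]):
    write_solver("B2", rate)          # always holds only the latest value
    write_forecast("D2", step, rate)  # one new row per timestep

print(sheet["B2"], sheet["D2"], sheet["D3"], sheet["D4"])
# 910.0 1000.0 950.0 910.0
```

After three timesteps, the solver cell B2 has been overwritten twice, while column D keeps the whole history.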
This screen allows the source Excel cells for each output stream to be specified.
Input fields
Output: The data for each output stream should be entered as required. The names
of the output streams are listed in the drop down list here.
Copy from input: It is possible to simply copy the input data from one of the input streams to
the output stream.
To do this, select the target input stream from the drop down list and then
click the "Copy" button.
The other options are equivalent to those described on the "Input data" tab.
Composition: This screen is invoked from either the "Input data" tab screen or the "Output
data" tab screen. Obviously the data entered here will be either input data or
output data, depending on where the screen was invoked from.
Compositional data will only be passed if the check box has been checked in
the "Main data entry screen".
Composition: From this drop down list box, select a
composition that has previously been set up in the
composition setup screen.
Composition data cell: The compositional data will be extracted from or written to
a rectangular block of cells in Excel. Enter here the top left
hand corner cell of this block.
Binary interaction coefficients: In a similar manner, enter here the top left hand corner of
the block of cells that will contain the binary interaction
coefficients.
EOS model: The Excel cell reference to report the EOS calculator, i.e.
PR or SRK. NOTE: The EOS calculator cell reference
must be specified if the composition is to be passed on
from an Excel input to an Excel output, for instance.
Mass rate cell: The mass rate of the stream will be written to or extracted
from this cell. The units of the quantity can be changed
from the drop down list next to this field.
Units: For those properties that have units it is possible to
change the unit in this table. All the units can be changed
to a different system by changing the drop down list at the
top of the table (in the example above the unit system is
set to "Oilfield").
This screen allows an Excel macro (VBA) to be run at each timestep in RESOLVE.
This makes it possible, for example, to generate data that can be input into another module (e.g. GAP
gaslift data).
The table consists of a list of potential points in the calculation timeline from which a macro can
be called. Macro names can be entered in the table alongside the required entry point.
In addition, there are two possibilities regarding how the Excel macro is called from RESOLVE.
If "Pass current timestep date as argument to macro" is NOT checked, then the macro will be
called with just the timestep count as an argument.
In this case, the subroutine in VBA must be declared as follows:
Sub AMacro (timestep As Integer)
End Sub
Note that the timestep count starts from zero for the first timestep.
If "Pass current timestep date as argument to macro" IS checked, then the date will also be
passed. It will be passed as a string, so in this case the subroutine must be declared as follows:
Sub AMacro (timestep As Integer, dt As String)
End Sub
In all cases, the macro name supplied must be fully qualified with the location of the
macro, e.g. "sheet1.amacro" or "ThisWorkbook.amacro".
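The two calling conventions can be sketched as follows. A real run would invoke the VBA macro through Excel's COM interface; here a Python callable stands in for the macro so only the argument-building logic is shown:

```python
# Sketch of the two macro-call conventions described above; the helper
# and its parameters are illustrative, not RESOLVE's actual mechanism.
def call_macro(macro, timestep: int, step_date: str, pass_date: bool):
    if pass_date:
        # Corresponds to: Sub AMacro (timestep As Integer, dt As String)
        return macro(timestep, step_date)
    # Corresponds to: Sub AMacro (timestep As Integer)
    return macro(timestep)

received = []
def a_macro(*args):
    # Stand-in for the VBA macro; records what it was called with.
    received.append(args)

call_macro(a_macro, 0, "01/01/2021", pass_date=False)  # timestep count starts at zero
call_macro(a_macro, 1, "01/02/2021", pass_date=True)
print(received)  # [(0,), (1, '01/02/2021')]
```

The macro must therefore be declared with the signature matching the checkbox state, or the call will fail.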
These variables can then be used for "Event driven scheduling", or for building up different
"Scenarios", for example.
By clicking on "Edit Variables", the following screen appears, allowing the user to select which Excel
variables are to be published in RESOLVE:
Name: In this section, the user specifies the name of the variable to publish.
Unit: In this section, the user specifies the unit of the variable to publish.
Worksheet: In this section, the user specifies the Excel worksheet where the variable
to publish is located.
Cell: In this section, the user specifies the Excel cell where the variable to
publish is located.
Cascade DoSet in forecast: If the variable considered is set to a specific value during the forecast, this
allows the user to keep track of that variable value by automatically
feeding a column or row in Excel.
Enter the cell reference for the variable value to be recorded, i.e. C5.
2.5.18.4 Optimisation
The link to Excel supports the optimisation functionality implemented in RESOLVE. An example
of how this could be used would be if Excel were calculating a quantity that could be
seen as an objective function for the system, for example a calculation of revenue. This is
illustrated in the second of the GAP-Hysys-Optimisation step-by-step examples.
The screen to set up the optimisation components can be accessed by clicking the right mouse
button over the Excel icon in RESOLVE. Alternatively, it can be accessed from the
Optimisation | Setup screen in the main menu of RESOLVE.
Optimiser Objective Function: The top part of the first screen allows an objective function to be set up. It is
not obligatory to set an objective function in Excel, although obviously there
should be an objective function somewhere in the coupled system.
To set an objective function, check the box at the top of the screen. The
data fields below this can be entered as shown. In the example above we
are choosing to maximise the value of cell D3 (worksheet Sheet1). The
objective function is "tagged" with a name and a unit for reporting purposes
only.
Constraints: In a similar way, constraints on Excel cell values can be set in the table
below the objective function fields. Up to 100 constraints can be set.
RESOLVE node descriptions: When a model is loaded into RESOLVE, RESOLVE considers the model
as a black box calculation engine which exposes inputs and outputs (also
referred to as sources and sinks) to the outside world. These sources
and sinks are called "nodes". There are various types of nodes, as
described below, and these types determine the connections that are
possible from one node to another.
Source /
Sink node ( - source)
( - sink)
Uni-directional / Bi-directional links: When the RESOLVE system is run, a node
is allowed to generate a table of inflow data or a single point of inflow data.
In the former case, the data acceptor will calculate an operating point on the
table (which may or may not be optimised, see the "Schedule" screen) and that
operating point will be returned to the data provider. This is a bi-directional
link. Nodes that generate tables of data are indicated with a red bar over the
icon.
In the latter case, the data transfer is entirely one way from data provider to
data acceptor.
Compositional / Non-Compositional nodes: Each driver that is registered with
RESOLVE indicates whether it is able to generate EOS data, and also whether it
requires EOS data to perform its calculations. For example, a process simulator
may require EOS data, whereas GAP will be able to provide EOS data while still
using black oil calculations internally. Refer to the "Driver Registration"
section for further information on this subject.
Nodes producing tables of data must be connected to nodes accepting tables of data
(bi-directional).
Nodes producing a single data point must be connected to nodes accepting a single
data point (uni-directional).
Data acceptors that require compositional data for their calculations must be
connected to data providers that can supply compositional data.
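These three compatibility rules can be expressed as a simple check. The sketch below is purely illustrative: RESOLVE performs this validation internally, and the class and function names here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    link_type: str            # "table" (bi-directional) or "point" (uni-directional)
    provides_eos: bool = False
    requires_eos: bool = False

def can_connect(provider: Node, acceptor: Node) -> bool:
    # Rules 1 and 2: tables connect to tables, single points to single points.
    if provider.link_type != acceptor.link_type:
        return False
    # Rule 3: a compositional acceptor needs a compositional provider.
    if acceptor.requires_eos and not provider.provides_eos:
        return False
    return True

gap = Node("GAP", "table", provides_eos=True)       # GAP can provide EOS data
process = Node("Process", "table", requires_eos=True)
print(can_connect(gap, process))                    # True
```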
2.5.19.2 Models and Loops
This section discusses the particularities of running RESOLVE models containing loops.
Consider the following system (the arrows represent the direction of data flow):
When RESOLVE is run it will detect this loop and the system will be effectively reduced to:
Loops can be set up from the Run | Edit Loops menu; the procedure to do so is
detailed in the "Loop Edit" section.
At the end of the loop solve, data will be passed from the loop (models D and C) to the
receiving models E and F.
There is no limit on the number of loops in the RESOLVE system, although nested loops are not
currently supported.
2.5.19.3 Composition Tables
If a RESOLVE system is built which consists of applications which require or generate EOS
compositional data, it is necessary that there is a continuity in the compositional data throughout
the system.
This means that, for example, the composition passed to RESOLVE by one application should
map to the composition required by the connected application: although the component names
can differ, it is very desirable that there should be a one-to-one correspondence between each
component passed between each application.
When a compositional case is run for the first time, RESOLVE queries all the connected nodes
for their base composition.
Node connection list: This is a list of all the nodes in the RESOLVE system
that are connected together, grouped according to the applications that are
connected (e.g. in the above case, all the connections between REVEAL and GAP
are grouped together).
There is a key at the bottom describing what the symbols mean: the red cross
indicates that no mapping has been made, the green blob indicates that some
components are mapped, whereas a tick indicates that the mapping is complete.
If a model is run while the mapping is not complete, those components that are
not mapped to something will have their mole fractions set to zero and the
composition will be re-normalised as required.
RESOLVE Lumping / Delumping: Enables the user to specify if the lumping /
delumping facility of RESOLVE is being used.
The options available are:
Once this mapping has been set up it will be saved with the RESOLVE (.rsl) file. This means that
this screen will not be displayed on subsequent runs. If this mapping needs to be edited before
a run, go to Run | Edit Composition Tables from the main menu.
When the run is carried out, RESOLVE will cross-check the current base compositions of the
client modules against the compositions saved with the file, and will re-display this screen if
necessary.
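The zeroing and re-normalisation behaviour described above for incomplete mappings can be sketched as follows (an illustrative Python fragment, not RESOLVE's internal code):

```python
def renormalise(composition, mapped):
    """Drop unmapped components (their mole fractions are set to zero) and
    renormalise the remainder so the fractions sum to 1. Names illustrative."""
    kept = {c: z for c, z in composition.items() if c in mapped}
    total = sum(kept.values())
    return {c: z / total for c, z in kept.items()}

comp = {"C1": 0.5, "C2": 0.3, "C3": 0.2}
print(renormalise(comp, mapped={"C1", "C2"}))   # {'C1': 0.625, 'C2': 0.375}
```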
2.5.19.4 Direct Connections between instances
It is possible in RESOLVE to establish a direct connection between two instances, in order to
automatically pass the value of a specific variable from one application to another: for instance,
this can be useful to pass any variable from GAP to Excel for further calculations in a
spreadsheet.
The example below will illustrate how this option can be used: the oil rate obtained for well34 is
to be passed directly to the cell B3 of the worksheet "Sheet2" in Excel.
The first step will be to publish the variables to be considered by going to Variables | Import
Application Variables.
The following screen will be displayed, where it is possible to select the variables to publish
from GAP and Excel, as illustrated in the snapshots below:
The "Edit Variables" button enables specific variables to be published, as illustrated below for
the Excel variable. Further information regarding how to publish variables can be found in the
"Publish Application Variables" section.
Once the variables are defined, the two applications can be linked directly using the Edit
System | Link option.
It is possible to specify which variable is passed from one instance to another by double-
clicking on the arrow icon.
The following screen will be displayed. The setup illustrates how to specify that the well 34 oil
rate is passed from the GAP variable to the Excel one.
The variable can be modified using a multiplier and a shift.
By default, the multiplier is set to 1 and the shift to 0, which leaves the variable
unmodified.
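The effect of the multiplier and shift is a simple linear transform. As a sketch (the function name is illustrative):

```python
def transfer(value, multiplier=1.0, shift=0.0):
    """Value passed across a direct link: value * multiplier + shift.
    The defaults (1 and 0) leave the variable unmodified."""
    return value * multiplier + shift

print(transfer(1000.0))                          # 1000.0 (defaults)
print(transfer(1000.0, multiplier=0.158987))     # e.g. a STB/day -> m3/day conversion
```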
Data objects can be used in isolation or connected together to form larger models.
The data object tutorial examples (general overview) cover many of the features of data objects.
Reading this will provide a good understanding of the role of data objects within RESOLVE.
The following sections detail the properties of each available data object in turn. For each data
object, we detail:
1. The exposed properties of the data object (e.g. critical temperature for the EOS-PVT object).
These can generally be changed either from the data entry screen or programmatically from
a visual workflow.
2. The exposed functions (methods) of the data object. Again, these can be invoked from a
visual workflow.
There are several different ways that data objects can be connected together, and can be
connected to other objects in the RESOLVE framework.
1. Most data objects can take one or more input data objects; these may be optional or
obligatory depending on the context. When a connection is made from a data object, any data
objects that can take that object as an input will be highlighted:
The data objects that each object accepts as inputs are given in the following sections.
In addition, a data object may output a different data object: in the example above, the blending
object outputs an EOS-PVT object which contains the result of the blend.
2. Some data objects can be connected to application items in Resolve (e.g. a GAP separator,
a process output feed). The behaviour of these connections depends on the context, and is
discussed in the sections below where applicable.
3. Some data objects can pass text representations of themselves to other applications, most
notably Excel:
Again, the behaviour depends on the object. In the above case, the columns of the DataSet will
be 'pasted' into Excel, and further work can be performed on the data or macros can be run
programmatically.
The use of raw unit values is the default. In this case, field units will be used wherever a value is
passed in and out of the object (e.g. through workflows). Generally, the screens where these are
used will display the unit that is expected.
The second option will use the current unit system set in RESOLVE. In this case, field units will
not be used and the user specified units will be used. Care should be taken if the unit system is
changed after setting up a calculation.
EOS calculations
Mathematics library
Tight reservoir
Openserver - access public functions in the IPM programs to automate data input and model
calculations
RDO System - palette to add several data objects and perform calculations using these data
objects in a separate interface
PVTP calculations
Wax - Perform wax appearance temperature and wax amount tests calculations
PROSPER calculations
Wellsource-File - allows the file path to the PROSPER model of interest to be specified
PROSPER Calculator - performs a gradient calculation using a specified PROSPER file and
allows a variety of results to be retrieved
Wellsource-Online - allows a 'PROSPER Online' pipeline to be defined
Corrosion Calculator - performs a CO2 corrosion calculation according to the NORSOK model
Erosion Calculator - performs an erosion calculation according to the DNV model
Slug Catcher Calculator - performs a slug catcher calculation for design and assessment
MWA - Multi well allocation tool to allocate field measured total rates to individual well phase
rates (oil, water, gas) using physical models and mathematical regression
Field data - Stores field measured data to be used as inputs for the MWA tool.
Ledaflow
GAP calculations
SAGD - Determine pre-heating times and generate a full REVEAL numerical simulation model;
this includes a detailed description of the wells and a reservoir grid to match the well flow path
and capture the near wellbore effects for SAGD systems
Well builder - Build complex REVEAL well descriptions for use in the SAGD object and export
to REVEAL
ICD Analysis
ICD Analysis - designed to build REVEAL simulation models for ICD and ICV optimisation
studies.
Case Manager
Case Manager - provides an interface for managing and running multiple instances of a model
Sensitivity Tool
Sensitivity Tool - provides a simple interface to run model sensitivities for a set of parameters
Probabilistic
Crystal Ball - integrates with the Crystal Ball add-on of Excel to perform probabilistic studies
@Risk - integrates with the @Risk add-on of Excel to perform probabilistic studies
Optimisation
The @Risk data object integrates the @Risk add-on of Excel and a numerical model, and is
very similar in its interface and workflow to the Crystal Ball data object. The object interface is
built based on CaseManager, which runs the model with a controlling workflow. The workflow
sets input parameters in the model, runs it and extracts the results. The data input/read by the
workflow is transferred from/to @Risk in Excel, which enables the full functionality of @Risk to
be used for further analysis of the results.
To define @Risk variables, select the 'Open @Risk Spreadsheet' button in the top right corner.
This will display a standard @Risk template provided with RESOLVE.
The following three elements should be defined in the @Risk spreadsheet:
Assumptions: two columns on the left should be defined with variable names and
assumptions (via Define Distributions).
Forecast variables: two columns on the right should be defined with names and initial values
of output variables (via Add Output)
Iterations: the number of cases to be generated should be defined under Iterations on the
@Risk tab of the Excel ribbon.
Note that the path to the @Risk Excel plug-in (risk.xla) should be given to RESOLVE, using the
corresponding button.
Once @Risk assumption and forecast variables are defined, it is required to map them to
corresponding OpenServer or non-OpenServer variables in the model. This is done in the
@Risk interface of RESOLVE. The interface is identical for decision and forecast variables and
looks as follows:
NOTE:
OpenServer variables are automatically detected by template workflows and
transferred to the model. For non-OpenServer variables the workflow may need
to be modified to take them into account
Once defined, the name, unit and tag will appear in the 'Description' column of the
variables table.
The button 'Debug model workflow with test values' displays the underlying CaseManager
object and allows running it with test values. This may be useful to debug the model and the
workflow if this has been edited.
NOTE:
To populate the underlying CaseManager with the input and results variables defined in the
@Risk Model tab, it is required to click on 'Debug model'.
NOTE:
With a limited number of licenses it might be useful to limit the maximum number of jobs, as
each job will consume one RESOLVE license plus the license(s) required to run the model. If
a license is not available the job will fail.
When the run is finished @Risk will also display its own distribution plot for each forecast
variable.
The user can also use standard @Risk tools for analysis and reporting. Those are available
from the Excel interface.
Description: The Black Oil PVT data object encapsulates all of the Petroleum Experts black
oil calculations. Oil, gas and black oil condensate fluid types are supported:
All the features for a black oil PVT description such as PVT matching, Tables and PVT
properties generation are available for this data object.
The data object can be called from a visual workflow; a description of how the input properties
can be entered and the functions that can be performed using this data object is provided in the
Visual Workflow User Guide.
Input connections:
Output:
None
None
Properties:
These are described in the Visual Workflow User Guide, as these would be accessed through
the VWK interface.
The CaseManager data object provides an interface for managing multiple instances of a
model. The CaseManager allows creating multiple cases of a model and running them
sequentially or in parallel using PxCluster. The interface is designed with flexibility in mind and
does not limit the user to a particular scope. As such the object can be utilised for various
engineering tasks from history matching to experimental design.
1. Model
The base level of the Case Manager is a model that is selected by the user and will be used
to perform calculations. The CaseManager can run all tools that are available in the list of
applications, including third party tools. Therefore, the term model covers a wide range of
elements from a single equipment model (e.g. compressor modelled in UniSim or
PROSPER well model) to a full field integrated model (e.g. RESOLVE model that may
include reservoir, surface networks, process, economics etc.).
2. Controlling workflow
A controlling workflow is built within Case Manager on top of the model and performs 3
functions, namely (1) sets input parameters for the model, (2) runs the model and (3) retrieves
results. The workflow is responsible for how the model will be run. This could be a forecast, a
single solve, a regression or any other calculation depending on the model. The presence of
a workflow makes the Case Manager very flexible as the workflow can be configured for any
task.
Model and Workflow define the core of the Case Manager, i.e. the physical model and the way
that model is run. Together they allow running a single case of the model. It is then possible to
vary input variables that are used by the workflow, thus creating multiple cases and running them
sequentially or in parallel using PxCluster.
The Case Manager interface consists of 3 tabs: Variables, Workflow and Cases
To add a variable, enter its name, type and default value, then select the button.
It is possible to define data objects by selecting "User" and then choosing from the drop-down
list. This allows any data object to be communicated to the Case Manager and underlying model.
The Model that will be used for calculations is defined on the right hand side of the Variables
tab.
2.6.3.2 Workflows
The Workflow tab consists of a standard workflow editor window as shown below.
It is possible to create several workflows, as Case Manager allows an individual workflow to be
defined for each case.
The Edit button can be used to change the name of the Workflow after it has been
created.
2.6.3.3 Cases
The Cases tab is used to set up individual cases and run them sequentially or on a cluster. Each
case should be supplied with a workflow and a set of variables.
Once all cases are defined it is possible to run them using the "Test runs" section. The following
fields are available:
Execute all cases: Will run all defined cases.
Use cluster: Check box can be used to run cases in parallel using PxCluster
functionality. The cluster should be started beforehand.
Cluster options: The button allows setting up options for the cluster, such as
selecting a host, defining the maximum number of parallel jobs and enabling logging.
NOTE:
With a limited number of licenses it might be useful to limit the
maximum number of jobs, as each job will consume one RESOLVE license
plus the license(s) required to run the model. If a license is not
available the job will fail.
NB: Log files can be created when the cases are run on the cluster. These are useful for
debugging purposes, in the event that a particular case has failed. Please refer to this section
of the manual for further information.
The CrystalBall data object integrates the Crystal Ball add-on of Excel with any model. The
object interface is built based on CaseManager, which runs the model with a controlling
workflow. The workflow sets input parameters in the model, runs it and extracts the results. The
data input/read by the workflow is transferred from/to Crystal Ball for further analysis.
To define Crystal Ball variables, select the 'Open Crystal Ball Spreadsheet' button in the top
right corner. This will display a standard Crystal Ball template provided with RESOLVE.
The following three elements should be defined in the Crystal Ball spreadsheet:
Assumptions: two columns on the left should be defined with variable names and
assumptions (distributions).
Forecast variables: two columns on the right should be defined with names and initial values
of output variables (initial values are only required to 'Define forecast' for the cell in
question).
Trials: the number of trials should be defined on the Crystal Ball tab of the Excel ribbon.
Once Crystal Ball decision and forecast variables are defined, it is required to map them to the
corresponding OpenServer or non-OpenServer variables in the model. This is done in the
Crystal Ball interface of RESOLVE. The interface is identical for decision and forecast variables
and looks as follows:
NOTE:
OpenServer variables are automatically detected by template workflows and
transferred to the model. For non-OpenServer variables the workflow may need
to be modified to take them into account
Once defined, the name, unit and tag will appear in the 'Description' column of the
variables table.
The button 'Debug model workflow with test values' displays the underlying CaseManager
object and allows running it with test values. This may be useful to debug the model and the
workflow if this has been edited.
NOTE:
To populate the underlying CaseManager with the input and results variables defined in the
Crystal Ball Model tab, it is required to click on 'Debug model'.
NOTE:
With a limited number of licenses it might be useful to limit the
maximum number of jobs, as each job will consume one RESOLVE license
plus the license(s) required to run the model. If a license is not
available the job will fail.
When the run is finished Crystal Ball will also display its own distribution plot for each forecast
variable.
The user can also use standard Crystal Ball tools for analysis and reporting. Those are available
from the Excel interface.
It includes methods that perform large-scale copying, filtering, searching, sorting and statistical
or arithmetic calculations.
The data source may not be uniformly sampled, and in this case the data needs to be re-sampled.
The objective of the Non Uniform Resampler data object is to re-sample a non-uniformly
sampled input data set to create a uniformly sampled data set.
One of the advantages of this method is that it retains more information about the input data,
particularly if down-sampling is required.
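As an illustration of the idea, a non-uniform series can be mapped onto a uniform grid with linear interpolation; note that this is only one possible method, and not necessarily the one RESOLVE uses.

```python
import numpy as np

t = np.array([0.0, 1.0, 3.0, 7.0, 8.0])    # irregular sample times
y = np.array([0.0, 2.0, 6.0, 14.0, 16.0])  # sampled values (here y = 2 * t)

t_uniform = np.arange(0.0, 8.1, 2.0)        # uniform grid: 0, 2, 4, 6, 8
y_uniform = np.interp(t_uniform, t, y)      # linear interpolation
print(y_uniform)                            # [ 0.  4.  8. 12. 16.]
```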
Input Connections
None
Output Connections
None
Properties
NonUniformResampler.Settings.[...]
This field contains the re-sampling settings described above.
NonUniformResampler.Results.[...]
The resampled data is held internally within a DataSet, and the properties available in this field
are identical to those of a DataSet.
NonUniformResampler.Results.Column[i].[...]
The re-sampled data is held in this field, and can be accessed in the same way as the values of
a DataSet or DataStore.
Functions
Uniformly resample a column that has non-uniform sample locations, starting at a specified
point for a specified number of output samples
locations
There are three methods of resampling, which are described in the Non Uniform Resampler
section of this manual. Please refer to that section for further details.
Input Connections
None
Output Connections
None
Properties
UniformResampler.Settings.[...]
This field contains the re-sampling settings described above.
UniformResampler.Results.[...]
The resampled data is held internally within a DataSet, and the properties available in this field
are identical to those of a DataSet.
UniformResampler.Results.Column[i].[...]
The re-sampled data is held in this field, and can be accessed in the same way as the values of
a DataSet or DataStore.
Functions
Resample an input column, starting at a specified point for a specified number of output
samples
Resample multiple columns, starting at a specified point for a specified number of output
samples
2.6.5.3 Spectral Analysis
The Spectral Analysis data object applies the Fast Fourier Transform algorithm to a uniformly
sampled signal, and provides insight into the frequency components making up the signal and
their relative power.
Due to the potentially large quantity of input data, spectral analysis is generally performed on a
given period (or periods) of the signal. This period must be chosen so as to be representative
of the signal and to contain the information required, i.e. large enough for all the frequencies of
interest to be included. The settings of the Spectral Analysis object relate to defining this period
of the signal.
Number of samples: The number of samples in the period considered must be
specified. Due to the requirements of the Fast Fourier Transform algorithm,
this must be a power of 2.
Define periods using: The period length can be defined using the period
duration (the sample rate will be calculated), or the sample rate (the sample
duration will be calculated).
Perform single or multiple period analysis: The analysis can be performed over
the first period only, or over multiple successive periods. If the analysis is
over successive periods, the reported power at each frequency can be the
average, the maximum or the minimum power of that frequency over the periods.
See Advanced settings - Multiple periods.
The Spectral Analysis calculations are triggered via a workflow, using the functions listed below.
Once the Fourier transform has been performed, its results are plotted in the Results tab.
Input Connections
None
Output Connections
None
Properties
SpectralAnalysis.Settings
This field contains the above settings and the Advanced settings.
SpectralAnalysis.Results.[...]
The resampled data is held internally within a DataSet, and the properties available in this field
are identical to those of a DataSet.
The data is stored in columns which have the following names. If single period analysis has
been performed:
"Frequency"
"Power"
"Phase"
If multiple period analysis has been performed:
"Frequency"
"Average Power"
"Maximum Power"
"Minimum Power"
SpectralAnalysis.NumberOfAnalysisPeriodsExtracted
SpectralAnalysis.NumberOfAnalysisPeriodsValid
Functions
The objective of detrending is to remove the period mean or the period trend. These may create
large zero or low frequency components in the spectral analysis, making it more difficult to
analyse the higher frequencies of interest. Even though the higher frequency information exists,
it will be less visible on a spectrum if the plot is dominated by the low frequency components.
The period is then multiplied by a window function such as the one shown below. The objective
of the multiplication is to ensure that the resulting signal is zero at both ends. As the FFT
assumes a periodic infinite signal, if the data at the end of the period is not equal to the
beginning, a discontinuity is created in the infinite signal. Through the FFT, this discontinuity
would produce high frequencies which would be an artefact. The spectral window prevents this
from happening.
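The detrend-then-window preparation described above can be sketched with NumPy. A Hann window is used here purely as an illustration; the window function RESOLVE actually applies may differ.

```python
import numpy as np

n = 1024                                  # number of samples: a power of 2
t = np.arange(n)
signal = 5.0 + 0.01 * t + np.sin(2 * np.pi * t / 64)   # mean + trend + tone

# Detrend: subtract the best-fit straight line (removes mean and trend),
# so the low-frequency components do not dominate the spectrum.
detrended = signal - np.polyval(np.polyfit(t, signal, 1), t)

# Window: taper both ends of the period towards zero before the FFT.
windowed = detrended * np.hanning(n)

power = np.abs(np.fft.rfft(windowed)) ** 2
print(np.argmax(power))                   # dominant bin: 1024 / 64 = 16
```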
Settings tab
Number of samples: The number of samples in the period considered must be
specified. Due to the requirements of the transform, this must be a power of 2.
Invalid data threshold %: Tolerance for missing or invalid data in the input data set.
Decomposition levels: Number of wavelet elements to decompose the original sample into.
Statistics Calculator
Copy Data
Simple Arithmetic
Outlier Filter
The outlier filter functions aim at invalidating data points which are far from the average spread
of the data (i.e. spikes).
Invalidate values in multiple columns beyond specified limits from the mean
The columns are treated individually.
Invalidate values in a column beyond specified limits from both immediate neighbours
Invalidates values which are less (or more) than a specified number of Root Mean Squares of
Successive Differences (RMSSD) from both immediate neighbours.
A column may be sourced from either a DataSet or a FlexDataStore.
A column from a FlexDataStore must be of type Double.
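A minimal sketch of this neighbour-based test is shown below, assuming the RMSSD is computed over the whole column and a point is invalidated only when it fails against both neighbours; the function and parameter names are illustrative, not RESOLVE's API.

```python
import numpy as np

def rmssd_filter(values, k=3.0):
    v = np.asarray(values, dtype=float).copy()
    rmssd = np.sqrt(np.mean(np.diff(v) ** 2))   # RMS of successive differences
    for i in range(1, len(v) - 1):
        # Invalidate only when the point is far from BOTH neighbours.
        if (abs(v[i] - v[i - 1]) > k * rmssd
                and abs(v[i] - v[i + 1]) > k * rmssd):
            v[i] = np.nan                        # mark the value as invalid
    return v

data = [10.0, 10.1, 9.9, 50.0, 10.0, 10.2]       # one obvious spike
print(rmssd_filter(data, k=1.5))
```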
Invalidate values in multiple columns beyond specified limits from both immediate
neighbours
The columns are treated individually.
Subtract the local average from each value in the input column
The local average is defined using a specified pre-sample length and a post-sample length. The
use of 'post samples' must be carefully considered, as physical systems do not have access to
'future' data.
A column may be sourced from either a DataSet or a FlexDataStore.
A column from a FlexDataStore must be of type Double.
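The local-average subtraction can be sketched as follows, with the pre- and post-sample lengths as explicit parameters (illustrative names, not RESOLVE's API):

```python
import numpy as np

def subtract_local_average(values, pre=2, post=2):
    """Subtract from each point the mean of a window spanning `pre` samples
    before and `post` samples after it (the window is clipped at the edges)."""
    v = np.asarray(values, dtype=float)
    out = np.empty_like(v)
    for i in range(len(v)):
        window = v[max(0, i - pre): i + post + 1]
        out[i] = v[i] - window.mean()
    return out

print(subtract_local_average([1.0, 1.0, 1.0, 1.0, 1.0]))  # all zeros
```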
Replace each value in the input column with its local average
A column may be sourced from either a DataSet or a FlexDataStore.
A column from a FlexDataStore must be of type Double.
should be used.
A column may be sourced from either a DataSet or a FlexDataStore.
A column from a FlexDataStore must be of type Double.
Threshold filter
For example, we may have a series of gas rate measurements for a separator versus time:
We want to remove the large spikes from the data but keep the rest of the results intact to give
us a more physical trend for our data:
The Spike filter has two main options: Based on Standard Deviation or Based on Median of
Absolute Differences. As the names suggest, these two options differ in how they calculate the
'background variability' of the sample. The 'background variability' is essentially a measure of
how much noise there is in the sample. If this value is large, most points could be considered
spikes in isolation, but that does not mean that they necessarily represent spikes when we
consider the entire range of the signal.
Alpha: Alpha is essentially the allowable noise on the stable signal. Any spike
which is found to be larger than alpha multiplied by the background variability
will be removed. Therefore, the larger this value, the larger a spike needs to
be to be removed.
Use Iterative Filtering?: If a spike is very large, it can have the effect of
making other spikes seem less important (by increasing the standard deviation,
for example). This means that when the first spike is removed, it may be
necessary to reassess the entire signal to see if any of the other spikes are
now large enough to remove.
If this option is selected, then the filter will be repeated on the sample
until it can process the sample without removing any new spikes. This is the
recommended option as it gives a better chance of obtaining an output signal
which has no spikes.
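A sketch of the Median of Absolute Differences variant with optional iterative filtering might look as follows; this is purely illustrative, as RESOLVE's exact formulation is not documented here.

```python
import numpy as np

def spike_filter(values, alpha=5.0, iterative=True):
    v = np.asarray(values, dtype=float)
    mask = np.ones(len(v), dtype=bool)          # True = sample is kept
    while True:
        kept = v[mask]
        centre = np.median(kept)
        mad = np.median(np.abs(kept - centre))  # background variability
        if mad == 0:
            break
        spikes = mask & (np.abs(v - centre) > alpha * mad)
        if not spikes.any():
            break
        mask &= ~spikes                         # remove the detected spikes
        if not iterative:
            break                               # single pass only
    return v[mask]

data = [10, 10.2, 9.8, 10.1, 200.0, 9.9, 10.0, 150.0]
print(spike_filter(data))
```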
Interaction
It is expected that this Data Object be interacted with via a Visual Workflow. Both the input data
and the resultant windows are passed to and from the object using the SampleList variable
type.
For more information on the different operations which can be done with the Windows Filter
object, please refer to the Visual Workflow User Guide.
For example, we may have a series of Flowing Wellhead Pressure measurements for a well
versus time:
We wish to find the windows in which the wellhead pressure is relatively stable as this will be
used to run the steady state well modelling calculations. The desired result is therefore a series
of windows which define these stable periods:
Calculation Steps
The Window Filter will start by splitting the data signal into as many windows as it calculates
are required to meet the settings. If this total number is less than the maximum number of
windows defined, then the filter is complete. If the total number is greater than the maximum
number of windows, then it will try to find the most representative windows which meet this
maximum value.
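One naive way to identify stable windows is sketched below; this is only an illustration of the concept and is much simpler than RESOLVE's actual window-selection logic.

```python
import numpy as np

def stable_windows(values, length=4, tol=0.5):
    """Return (start, end) index pairs of non-overlapping windows whose
    standard deviation is below `tol` (i.e. 'stable' periods)."""
    v = np.asarray(values, dtype=float)
    windows = []
    i = 0
    while i + length <= len(v):
        if np.std(v[i:i + length]) < tol:
            windows.append((i, i + length))
            i += length                      # jump past the accepted window
        else:
            i += 1                           # slide forward one sample
    return windows

data = [5, 5.1, 5, 4.9, 8, 12, 7.0, 7.1, 6.9, 7.0]
print(stable_windows(data))                  # [(0, 4), (6, 10)]
```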
Interaction
It is expected that this Data Object be interacted with via a Visual Workflow. Both the input data
and the resultant windows are passed to and from the object using the SampleList variable
type.
For more information on the different operations which can be done with the Windows Filter
object, please refer to the Visual Workflow User Guide.
Description: The Data Store data object is a general storage object which can be assigned
certain set columns and have data both written to and read from it. No unit conversion is applied
to the data.
The user interface allows the column names to be entered directly for this data object:
The 'Edit Columns' button allows the column headings to be edited by the user. New columns
can be added and the data can also be entered using visual workflows; the properties and
functions to perform this task are described below.
Input connections:
A variety of applications and workflows can be connected to the DataStore as inputs to pass
data to this data object.
Output:
Properties:
Column[name or number]
Refers to the entries in the specified column name or number
Value[row number]
This property refers to the value of the item in a particular row/column in the DataStore
Tablecount
Functions:
The following functions can be accessed for a DataStore from an operations element in a visual
workflow:
Clear the data (leave the column headings) from the DataStore[DataStore name]
This clears the entries in the DataStore; however, the column headings are retained after the
operation.
Populate or initialise the data table with a certain number of rows[DataStore name, number of
rows]
This function populates the DataStore with the specified number of rows
The data can then be entered, or imported from a DataSet or a Data Store in the form of an
*.rdo file.
Input connections:
Output:
None
Properties
FlexDataStore.TableCount
This is the number of columns of the FlexDataStore.
Column[n]
VariableName
DataCount
The number of data points in the table
Value[n]
The value of the nth data point in the table, if the column is of type String, Double, Integer
or DateTime.
SampleList[n]
The value of the nth data point in the table, if the column is of type SamplePtList.
DefaultValue
Functions
Clear the data (leave the column headings) from the store
View the DataStore interactively (will block workflow execution until screen is cleared)
2.6.6.2.1 SamplePt
A SamplePt is an object that contains two variables: a DateAndTime and a Double.
Properties
SamplePt.TimeOfSample
Contains a variable of type DateAndTime
SamplePt.Value
Contains a variable of type Double.
2.6.6.2.2 SamplePtList
A SamplePtList is a list of SamplePt:
Properties
SamplePtList.SampleCount
Number of SamplePt in the SamplePtList
SamplePtList.Sample[i]
Returns the ith SamplePt
Sample[i].TimeOfSample
Contains a variable of type DateAndTime
Sample[i].Value
Contains a variable of type Double.
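The two structures can be sketched as follows (a hypothetical Python stand-in; the real types are RESOLVE workflow variable types):

```python
# Hypothetical stand-ins for the SamplePt / SamplePtList structures above.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SamplePt:
    TimeOfSample: datetime   # variable of type DateAndTime
    Value: float             # variable of type Double

@dataclass
class SamplePtList:
    Sample: list = field(default_factory=list)

    @property
    def SampleCount(self):
        """Number of SamplePt in the list."""
        return len(self.Sample)

pts = SamplePtList([SamplePt(datetime(2021, 7, 1), 101.3),
                    SamplePt(datetime(2021, 7, 2), 99.8)])
# pts.SampleCount -> 2; pts.Sample[1].Value -> 99.8
```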
2.6.6.3 The use of data store objects in a Visual Workflow
If a FlexDataStore or DataSet has a Date variable in the first column, the entire table will be
added to the RESOLVE plot at the end of a run.
To illustrate this functionality, we have added a FlexDataStore on the canvas in RESOLVE and
populated this object with data, defining the first column to be the time.
As a result of the run, RESOLVE plots a graph of time against the data defined in the Data
Store, as shown below.
2.6.7 Distribution
The Distribution data object can be used to generate random numbers/samples drawn from a
distribution defined by the user. It is also part of the functionality for defining inputs in the Sibyl
data object.
Min sample value - The distribution will not be evaluated below this value
Max sample value - The distribution will not be evaluated above this value
Shape of curve - The chosen distribution type; options are:
Cauchy
Dual exponential
Dual gaussian
Laplace (exponential)
Normal (gaussian)
Rectangular
Tapered rectangular
Triangular
Is logarithmic - Checking this box will evaluate the distribution for the log of the x-axis variable. This allows the creation of, for example, a log-normal distribution when combined with the gaussian option
Curve properties - Depending on the distribution chosen, there will be different options associated with defining the precise shape and range
X-axis for graphical display - For certain distributions it may be more informative to examine the shape on logarithmic or geometric axes
Generate samples - Will randomly generate a set of samples of the desired size that conform to the defined distribution
Show manual curve scaling controls - Controls y-axis values; by default they are assigned such that the area under the curve is equal to 1
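How the options combine can be illustrated with a rough sampling sketch (hypothetical Python, not RESOLVE's internal sampler; it shows the gaussian shape, the min/max truncation and the 'Is logarithmic' option working together):

```python
# Illustrative sketch only: gaussian shape + truncation + logarithmic option.
import random

def draw_samples(n, mean, sigma, lo, hi, logarithmic=False, seed=0):
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(mean, sigma)     # 'Shape of curve' = normal (gaussian)
        if logarithmic:
            x = 10.0 ** x              # evaluate for the log of x -> log-normal
        if lo <= x <= hi:              # min/max sample value truncation
            out.append(x)
    return out

# e.g. log-normal permeability samples in mD, truncated to [10, 1000]
perms = draw_samples(1000, mean=2.0, sigma=0.5, lo=10.0, hi=1000.0,
                     logarithmic=True)
```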
2.6.8.1 Input
PROSPER models - Define the IPM-OS object containing the PROSPER model that describes the tubing string, one model for each
FWHP - Tubing head pressure of the string
Water Cut - Water cut of the liquid produced through the string
GOR - Producing GOR of the production fluid in the string
Orifice Depth - The expected injection depth of each string
Orifice Size - The size of the orifice where injection is expected. This will be used to calculate valve/orifice dP
Thornhill-Craver Derating - This coefficient is used to scale down the maximum gas injection rate that can be flowed through a valve or the orifice. As the maximum gas rate is decreased, a larger valve or orifice should be used to flow the same gas rate as in the original case (with no de-rating)
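The effect of the coefficient is a simple scaling, which a short sketch makes concrete (hypothetical numbers):

```python
# Derating is a straight multiplication of the maximum (critical) gas rate.
q_max_tc = 4.0    # MMscf/d, hypothetical Thornhill-Craver maximum rate
derating = 0.8    # derating coefficient (no de-rating = 1.0)

q_max = q_max_tc * derating   # usable maximum gas injection rate
# With the maximum reduced to 3.2 MMscf/d, passing the original 4.0 MMscf/d
# would require a larger valve or orifice.
```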
2.6.8.2 Calculation
This screen will display the results of the dual string calculations, based on the user-selected
option of which wellhead data is measured and available.
Performance curves for each string and the overall dual string completion are also calculated
and available.
The object can also be driven through a Visual Workflow and results extracted. The properties
and functions associated with these methods are outlined in the Visual Workflow manual.
EOS-PVT Data Object: Contains a fluid composition and a number of thermodynamic
calculations
Comp-Allocation Data Object: For a set of input compositions and an output composition, along
with an output rate, this performs a back calculation to obtain the input composition rates.
Comp-Blend Data Object: The Comp-Blend data object computes a new composition for a set
of input compositions and rates.
Description
The EOS-PVT data object contains a detailed fluid composition and encapsulates a number of
EOS thermodynamic calculations. These include performing a flash calculation, target GOR
calculations, calculations for flow assurance (hydrates, waxes etc.). The properties of the
composition can also be accessed via a workflow to perform dynamic calculations. A full list of
properties that can be accessed and the operations that can be performed are explained in the
'Properties' and 'Functions' sections below. Examples of using this EOS data object for flow
assurance and compositional blending are available in the Data Objects section of the example
guide.
An IPM EOS file (.prp) can be imported using the import .PRP button below. Standard PVT
calculations are available via the Data Objects interface (Phase envelope calculations, target
GOR, flash calculations etc.) and these can also be accessed via Visual Workflow.
Input connections
If connected to an application item, the composition will be passed and will populate this object.
In addition, the P, T of the FlashPt object will be set to the pressure and temperature of the
connected item.
Outputs
None
Properties
The physical properties of the composition stored in the EOS-PVT object can be accessed
dynamically via a workflow using the assignment/operations elements. A description of these
properties and functions is given in the Visual Workflow User Guide, along with examples of
their use and corresponding commands to perform flash calculations etc.
Description: For a set of input compositions and an output composition, along with an output
rate, this performs a back calculation to obtain the input composition rates.
When an EOS-PVT is connected to this data object, a window appears querying whether the
connected EOS is 'Solve compositions' (i.e. an input composition) or 'Total composition' (i.e.
the output composition).
The output total rate can be entered directly in the user interface of the 'Comp-Allocation' data
object:
The calculated rates for each of the input streams that result in the total rate entered in this
object are accessible using the properties explained below in a workflow.
Input connections:
EOS-PVT object (mandatory) - the composition which results from the blend of the other input
compositions
EOS-PVT list (at least two are required) - the input composition against which the back
allocation is performed.
These can also be assigned without the physical connection when performing the allocation
within a Visual Workflow.
Output:
None
Properties:
Properties and functions are accessed via Visual Workflow. These are outlined in the Visual
Workflow User Guide and explained in detail alongside examples of their use.
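The back-allocation idea can be illustrated for the simplest case of two input streams and one tracked component, where the rates follow from a 2x2 linear system (a hypothetical Python sketch, not the object's internal EOS-based method; all numbers are illustrative):

```python
# Find input stream rates that, blended, reproduce the measured output
# composition and total rate. Two streams + one component -> 2x2 system.
z1, z2 = 0.80, 0.40     # mole fraction of a component in inputs 1 and 2
z_out = 0.70            # mole fraction in the measured output composition
q_total = 1000.0        # measured total output mole rate

# q1 + q2 = q_total  and  z1*q1 + z2*q2 = z_out*q_total
q1 = q_total * (z_out - z2) / (z1 - z2)
q2 = q_total - q1
# q1 = 750.0, q2 = 250.0
```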
Description: The Comp-Blend data object computes a new composition for a set of input
compositions and rates.
The EOS-PVT data objects that are to be blended need to be connected to the 'Comp-Blend'
data object and another EOS-PVT data object which will hold the output composition should be
connected as shown:
The user interface of the 'Comp-Blend' data object allows the mass/volumetric rates to be
entered. These can also be entered using a workflow and the operation to calculate a blend
performed.
See the sections on 'Properties' and 'Functions' below for a description of the attributes and
operations that can be performed using this data object. An example where this Data Object is
used is given in the Examples Section.
Input connections:
EOS-PVT list (mandatory) - the required PVT objects that will be blended together
Output
None
Properties:
EOS[n]
The label of the nth composition attached to this data object
MoleRate[n]
The input mole rate for the nth composition (RateType = 0)
MassRate[n]
The input mass rate for the nth composition (RateType = 1)
OilRate[n]
The input oil volume rate for the nth composition (RateType = 2)
GasRate[n]
The input gas volume rate for the nth composition (RateType = 3)
RateType[n]
The type of rate for the nth composition:
0 - mole rate
1 - mass rate
2 - oil volume rate
3 - gas volume rate
ResultComposition
The output composition resulting from the blend. This is a data object of type EOS-PVT; the
properties and methods of this object can be accessed and used in subsequent calculations.
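For mole rates (RateType = 0) the blend reduces to a rate-weighted average of the input compositions, as this hypothetical sketch shows (component names and rates are illustrative, not the object's internal calculation):

```python
# Rate-weighted blend of input compositions (mole-rate case, RateType = 0).
comps = [
    {"C1": 0.90, "C2": 0.10},   # input composition 1
    {"C1": 0.50, "C2": 0.50},   # input composition 2
]
mole_rates = [300.0, 100.0]     # MoleRate[1], MoleRate[2]

total = sum(mole_rates)
result = {c: sum(comp[c] * q for comp, q in zip(comps, mole_rates)) / total
          for c in comps[0]}
# result -> {"C1": 0.80, "C2": 0.20}, the ResultComposition analogue
```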
Functions
Multiple EOS-PVT objects can be blended in a Visual Workflow in the given fractions. This can
be done by following the steps outlined in the algorithm below:
1. Place the required number of EOS PVT objects on the RESOLVE canvas and define a
composition within each object, as shown below.
2. Place a Workflow client on the RESOLVE canvas and within the Workflow client create
two variables: 1) an array with a size of 3 and 2) a variable of type Collection.
3. Within the Workflow element, place and define an Operational element which will collect
different PVT compositions together.
4. Add a second Operation element to the workflow which will blend multiple compositions in
the given fractions together. The command responsible for this operation is called Blend
multiple compositions in the given fractions from EOS thermodynamic calculations.
Another PVT object can be used as the output of the new composition.
The user interface allows selection of which set (lumped or full) is required as an output.
For more information on creating lumping rules see the PVTp user guide.
Input:
EOS-PVT Data Object with the composition along with a lumping rule
Output:
An EOS-PVT Data Object which holds either the full or the lumped composition based on the
choice made in the Data Object.
Properties:
Delump
Non-zero indicates that a delump is required when the data object solves; zero indicates that a
lumping is to be performed.
Functions:
None
This Data Object contains the field data used by the Multi Well Allocation (MWA) data object.
The first tab ("Field data & management") requires inputs for the field-measured totals for each
of the three phase rates (oil, water, gas).
It is possible to initialise the Field data object with the equipment present in an existing GAP
model in the "Equipment management" section above by clicking on 'Go'. Additionally, existing
results (of a 'Solve Network') in a GAP model, if present, can be brought into this object.
The field data is grouped in different tabs by equipment type; the data can be entered directly in
the screen above. A new property that is not present on the list can be added by clicking on the
'Add property' button above.
Input connections
None.
Output
None
Output connections
The Field Data object is connected to the MWA tool with which it is associated. The data object
can also be connected to other applications to pass data to them.
The properties and functions that can be accessed for the Field Data object via a workflow are
explained below.
Properties
Item: This variable corresponds to the label of the equipment in the model
Properties: This variable refers to the property for the specified equipment that is measured in
the field.
Functions
This function adds a property (e.g. "GasLiqRatio") with specified units (e.g. "scf/STB") to a
specified equipment type (e.g. Well/TANK/Joint/Separator). The arguments required are the
name of the Field Data object in RESOLVE, the equipment type, the property name and the
unit. This property appears in the 'Well' section and the measured value of this property in the
field can then be entered.
This function adds a new piece of equipment (e.g. a new well) for which the available measured
data can be entered. The arguments required are the name of the Field Data object in
RESOLVE, the equipment type (e.g. Well/TANK/Joint/Separator) and the name of the newly
added equipment.
This function displays the user interface of the Field Data object to edit the object while the
workflow is running. The argument required is the name of the Field Data object in RESOLVE.
This function initialises a new Field Data Object and associates it to the specified GAP module.
Additionally, data for the specified equipment type can be initialised from the solve network
results of the specified GAP module. The arguments required are the name of the new Field
Data object, the name of the GAP module in RESOLVE, the equipment type and the instruction
to initialise data from model (=1) or not (=0). This function is equivalent to getting the data from
GAP in the 'Equipment management' section ("Field data and management" screen above).
This function removes a property from a given equipment. The arguments required are the
name of the Field Data object in RESOLVE, the equipment type and the name of the property to
be removed (e.g. "GasLiqRatio")
This function removes all data from the specified Field Data object
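The add/remove functions above can be mimicked with a simple nested-dictionary stand-in (hypothetical Python; the real object is manipulated through workflow function calls):

```python
# Hypothetical stand-in for the Field Data object's structure.
field_data = {"Well": {}, "Tank": {}, "Joint": {}, "Separator": {}}

def add_equipment(store, eq_type, name):
    """Add a new piece of equipment of the given type."""
    store[eq_type][name] = {}

def add_property(store, eq_type, prop, unit):
    """Add a measured property with units to all equipment of a type."""
    for eq in store[eq_type].values():
        eq[prop] = {"unit": unit, "value": None}

def remove_all_data(store):
    """Remove all data from the Field Data object."""
    for eq_type in store:
        store[eq_type] = {}

add_equipment(field_data, "Well", "W1")
add_property(field_data, "Well", "GasLiqRatio", "scf/STB")
```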
There are a number of different Data Objects which all allow different GAP calculations to be
performed. These streamline the interaction process by allowing the User to insert the input
parameters into the Data Object, have the Data Object call the calculation and then allow the User to
extract the results from within the same Data Object. This avoids the need to know the exact
OpenServer strings for the different inputs, calculation commands and output variables for each
different calculation.
These Data Objects have no screens and so cannot be interacted with manually. Instead, all
interactions with the Data Objects are done via Visual Workflows. The properties allow the input data to
be set and the results to be extracted while the functions (accessible via the Operation element) allow
the calculations to be executed.
The different possible calculations are detailed below and more information on each is given in the
next sections:
ChokeDpCalculator - Used to calculate the pressure drop across a choke for given conditions
ChokeRateCalculator - Used to calculate the rate through a choke for given conditions
ChokeSizeCalculator - Used to calculate the choke size required for a given set of conditions
IPRCalculatorPFromQ - Used to calculate the BHP from the flow rate using the IPR curve
IPRCalculatorQFromP - Used to calculate the flow rate from the BHP using the IPR curve
VLPIPRCalculator - Calculates the rate (and other properties) of a well for a given set of boundary
conditions.
TPDCalculator - Interpolates the VLP curves for a set of boundary conditions to find the solution
output results.
PCCalculator - Calculates the FWHP vs Flow Rate performance curve for a well.
PCAutoALQCalculator - Calculates the performance curve for a gas lifted well with an automatically
generated range of GLR injected values.
2.6.11.1 Choke dP Calculator
The ChokeDpCalculator Data Object is used to carry out a choke calculation in GAP for a given
set of inlet conditions and known phase rates. The calculator will return the calculated outlet
pressure and temperature along with the critical rate, pressure and temperature.
Within the GAP Data Objects, a number of different calculations can be set up with different
input parameters and then when the object calculation is executed all of these calculations will
be performed together. The results of each calculation can then be picked up from the Data
Object.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
2.6.11.2 Choke Rate Calculator
The ChokeRateCalculator Data Object is used to calculate the rate which balances the inlet and
discharge conditions of a choke using a choke model. The calculator will return the calculated
fluid rates along with the critical rate, pressure and temperature.
Within the GAP Data Objects, a number of different calculations can be set up with different
input parameters and then when the object calculation is executed all of these calculations will
be performed together. The results of each calculation can then be picked up from the Data
Object.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
2.6.11.3 Choke Size Calculator
The ChokeSizeCalculator Data Object is used to calculate the choke size required to pass a
certain set of phase rates through a choke at given inlet and outlet conditions. The calculator will
return the calculated choke diameter along with the outlet temperature and critical rate, pressure
and temperature.
Within the GAP Data Objects, a number of different calculations can be set up with different
input parameters and then when the object calculation is executed all of these calculations will
be performed together. The results of each calculation can then be picked up from the Data
Object.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
2.6.11.4 IPR BHP Calculator
The IPRCalculatorPFromQ Data Object uses an IPR curve defined in GAP to calculate the
Bottom Hole Pressure of the well for a given flow rate. It will also return the derivatives of the
solution point.
Within the GAP Data Objects, a number of different calculations can be set up with different
input parameters and then when the object calculation is executed all of these calculations will
be performed together. The results of each calculation can then be picked up from the Data
Object.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
2.6.11.5 IPR Rate Calculator
The IPRCalculatorQFromP Data Object uses an IPR curve defined in GAP to calculate the flow
rate of the well for a given flowing bottom hole pressure. It will also return the derivatives of the
solution point.
Within the GAP Data Objects, a number of different calculations can be set up with different
input parameters and then when the object calculation is executed all of these calculations will
be performed together. The results of each calculation can then be picked up from the Data
Object.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
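With a simple straight-line PI inflow model (an assumption for illustration only; GAP supports many IPR models), the two IPR calculators are exact inverses of one another:

```python
# Illustrative straight-line PI IPR; all values are hypothetical.
p_res = 4000.0   # reservoir pressure, psig
pi = 2.5         # productivity index, STB/d/psi

def q_from_p(bhp):
    """IPRCalculatorQFromP analogue: rate from flowing BHP."""
    return pi * (p_res - bhp)

def p_from_q(q):
    """IPRCalculatorPFromQ analogue: BHP from flow rate."""
    return p_res - q / pi

q = q_from_p(3000.0)            # 2500.0 STB/d
assert abs(p_from_q(q) - 3000.0) < 1e-9   # round-trips to the same BHP
```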
2.6.11.6 Performance Curve Calculators
The Performance Curve Calculator Data Objects will calculate the FWHP vs Flow Rate
relationship for a well using the VLP and IPR curves of a well in a GAP model. This is done for a
certain fixed point in time and therefore the fluid phase ratios (water cut, GOR, CGR etc) are
fixed for the curve and the reservoir properties also remain constant.
As well as calculating the production rates of the well, other properties (such as maximum
mixture velocity, pump and motor performance etc.) are also reported. A specific property can
be set and the derivatives of all the resultant parameters with respect to this property will also be
reported.
PCCalculator - This object is used to generate a single Performance Curve for a fixed set of
conditions. For an artificially lifted well, this means a single artificial lift quantity
(such as gas lift injection rate or frequency) must be used.
PCAutoALQCalculator - This object is intended to be used when modelling gas lifted wells and
will calculate a series of Performance Curves for an automatically
generated range of GLR injected values.
Within the GAP Data Objects, a number of different calculations can be set up with different
input parameters and then when the object calculation is executed all of these calculations will
be performed together. The results of each calculation can then be picked up from the Data
Object.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
2.6.11.6.1 PCCalculator
The PCCalculator Data Object is used to generate a single Performance Curve for a fixed set
of conditions. For an artificially lifted well, this means a single artificial lift quantity (such as gas
lift injection rate or frequency) must be used.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
2.6.11.6.2 PCAutoALQCalculator
The PCAutoALQCalculator Data Object is intended to be used when modelling gas lifted wells
and will calculate a series of Performance Curves for an automatically generated range of GLR
Injected quantities.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
2.6.11.7 Lift Curve Calculator
The TPDCalculator Data Object uses the VLP curves of a GAP well to find the flowing
conditions for a given flow rate and set of boundary conditions. For example, for a given Water
Cut, GOR, FWHP and liquid rate the operation will return the FBHP, gauge pressures etc.
This could be included within a loop which changes one of the inputs (such as the liquid rate)
until one of the results (such as the gauge pressure) matches a measured point.
The calculator also allows the User to define a derivative variable and it will then return the
derivative of each of the result variables with respect to this derivative variable. For example, if
the derivative variable is entered as liquid rate, all of the result variables will include how the
variable changes with respect to liquid rate.
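The matching loop described above can be sketched as a bisection on the input variable (hypothetical Python; a monotonic stand-in function replaces the TPDCalculator call):

```python
# Vary one input (liquid rate) until one result (gauge pressure) matches a
# measured point. The stand-in below is purely illustrative.
def gauge_pressure(liquid_rate):
    # hypothetical monotonic response replacing the TPDCalculator result
    return 1500.0 + 0.4 * liquid_rate

measured = 2100.0
lo, hi = 0.0, 5000.0
for _ in range(60):                      # bisection on liquid rate
    mid = 0.5 * (lo + hi)
    if gauge_pressure(mid) < measured:
        lo = mid
    else:
        hi = mid
liquid_rate = 0.5 * (lo + hi)            # ~1500 STB/d for this stand-in
```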
Within the GAP Data Objects, a number of different calculations can be set up with different
input parameters and then when the object calculation is executed all of these calculations will
be performed together. The results of each calculation can then be picked up from the Data
Object.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
2.6.11.8 VLP/IPR Calculator
The VLPIPRCalculator Data Object uses the VLP and IPR curves of a GAP well to find the
solution rate of the well for a given set of boundary conditions. This essentially carries out a
system calculation using GAP and returns the well's production rates.
The calculator also allows the User to define a derivative variable and it will then return the
derivative of each of the result variables with respect to this derivative variable. For example, if
the derivative variable is entered as well head pressure, all of the result variables will include
how the variable changes with respect to FWHP.
Within the GAP Data Objects, a number of different calculations can be set up with different
input parameters and then when the object calculation is executed all of these calculations will
be performed together. The results of each calculation can then be picked up from the Data
Object.
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
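The solution-point idea behind the VLP/IPR calculation can be sketched as finding where the inflow and outflow curves meet (hypothetical linear curves stand in for the GAP-interpolated VLP and IPR):

```python
# The operating rate is where outflow (VLP) and inflow (IPR) pressures meet.
def ipr_bhp(q):
    return 4000.0 - q / 2.5      # inflow: BHP falls as rate rises

def vlp_bhp(q):
    return 1000.0 + 0.8 * q      # outflow: required BHP rises with rate

lo, hi = 0.0, 3000.0
for _ in range(60):              # bisect on the sign of (VLP - IPR)
    mid = 0.5 * (lo + hi)
    if vlp_bhp(mid) < ipr_bhp(mid):
        lo = mid
    else:
        hi = mid
q_solution = 0.5 * (lo + hi)     # ~2500: 4000 - q/2.5 = 1000 + 0.8q
```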
2.6.11.9 GAP Topology
The GAP Topology object queries a GAP model to understand the paths and links between all
nodes in the system (tanks, wells, joints, pipes etc). With Visual Workflows, this can be used to
investigate possible routing paths for a well, whether two parts of the system are connected
hydraulically, and which of these are masked. This is described in detail in the Visual Workflow
User Guide.
2.6.11.9.1 Interface
The main operations possible with the GAP Topology object are performed using Visual
Workflows. However, the interface of the GAP Topology object allows the user to get an
overview of the path(s) from the upstream or downstream boundaries of the system. This is
done by pressing 'Get topology' after selecting the relevant GAP model from the drop down
menu.
This will show the path from one boundary to another, with all joints/nodes in between displayed.
It can be selected whether to show the topology from the 'bottom up' (upstream boundary) or 'top
down' (downstream boundary); the differences are shown below for an example topology.
Options
Ignore equipment validation - Will allow the user to obtain the topology even if GAP equipment is invalid (e.g. missing VLPs or tank files etc)
These procedures would be performed using a Visual Workflow. Therefore, more information
on the properties and functions of this object are described in the Visual Workflow User Guide.
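The reachability question the object answers ("are these two parts of the system hydraulically connected?") can be sketched as a breadth-first search over a node/link graph (hypothetical labels; GAP holds the real topology):

```python
# The network as a directed graph of nodes and links (illustrative labels).
from collections import deque

links = {"Tank1": ["Well1"], "Well1": ["Joint1"], "Well2": ["Joint1"],
         "Joint1": ["Pipe1"], "Pipe1": ["Sep1"]}

def connected(a, b):
    """True if b is reachable from a along the links."""
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return True
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```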
2.6.11.11 Match IPR Calculator
This calculator is designed to match IPRs of wells in GAP to data of Q vs FBHP. This can be
done for oil or gas phase wells, and the reservoir pressure can be specified or fitted using the
test data. There is also the option to automatically update the well in question with the results or
not.
To use the object it must first be set up with information on the equipment name, layer, test
phase ratios etc. These procedures would be performed using a Visual Workflow. Therefore,
more information on the properties and functions of this object are described in the Visual
Workflow User Guide.
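For a straight-line IPR, FBHP = Pres - Q/PI, so fitting both the reservoir pressure and the PI from test data reduces to a least-squares line through the Q vs FBHP points. This sketch covers only that simplest oil-well case with hypothetical test data; the calculator itself handles other cases (e.g. gas wells, fixed reservoir pressure):

```python
# Least-squares line through Q vs FBHP test points:
# intercept -> reservoir pressure, slope -> -1/PI.
qs   = [1000.0, 2000.0, 3000.0]   # test rates, STB/d (hypothetical)
bhps = [3600.0, 3200.0, 2800.0]   # measured FBHP, psig (hypothetical)

n = len(qs)
qm, pm = sum(qs) / n, sum(bhps) / n
slope = (sum((q - qm) * (p - pm) for q, p in zip(qs, bhps))
         / sum((q - qm) ** 2 for q in qs))
p_res = pm - slope * qm      # fitted reservoir pressure -> 4000.0 psig
pi = -1.0 / slope            # fitted productivity index -> 2.5 STB/d/psi
```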
The tool is designed to help engineers screen through various input parameters and understand
their relative impact on the model response. From this, several inputs can be selected and used
as the control variables of an optimisation algorithm, whose aim is to minimise the mismatch
between the history data and the simulation.
Two methods are available for performing sensitivities on the input parameters:
Independent sensitivity analysis: each input parameter is varied independently from the other
parameters starting from a reference case
Dependent sensitivity analysis: all combinations of the inputs parameters are evaluated
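The difference between the two schemes can be sketched by counting cases (hypothetical parameters; 'independent' varies one input at a time from the reference case, 'dependent' is the full factorial):

```python
# Independent vs dependent sensitivity case generation (illustrative).
from itertools import product

params = {"perm": [50, 100, 200], "skin": [0, 5], "aquifer": [1e6, 1e7]}
reference = {"perm": 100, "skin": 0, "aquifer": 1e6}

# Independent: each parameter varied alone, others at reference -> 3+2+2 cases
independent = [dict(reference, **{name: v})
               for name, values in params.items() for v in values]

# Dependent: all combinations of the inputs -> 3 * 2 * 2 cases
dependent = [dict(zip(params, combo)) for combo in product(*params.values())]
```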
The imported data then needs to be mapped to the different model variables. For example,
mapping will be required between well names in the imported data and in the model, and
between columns in the imported and model variables such as phase rate, BHP, WHP etc.
Import history
Select 'Import history' to begin the import process. The following window appears:
Import data source - Select the data source for data import. This may be:
A table of data copied from the clipboard
One or several text files: *.csv, *.tsv and *.txt are supported
Note: Multiple files may be selected at once for data import. This may be used, for example, if the production history of the different wells is held in separate files.
Start date - This entry is optional and is used in cases where the date is defined as the number of days since the beginning of production
Format of data being imported - This allows the user to define the culture format of the data being imported
The following examples illustrate the supported format for the data.
Notes:
Excel tables should first be copied into the clipboard or saved as a *.csv or tab delimited *.txt
file.
The column names, well status etc. do not need to be the same as in the examples below.
Mapping between the columns and model variables will be performed subsequently.
Example 1:
Example 2:
Plot history
This button allows the user to plot the imported production history and ensure that the data has
been imported correctly.
The available controls are dependent on the data which has been imported in the 'History data'
tab. For example, the model can only be controlled with fixed WHP if WHP data has been
imported.
An overall error value will be calculated for each history case using the differences between the
history values and their corresponding model variable values.
Variable weightings are a simple means to set the relative significance of each mapped
variable to the overall error. A weighting of zero (or blank) will exclude that variable from the
overall error calculation.
Note that variables that have been defined as well controls are not shown here, as it is assumed
that all well control targets will be achieved and thus the history-to-model differences for all well
control variables will be zero.
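The weighted error calculation can be sketched as follows (hypothetical data; a zero weight drops the variable from the total, as described above):

```python
# Weighted sum of squared history-to-model differences (illustrative).
history = {"BHP": [2100.0, 2050.0], "OilRate": [5000.0, 4800.0]}
model   = {"BHP": [2150.0, 2000.0], "OilRate": [5100.0, 4900.0]}
weights = {"BHP": 1.0, "OilRate": 0.0}   # OilRate excluded by zero weight

error = sum(w * sum((h - m) ** 2 for h, m in zip(history[v], model[v]))
            for v, w in weights.items() if w)
# error = 1.0 * ((2100-2150)**2 + (2050-2000)**2) = 5000.0
```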
2.6.12.5 Report variables
When running a history match, RESOLVE automatically stores the model results that
correspond to the imported history data, so that comparisons can be made (for example
between the historical BHP and the simulated BHP). This tab allows the user to define
additional model variables to be stored and plotted by RESOLVE.
Add - View/Edit
These variables can be defined using an OpenServer string or selected from the available drop-down menu.
The RESOLVE ICD Analysis Data Object is designed to build REVEAL simulation models for
ICD and ICV optimisation studies.
The primary objectives here are to maximise an objective function (revenue or production) by
investigating optimum device type/configuration for a single well along with reservoir uncertainty
analysis.
The well design is based on production performance and economics; this involves reducing
water/gas coning effects and delaying unwanted fluid breakthrough. Well design
varies depending on reservoir types, well types and user objectives. Higher-level objectives are
to maximise revenue or profit, with lower-level objectives being equal liquid production along the
wellbore (influx equalisation). The use of the devices can delay the start of artificial lift required
for the well, which can improve cash flow by delaying capital costs.
The basic idea here is that we define our PVT, reservoir conditions and start with a pre-existing
well description with a number of controllable devices based on equipment found in the
REVEAL equipment database. The locations, types and control of the devices then become
part of possible scenarios for the ICD analysis data object.
The ICD Analysis Data Object uses full reservoir simulation, thereby allowing full life of well
analysis rather than analysis only for a particular point in time. The use of REVEAL as an
integrated well-reservoir simulation tool implies tight coupling between the response of the well
and the reservoir, thereby creating a consistent numerical model.
This section illustrates the design ideas of the ICD analysis data object added in IPM 9 for ICD/
ICV optimization studies.
Go to General description.
2.6.13.1 General description
This section describes the workflow for using the ICD Design Data Object.
In order to setup an ICD analysis design model it is necessary to have the following:
The permeability and porosity profile can be entered through the lithology profile within the
Resolve Well Builder (optional) or directly within the ICD Analysis Data Object Reservoir
section. The advantage of using the lithology option within the RESOLVE Well Builder is that we
obtain a visualisation of the permeability/porosity profile, which can help identify different
inflow characterisations; this information can then help determine the isolation
requirements and the ICV or ICD distribution.
One or more ICD Analysis design Data Objects can then be added to the RESOLVE canvas
and a link to both the BO-PVT and Well Builder Data object created:
Linking the Well Builder Data object to the ICD Analysis Data Object results in the automatic
generation of the reservoir grid, divided into sections (referred to as layers) based on MD.
Layers are vertical for a horizontally-oriented well, and horizontal for a vertical well.
If a lithology survey has been included in the Well Builder Data object, layers are
generated automatically from the lithology survey, and the associated permeability/porosity
profile is transferred to the ICD Analysis Data Object. If a lithology survey is not defined, the
depths and lengths of the layers are based on the ICD/ICV device positions and packer
isolation in the completed section of the Well Builder Data object. The layer definitions and
permeability/porosity profile may be manually updated within the ICD Analysis Data object.
IMPORTANT NOTE: As the linking action results in automatic reservoir grid layer generation
it is strongly recommended that the link between the Well and ICD Data Objects is made
AFTER the Well Object description has been fully defined.
The PVT screen displays a summary of key PVT attributes entered via the BO-PVT data object.
The ICD Analysis Data object supports black oil (Gas, Oil and Retrograde Condensate) fluid
types. A BO-PVT data object needs to be populated and a link created between the BO-PVT
data object and the ICD Analysis Data Object prior to using the ICD analysis data object. PVT
(black oil) attributes may be shared by creating a link between a single BO-PVT object instance
and one or more ICD Analysis Data object instances.
Refer to the BO-PVT Data object help section for more information.
The data entered in the reservoir screen is listed below, and the areas where it is entered are
shown in the screenshot above:
If the well builder description contains a lithology profile (recommended) the layers automatically
generated by connecting the well builder object to the ICD analysis object will reflect the lithology
profile (including the porosity/permeability profile). This has been discussed in the general
description section.
If lithology layers are not defined in the well object then the ICD analysis object will automatically
generate reservoir layers that match the ICV/ICD and packer positions. In this case the
permeability and porosity profile is entered through the reservoir screen of the ICD data object.
The user is free to replace the reservoir layer description to reflect porosity and permeability
variation (e.g. from log files).
3. Configure zones to reflect the Oil/Water/Gas contacts, e.g. aquifer and other initialisation
data. If there is no oil-water contact present, this cell can be left blank.
A visualisation is available which shows the well trajectory in relation to the reservoir:
The gridblock length for the first row of data entered in the 'Zones/Layers' tab corresponds to the
distance from the reservoir edge to the start of the completed section (indicated by the arrow in
the visualisation screen above). Subsequent gridblock lengths are calculated using the entered
MD(End) data.
The Scenario Management tab in the well screen can be split into the following four sections as
shown in the screenshot above:
1. Equipment
2. Simulation control
3. Scenarios
4. Simulation and results
1. Equipment
The objective here is to define the equipment (ICDs/ICVs/screens) that are to be sensitized on
for the analysis.
The device positions in the 'Scenarios' table (explained below) match the ICD/ICV equipment
count found in the completed section of the connected Well Builder Data Object. From the well
screen it is possible to set up a list of additional ICD/ICV/Screen equipment items to be made
available for design consideration by the various ICD methods found on this screen.
The Add Equipment command will create a new equipment item name as an alias for an
existing device found in the REVEAL equipment database. The list of available equipment
corresponds to the REVEAL Equalizer, Nozzle and Generalised equipment databases. The
selected Type (Equalizer™, Generalised ICD, Nozzle ICD, ICV or Screen) enables
additional filter controls to refine the search and configure device settings. For further details on
these devices please refer to the REVEAL manual; a brief description of their usage in
relation to the ICD analysis data object is given in the following table:
ICD – An Inflow Control Device; corresponds to one of the following types:
Baker Equalizer™, Generalised ICD or Nozzle ICD. If a Generalised ICD or Nozzle
ICD is used, its properties must first be defined in the REVEAL equipment
database before it can be used in the ICD data object. Refer to the REVEAL manual
for instructions on adding this equipment.
ICV – An Inflow Control Valve, modelled as an orifice with a controllable
area. The properties that specify an ICV are its area and its discharge coefficient.
Screen – Screens are modelled as devices that cause a very small pressure drop as
fluids flow from the annulus to the base pipe. Screens are not controllable (i.e. fixed
dP), and if they exist in the original well object their individual device positions
are not transferred, i.e. their positions are not included as columns in the scenario
table below. Screens may be added to the available equipment list and included by
the simulation methods (i.e. in one or more scenarios) by substituting a
controllable device at any of the available device positions.
Note – The action of running a simulation including one or more scenarios does not update the
original well description i.e. the original ICD/ICV equipment found in the completed section of
the well builder data object will remain unchanged after running a simulation.
2. Simulation control
Fixed THP – Well control is fixed WHP (pressure at the top of the well) for the duration of the
simulation, rather than fixed BHP or fixed rate. It is important that the entire well is defined up to
the surface in the Well Builder data object before comparing different scenarios; this allows a
fair comparison that considers both the IPR and the VLP aspects of the well.
Time (days) – The duration of the simulation run for the scenarios. If blank, the model will run
until the abandonment WCT is reached.
Marginal water cost (alpha) – The fraction of the oil revenue required to treat the produced
water. This is used in a number of calculations, such as defining the abandonment
WCT = 1/(1+alpha) and calculating the NPV (the objective for optimisation).
Annual discount rate – The factor used to discount future production to present terms when
calculating the NPV. Note that the NPV calculated in this object represents discounted oil
production only and is not the NPV calculated using financial calculators. The RESOLVE NPV
allows various well configurations to be compared and therefore shows which configurations
provide more oil in early time.
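For orientation, the two economic relations above can be sketched as follows. This is an illustrative calculation, not RESOLVE's internal code, assuming yearly oil volumes and the relations quoted in the table (abandonment WCT = 1/(1+alpha); future production discounted by the annual rate):

```python
def abandonment_wct(alpha):
    """Abandonment water cut from the marginal water cost: WCT = 1/(1 + alpha)."""
    return 1.0 / (1.0 + alpha)

def discounted_oil(yearly_oil, discount_rate):
    """Discount each year's oil production back to present terms (this
    'NPV' compares configurations by early-time oil only)."""
    return sum(q / (1.0 + discount_rate) ** t
               for t, q in enumerate(yearly_oil))

wct = abandonment_wct(0.25)                  # 1/1.25 = 0.8
npv = discounted_oil([100.0, 100.0], 0.10)   # 100 + 100/1.1
```

A configuration that produces the same total oil earlier therefore yields a higher discounted value, which is exactly how the RESOLVE NPV ranks scenarios.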
3. Scenarios
The auto-generation process (which occurs on first connecting the ICD and well data objects)
will result in the creation of a single scenario based on ICD position layout found in the well
object. From the well screen it is possible to setup and run additional ICD/ICV device scenarios
using a range of design methods.
Depending on the selected method (see the table below), scenarios can be set up to compare
alternative equipment configurations. In each scenario's description it is possible to either:
- manually set a device at each device position from the list of available equipment items, or
- provide grouping options, which allow the system method to substitute devices automatically
at each position.
ICD – This method creates a simulation model that includes a well with fixed ICD device
types at the specified positions. ICD equipment will appear in the list only if it has
been pre-defined in the equipment section of this screen.
ICV Open/Closed – This simulation method uses controllable ICVs at the specified locations to
maximise the objective function of oil production: the ICVs are either fully open or
fully closed at any timestep during the simulation. Once an ICV is closed, by default,
it will not be opened again.
ICV Gradient – This method uses controllable ICVs to maximise the objective function: the ICVs
can be partially opened (rather than fully open or closed) and their settings are
calculated at each timestep to optimise oil production.
GA – Optimises the ICD configuration for the given objective function over the duration of
the simulation using a modified genetic algorithm (GA). The algorithm generates
and runs a number of equipment configurations, auto-seeding possible configurations
and crossing over from existing (elite) generations to new ones in order to find the ICD layout
that maximises the objective function over the simulation period. See the section on
genetic algorithms for more information on their use in RESOLVE.
Since this approach can be set up to use simultaneous REVEAL instances, the
maximum number of simultaneous REVEAL models should be set in the
configuration tab prior to running the simulation.
Note: The grouping configuration for the ICVs determines the ICV layout applied to the
well description before a scenario is run or exported. If different groups are included (e.g.
Group1, Group2, etc.), different ICVs will be placed in the simulation model
and can be controlled independently. Having a single group at all locations means that the
ICV setting at all of these device positions will be the same at any time (i.e. they cannot be
controlled independently).
The following options are available with regard to running the scenarios and viewing the results:
Run selected – Runs the scenarios selected in the Scenarios table. Multiple selections can be
made in the table to run several scenarios at once.
Use cluster – Check this box to use the IPM PxCluster when running scenario simulations. If
this option is selected, the cluster must be started before the scenario simulations are run; see
the configuration tab for details.
Scenario Info – Shows the inputs/information for the scenarios selected in the Scenarios table,
including the reservoir layer and zone properties along with equipment details. If the scenario
has been run, the objective function results are also displayed. The reservoir input data in the
Scenario Info corresponds to the data at the time the scenario was run; if the reservoir
conditions are changed, this data is not overwritten unless the scenario is run again under the
new reservoir conditions.
REVEAL Export – Exports the well description from the Well Builder data object along with the
inputs in the ICD Analysis data object.
Show results – Results for the selected scenarios can be viewed using this option. See below
for more information.
Cumulative comparison – Compares the cumulative production and NPV of the selected
scenarios to identify the best scenario for the optimisation. See below for more information.
When one or more scenarios have been run successfully and results exist, the scenario results
may be plotted individually or compared side by side. The scenarios table is colour-coded:
green indicates scenarios whose results correspond to the current inputs in the data object,
amber indicates results that no longer correspond to the current inputs, and white indicates
scenarios that have not been run. The plots and corresponding input data for a selected
scenario are not overwritten until the scenario is run with a different set of input conditions.
By selecting multiple rows in the scenario table, it is possible to compare different scenario
results together using the "Show Results" command:
Tubing results are reported over the length of the well spanning the first to the last ICD/ICV
control device position. ICD/ICV equipment may exist beyond the completed length of the well,
and results are reported for such devices even though they do not appear as controllable
device positions (scenario columns).
The ICD/ICV can be located either on the base pipe or the annulus. Based on the location of the
device, the following results are available:
1. Tubing or Annulus ICD Rate
2. Tubing or Annulus ICD pressure drop
3. Annulus Reservoir Pressure or Tubing Pressure
4. Oil/Gas/Water inflow phase rates
5. Permeability and Zone profile against ICD profile. If the selected scenario uses the ICV
Gradient or ICV Open/Closed method, the ICD icons are replaced with open/closed status bars
at each device position.
6. Oil/Gas/Water phase saturations
7. Oil/Water/Gas produced (or Cumulative).
All the above plots can be viewed versus time using the scrollbar at the top of the screen.
Cumulative Comparison
The "Cumulative comparison" command in the screen above provides cumulative and NPV
data comparison for the selected scenarios on the same plot:
Cumulative and rate comparison results versus time are also accessible as RESOLVE
Workflow Methods.
Go to the next section.
2.6.13.4.2 Configuration
The configuration tab provides various options for setting the number of REVEAL instances, GA
options as well as PxCluster settings:
Convergence tolerance – Acts as a multiplier on the convergence limit settings in the control
section of REVEAL. The smaller this fraction, the tighter the convergence tolerances for the
simulations, at the expense of longer run times. The default value should be kept unless the
REVEAL model has been properly tested.
Maximum number of simultaneous REVEAL instances – The scenarios use simultaneous
REVEAL instances for ICD/ICV configuration and optimisation. This input sets the maximum
number of REVEAL instances that will run at the same time. It should be configured with due
consideration of the number of available REVEAL licenses; by default the maximum number of
REVEAL instances is set to 4, meaning that running one or more scenarios will use up to 4
concurrent REVEAL licenses simultaneously.
Log simulation errors – Creates a text file that logs details of the simulation run. The path
where the log file is saved is shown on the screen.
GA configuration – A number of inputs are provided for the GA method. Note that the total
number of cases to run for the GA method = number of generations to run x number of
chromosomes in each generation.
PxCluster Settings – Click the "Run PxCluster Console" button (or use the RESOLVE
Wizards menu) to start and/or configure PxCluster. The "configure driver" button shows
options for the PxCluster executable.
Example of setting up a local cluster and running scenarios: start the local cluster on the
machine where RESOLVE is running, i.e. click "Run PxCluster Console" and then the large
open-folder icon to start the local PxCluster service (please be patient, as this may take some
time). Next, return to the "Scenario Management" tab on the Well screen, check the "use
cluster" option, select one or more scenarios and choose the "Run Scenarios" command. The
PxCluster management console will display statistics for the running jobs (our scenarios).
Well Description: Well Object - the well description and deviation survey
Output:
None
None
Properties:
DiscountRateFrac
FixedTHP
MaxGridBlockSize
OilToWaterCost
PayThickness
ReferenceDepth
ReferenceTemperature
SimulationTime
Functions:
Add ICD/ICV/Screen (REVEAL) equipment type to the list of equipment available to the various
ICD methods.
Example 1:
methodType = ICDMethodType.ICD
commaSeparatedDeviceList = "EQ-0.2,EQ-0.4,EQ-0.4,EQ-0.4,EQ-0.4,EQ-0.4,EQ-0.4,EQ-0.2"
Two ICDs named EQ-0.2 and EQ-0.4 exist in the equipment list, and 8 device positions are
available.
Example 2:
Each comma-separated token (representing one of the available device positions) has the
prefix 'Group' and an integer less than or equal to GetDeviceCount (8 device positions
available in this case).
Data Set Data Object: Generalised storage for columns of data, with plotting and regression
options
Vector Data Object: Vector structure that can be used to perform vector mathematics
operations.
Matrix Data Object: Matrix structure that can be used to perform matrix calculations such as
inversions, transpositions etc.
Simplex Data Object: Method used in linear programming to find the minimum or maximum of a
linear function of multiple variables within a set of constraints.
2.6.14.1 DataSet Data Object
Description: Generalised storage for columns of data, with plotting and regression options.
Input connections:
Output:
None
If they do not already exist, columns will be created for each of the properties that are passed in
the standard Resolve data set (i.e. pressure, temperature, phase rates, mass rate, date). At
each timestep, a new data point will be added to each column.
Properties:
TableCount
The number of tables (columns) in the DataSet
Column[n]
Returns the nth column of the DataSet as a DataSetTable.
VariableName
The name of the variable of the table
VariableUnit
A unit identifier into the Resolve units system, if present.
UserUnit
If no unit identifier, this returns the text of the unit for the variable of the table
DataCount
The number of data points in the table
RealDataCount
The number of data points, excluding blanks, in the table
Value[n]
The value of the nth data point in the table.
ClearRslvDataAtStart
This flag is used when the DataSet object is connected to an application item on the Resolve
screen (such as a GAP separator). In this case, the DataSet will be populated by the data
calculated at the separator at each timestep (if a forecast). This flag indicates whether the
current set of data held in the DataSet should be cleared at the start of the Resolve run.
ClearRslvDataEachPass
As above; this flag indicates that the data should be cleared at each pass of Resolve during a
run.
CurrentXVar
CurrentYVar
CurrentOutVar
The x, y, and output variables (as text) as input to regression problems.
PolyFitCoeffs
A polynomial fit fits the expression: y = a + b.x + c.x^2 + d.x^3 + e.x^4 + f.x^5
Returns the current array of polynomial coefficients as calculated by the polynomial linear
regression.
PolyFitCoeffFlags
Sets or returns an array of flags indicating whether a coefficient is to be part of the fitting.
ExpFitCoeffs
ExpFitCoeffFlags
The exponential fit fits the expression: y = a.e^(b.x). These properties are analogous to those
above for the polynomial fitting.
ModelDll
The name of a DLL which contains the code for a curve fit using Levenberg-Marquadt. See the
user interface discussion for more information.
Functions
The first set of functions are simplified versions of more general fitting routines further below.
Returns the chi-squared (variance) between the two columns of data. Either integer indices
(second form) or variable names (first form) can be used to specify the data columns in
question.
The remaining functions are more specific versions of the above with additional options.
The DataSet is a generic way of storing data in tabular form and allowing operations to be
performed on that data. Each column has an assigned unit, meaning that unit conversions
between different systems can be carried out without any additional user logic. The data can
also be interpolated, or have regressions run on it, using the in-built functionality. Finally, the
data can be plotted for visualisation.
The data entry screen, which is obtained by double-clicking on the icon, consists of several
sections.
DataSet section
DataSet columns
The data can be entered by hand, pasted from another source or, as in this case, derived by
connecting the object to a Resolve application icon (e.g. a separator in GAP).
The columns are originally set up by clicking on the Edit Columns button. This is also brought up
automatically the first time a DataSet object is created. The following screen is presented:
Layout
This drop-down list sets the DataSet to one of a group of pre-defined types. For example, if set
to an IPR table, the DataSet is set up with columns for BHP, phase rates and mass rates.
These columns cannot then be deleted, although they can be moved around (from the Layout
button, above). Other columns can still be added.
Properties
Properties screen
This screen defines some basic properties that govern the behaviour of the object when it is
part of a Resolve model.
These two options become active when a DataSet is directly connected to a separator of a
GAP model. When the model is run, the production rates and pressure values are added to the
columns of the DataSet at each time step.
None - no action
Perform polynomial fit with saved parameters - perform a polynomial regression with
parameters as already set up in the regression screen.
Perform exponential fit with saved parameters - as above, but performing the exponential fit
Perform user model fit (Levenberg-Marquardt) - as above, but performing the user fit with the
model as supplied by the user
Evaluation
Evaluation screen
This screen allows interpolation (or extrapolation) between any two columns in the DataSet.
Select from the drop down lists the variables for which the evaluation is to take place. The
known value for either variable can be entered. To calculate the value of the unknown variable,
the appropriate arrow button can be clicked.
In the example above, the date (06/07/2000) was known and the oil rate at this date was
evaluated. Equally, the date at which an oil rate was produced could be estimated in the same
way.
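The evaluation is a straightforward interpolation between the two selected columns. A minimal sketch with numpy, using hypothetical data with dates expressed as day numbers:

```python
import numpy as np

# Two columns of a DataSet: date (as a day number) and oil rate.
days = np.array([0.0, 10.0, 20.0, 30.0])
oil = np.array([500.0, 450.0, 400.0, 350.0])

# Known date -> interpolated oil rate (midway between 450 and 400).
rate_at_15 = float(np.interp(15.0, days, oil))

# Known oil rate -> interpolated date: interpolate the inverse relation,
# valid here because the rate decreases monotonically with time.
day_at_425 = float(np.interp(425.0, oil[::-1], days[::-1]))
```

Extrapolation beyond the table, as the Evaluation screen allows, would require extending this sketch, since np.interp clamps to the end values.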
2.6.14.1.1.4 DataSet Object - Regression
Regression
The regression screen allows curve fitting between two columns of data in the DataSet. The
columns of data to be fitted are selected from the drop-down lists at the top of the screen. In
addition, and optionally, an additional output column can be specified; the result of the curve fit
will be entered in that column for ease of plotting.
1. A 6th degree polynomial. Coefficients can be removed (or fixed) in the fit by checking or
unchecking the boxes in the grid; this can therefore supply a linear, quadratic, or lower order
polynomial.
2. An exponential, y = a.exp(b.x).
When the calculate button is pressed, the coefficients of the fit are displayed as shown. In
addition, the chi-squared of the fit is given, and a new column of data is created with the result of
the fit if required.
User regression
The regression capabilities also include the ability to perform a regression to any user model by
way of a Levenberg-Marquardt algorithm.
User regression
The regression requires that a model be set up by the user. This model represents the functional
form that is the expected relation between the two tables of data. It can be entered in two
different ways:
1. From a script
2. From a DLL
The script, as shown above, can be written in either C# or Visual Basic (.NET). Existing
templates in either language are supplied and can be set up by clicking the appropriate 'default
template' button. If nothing is pressed, then the 'Edit script' button will invoke the script editor
with the standard C# template code:
The standard template, as shown, compiles into an assembly which satisfies the requirements
of a Levenberg-Marquardt model for the DataSet.
The comments should provide sufficient guidance. Suffice it to say, there are two entry points
that need to be present: an array of double-precision values representing the fitting parameters
(coefficients) of the model, and a routine which the algorithm will call. This routine is passed a
value of x and the array of fitting parameters, and should return the corresponding value of y.
The templates implement the model y = a.exp(b.x), which is also one of the standard functions.
The above code can be compiled into a DLL (either from the script editor or from a standard
development environment) and this can be entered under the 'model file' in preference to the
script.
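For orientation, an equivalent user-model fit can be sketched outside RESOLVE with scipy's Levenberg-Marquardt implementation. This mirrors, but is not, the DataSet's own fitter, and the data is synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    """The template's user model: y = a * exp(b * x)."""
    return a * np.exp(b * x)

x = np.linspace(0.0, 2.0, 20)
y = 2.5 * np.exp(-0.7 * x)          # noise-free synthetic data

# method='lm' selects scipy's Levenberg-Marquardt algorithm; p0 holds
# the starting values of the fitting parameters, as in the grid below
# the 'Load Model' button.
params, _ = curve_fit(model, x, y, p0=[1.0, -0.1], method='lm')
a_fit, b_fit = params
```

As in the DataSet, the starting values matter: a poor initial guess can cause Levenberg-Marquardt to converge to a local minimum or fail.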
Once the script, or DLL, is ready, the 'Load Model' button should be pressed. This will load the
grid below the button with the array of fitting parameters with their starting values.
When the 'calculate' button is pressed, the fitting algorithm is carried out as normal. The
calculated coefficients will be displayed in the grid, and the iterations of the Levenberg-
Marquardt will be displayed in the panel, as shown above.
Any errors in any of the above procedure should be displayed in the panel at the bottom of the
screen.
Plotting
The plotting tab allows any columns of the data set to be plotted against each other.
In the example above, the oil rate (on the left hand axis) is plotted against the date. In addition,
the column 'fit', which represents the results of a curve fit of oil rate against date, is plotted on the
same axis. The temperature is plotted on the right hand axis.
The required variables are selected from the list boxes on the left hand side of the screen.
Multiple variables can be plotted on the same axis provided that they have the same unit.
The properties of the plot can be adjusted by clicking on the 'Properties' button. In addition, this
allows printing and exporting of the plot to various formats, and various manipulations of the
data.
2.6.14.2 Vector Data Object
Description: The Vector data object is a general vector structure that can be used to perform
vector mathematics operations.
Through the user interface, the vector can be defined directly, and additional rows added using
the 'Add row' button. The vector description can be entered using a visual workflow, and a
description of the properties and the functions that can be performed is given below.
Input connections:
Output:
None
None
Properties:
Value[n]
The nth value of the vector. If setting this value, the vector will be expanded automatically if the
index is out of the current range.
Size
The number of elements in the vector.
Sum
The sum of all the elements in the vector.
Average
The average of all the elements in the vector.
Functions
Arithmetic operations + and - are supported between vectors, e.g.
v = v1 + v2
v = v1 - v2
Arithmetic operations * and / are supported between vectors and scalars, e.g.
v = v1 * d
v = v1 / d
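A minimal Python analogue of the behaviour described above (auto-expansion on out-of-range assignment, sum and average, and the arithmetic operations); the class and names here are illustrative, not the OpenServer API:

```python
class Vector:
    """Illustrative stand-in for the Vector data object."""

    def __init__(self, values=()):
        self.values = list(values)

    def __setitem__(self, n, v):
        # Expand automatically if the index is out of the current range.
        if n >= len(self.values):
            self.values.extend([0.0] * (n + 1 - len(self.values)))
        self.values[n] = v

    def __getitem__(self, n):
        return self.values[n]

    @property
    def size(self):
        return len(self.values)

    @property
    def sum(self):
        return sum(self.values)

    @property
    def average(self):
        return self.sum / self.size

    def __add__(self, other):              # v = v1 + v2
        return Vector(a + b for a, b in zip(self.values, other.values))

    def __sub__(self, other):              # v = v1 - v2
        return Vector(a - b for a, b in zip(self.values, other.values))

    def __mul__(self, d):                  # v = v1 * d (scalar)
        return Vector(a * d for a in self.values)

    def __truediv__(self, d):              # v = v1 / d (scalar)
        return Vector(a / d for a in self.values)

v = Vector([1.0, 2.0, 3.0])
v[4] = 5.0      # the vector expands automatically to size 5
```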
Description: The matrix data object is a general matrix structure that can be used to perform
matrix calculations such as inversions, transpositions etc.
The matrix entries can be defined directly through the user interface: the 'Add row' and 'Add Col'
buttons expand the matrix size. A full description of the functions and operations for the matrix
data object is provided below.
Input connections:
Output:
None
None
Properties:
Rows
The number of rows which comprise the matrix
Cols
The number of columns which comprise the matrix
Row[n]
Returns a vector which represents row 'n' of the matrix
Col[n]
Returns a vector which represents column 'n' of the matrix
Val[i][j]
Returns the i,j element of the matrix
Square
Return non-zero if this is a square matrix
Unit
Returns non-zero if this is a unit matrix
Functions:
Transpose(Matrix A)
Transposes the matrix A in place.
LUInvert(Matrix A)
Inverts matrix A in place
Determinant(Matrix A)
Returns the determinant of the matrix A.
m = m1 + m2
m = m1 - m2
m = m1 * m2
m = m1 * v (vector)
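The same operations can be sketched with numpy (illustrative only; the matrix values are arbitrary):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

At = A.T                         # Transpose
Ainv = np.linalg.inv(A)          # LU-based inversion
detA = float(np.linalg.det(A))   # Determinant: 4*6 - 7*2 = 10
B = A @ A                        # m = m1 * m2
Av = A @ np.array([1.0, 1.0])    # m = m1 * v (vector)
```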
One example of where the Simplex method can be applied in petroleum engineering is the
problem of calculating the water fraction (water cut) of individual layers in a multilayer well.
Consider a well producing from 3 non-communicating layers. The well is periodically tested
with a production logging tool (PLT) on different regimes, which provides the liquid rate for
each layer and the total water rate at surface; the individual layer water cuts, however, are
unknown.
Applying the simplex method formulation to the described well, the above results can be used
to create a set of constraints:
If necessary, additional tolerance can be introduced using inequalities (e.g. <10 instead of =0).
If the tests are not spread far apart in time, it is possible to assume that the water cut of each
layer is the same across the tests. This allows the equations from all the tests to be summed
to create a total objective function:
Minimising this function within the above constraints will provide an estimate of individual layer
water-cut values.
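A sketch of this formulation using scipy's linear-programming routine. The PLT data below is hypothetical (three tests, three layers), and the layer water cuts wj are assumed constant across the tests:

```python
from scipy.optimize import linprog

# Hypothetical PLT results: Q[i][j] is the liquid rate of layer j in
# test i; W[i] is the total water rate at surface for test i.
Q = [[100.0, 50.0, 50.0],
     [80.0, 60.0, 40.0],
     [60.0, 30.0, 90.0]]
W = [55.0, 50.0, 57.0]

# Constraints: sum_j Q[i][j] * w[j] = W[i], with 0 <= w[j] <= 1.
# The objective (here simply the sum of the water cuts) is minimised.
res = linprog(c=[1.0, 1.0, 1.0], A_eq=Q, b_eq=W,
              bounds=[(0.0, 1.0)] * 3)
w = res.x    # estimated layer water cuts
```

With noisy field data the equality constraints would be relaxed to inequalities with a tolerance, as the text suggests.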
2.6.14.4.1 Properties
Internal name: Simplex
Input connections:
None
Output:
None
None
Properties:
Input variables
These properties are held in the ObjectiveFunction layer of the Simplex. All of the following
strings are accessed through the full string:
Simplex.ObjectiveFunction...
These properties are held in the ConstraintEqn[x] layer. The following parameters should be
accessed through the full string:
Simplex.ConstraintEqn[y]...
constraint 'y'.
.Function.Constant – Defines the constant value of the constraint 'y'.
.Relation – Defines the relationship between the constraint and its right-hand side. The
following inputs are possible: a negative number for '<', zero for '=', a positive number for '>'.
.RHS – Defines the right-hand side (Limit) of the constraint equation.
Output variables
The following parameters hold results of Simplex calculation and should be accessed through
the full string:
Simplex...
Functions
None
2.6.14.4.2 Functions
Category of Operation: Maths Library Functions - Simplex
Description:
Before the calculation is performed, the calculation inputs must first be set; once the calculation
has been completed, the results can be extracted. A simple workflow to carry out the
calculation is shown below:
The first element sets up the input parameters, such as the number of controls and constraints,
and the coefficients and constants for the objective function and constraints. Full details of the
required input parameters are available in the Properties section of the Simplex description.
Once this has been completed, the final element allows the results to be extracted. Full details
of the output results are provided in the Properties section of the Simplex description.
Inputs:
Outputs:
Return Value:
The Return Value will be either 0 (for success) or non-zero for an error.
In order to do so, a neural network needs to be trained against a training data set, which
consists of a set of inputs and known outputs. One of the advantages of neural networks is that
once the network is trained, the calculation cost is very small.
Each node of the hidden layer behaves according to a mathematical representation of a neuron. Each
node of the hidden layer is connected to all the nodes of the input layer, and receives a signal xi from
these nodes (this is the value of each input). The node performs a weighted average of these signals,
s_j = sum_i (w_ij * x_i) + t_j, where w_ij are weights and t_j offsets. The node then calculates an output
y_j = F(s_j), where F is typically a sigmoid function such as the one shown below. This is then
passed on to the output layer: the final output is then calculated as a weighted sum of the signals it
receives from the hidden layer.
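The forward pass described above can be sketched as follows (illustrative only, with arbitrary hypothetical weights; F is taken as the logistic sigmoid):

```python
import math

def sigmoid(s: float) -> float:
    """Logistic transfer function F(s) = 1 / (1 + exp(-s))."""
    return 1.0 / (1.0 + math.exp(-s))

def hidden_node_output(x, w, t):
    """Output of one hidden-layer node: F(sum_i w_i * x_i + t)."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + t
    return sigmoid(s)

def network_output(x, hidden_weights, hidden_offsets, output_weights):
    """Final output: a weighted sum of the signals from the hidden layer."""
    hidden = [hidden_node_output(x, w, t)
              for w, t in zip(hidden_weights, hidden_offsets)]
    return sum(wo * h for wo, h in zip(output_weights, hidden))
```

Training then amounts to adjusting the weights w and offsets t so that network_output reproduces the known outputs of the training set.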
Training the neural network involves performing a regression on the weights wij and on the offsets tj in
order to minimise the error between the calculated outputs and the known outputs from the training
set.
The general approach used to train the network is to have two data sets of inputs and outputs. One
data set (called the training set) is used to train the network and perform this regression. The other
data set (called the validation set) is used to check and validate the performance of the trained
network.
In order to overcome this, the training and validation data sets should first be normalised, such that
inputs and outputs are of the order of unity. Therefore the weights will also be of the order of unity, and
thus they can be given a suitable random initialisation prior to the network training.
Several data normalisation schemes can be used, and the results of the neural network may depend
on this. Choosing the normalisation scheme is therefore a trial-and-error process. Considering a set of
data (x1,...xn), possible mappings include:
Linear mapping to the [0,1] interval: x_i' = (x_i - x_min) / (x_max - x_min)
Where x_min and x_max are the minimum and maximum values of the data set.
When a trained network is used to calculate an output, this will return a scaled output. It will be
necessary to apply the inverse transformation to obtain the actual physical output.
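The linear mapping and its inverse transformation can be sketched as follows (an illustrative sketch, not the RESOLVE implementation):

```python
def normalise(data):
    """Linearly map a list of values onto the [0, 1] interval.
    Returns the scaled values plus the min/max needed to invert the mapping."""
    lo, hi = min(data), max(data)
    return [(x - lo) / (hi - lo) for x in data], lo, hi

def denormalise(scaled, lo, hi):
    """Inverse transformation: recover the actual physical values."""
    return [lo + y * (hi - lo) for y in scaled]
```

A trained network returns scaled outputs, so denormalise must be applied to recover physical values.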
Model setup
The Neural Network data object requires at least one DataSet or DataStore to be connected. This
should contain the training data set: the columns should correspond to the inputs and outputs, and
each row should correspond to a data point. Within the Neural Network data object, it will be possible
to define which columns correspond to the inputs and outputs.
Up to two data sets can be connected to the object: one will be used for training and one for validation.
The names of the columns in both data sets should be identical (only column names common to the
two data sets will be displayed in the object).
Application: Selection of the type of output and methodology the neural network will be used
for
Data: The measured input values and their outputs corresponding to the unknown function
Normalisation: Tools for normalising data to avoid issues with vastly different ranges
Train & Test: used to perform the training of the network and view the results of the training
and the validation sets
Additional note
There are two possible difficulties when using neural networks:
Over-fitting: as with fitting a polynomial curve to data points, if the number of matching
coefficients is too large, the model may have poor predictive performance outside of the
training set data points. The use of a validation set should help to identify if this is occurring.
Over-training: as the number of training cycles increases, the error on the training set and on
the validation set decreases. There may be a point however where the error on the validation
set starts to increase with more training cycles: while the network gains accuracy on the
training set, it loses its predictive capability outside of the training set points. This can be
identified by performing a sensitivity on the number of training cycles.
2.6.15.1.1 Application
Regression models use neural networks to approximate an unknown (but measurable) function. For
a given set of input values, the network will calculate a corresponding set of output values.
Classifier models use neural networks to predict which class(es) the set of input values belong to.
Typically, the output is membership of one of N classes (where N > 1), though multiple class
membership is supported. The output value for each class will be in the range 0.0 <= y <= 1.0.
The sigmoid (logistic) function is the traditional transfer function for neural networks. It is a
differentiable form of the hard threshold function.
Tanh is a bipolar sigmoid function which can take values from -1.0 to +1.0.
Neural Network Regression: this network type is used when outputs are continuous variables
Type
Classifier: this network type is used when outputs are discrete variables
Time series: time series are used if the output is dependent on the history of
the input data. The input data is assumed to represent a time series, with time
incrementing down the table. The underlying assumption is that the rows of the
table correspond to a constant time increment.
In such a network, input nodes are added (as many nodes as there are nodes in
the hidden layer). At any given time t, the outputs of the hidden layer nodes at t-1
are fed as input to the network. Using this recurrent method, the entire input
history is 'stored' and accounted for by the network.
2.6.15.1.2 Data
For regression applications, the data contains measured values of the inputs with their corresponding
outputs.
The data columns that form the desired data set can be selected and imported here. Local edits to
this data can be made if necessary.
The resulting data set can be split into two parts, one to be used for training and the other to be
used for validation.
If the data is not in a random order, better training performance can be achieved by randomising the
order of the data items. To achieve this, the order of the data in the data set can be shuffled. The
original order is stored so that the data can be "unshuffled" easily if required.
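The shuffle/unshuffle behaviour described above can be sketched by storing the permutation that was applied (illustrative only; the function names are hypothetical):

```python
import random

def shuffle_rows(rows, seed=None):
    """Shuffle data rows into a random order, returning the shuffled rows
    together with the permutation needed to restore the original order."""
    order = list(range(len(rows)))
    random.Random(seed).shuffle(order)
    return [rows[i] for i in order], order

def unshuffle_rows(shuffled, order):
    """Restore the original row order from the stored permutation."""
    restored = [None] * len(shuffled)
    for pos, original_index in enumerate(order):
        restored[original_index] = shuffled[pos]
    return restored
```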
For classification applications, the inputs are measured properties while the outputs correspond to
potential class membership. At least 2 output classes are required.
The data columns that form the desired data set can be selected and imported here. Local edits to
this data can be made if necessary. The resulting data set can be split into two parts, one to be used
for training and the other to be used for validation.
If the data is not in a random order, better training performance can be achieved by randomising the
order of the data items. To achieve this, the order of the data in the data set can be shuffled. The
original order is stored so that the data can be "unshuffled" easily if required.
2.6.15.1.3 Normalisation
When training it is possible that having outputs (and inputs) covering vastly different ranges can
cause problems.
For columns with very large dynamic range (for example GOR) it is sometimes necessary to apply a
"squashing" function prior to normalisation.
For this "squashing" purpose, either natural log or base-10 log functions can be applied to the data
prior to normalisation
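For a large-dynamic-range column such as GOR, the log "squash" followed by the linear mapping might look like this (an illustrative sketch, assuming strictly positive data):

```python
import math

def squash_then_normalise(data):
    """Apply a base-10 log 'squash' to a wide-range column, then map the
    squashed values linearly onto [0, 1]. All values must be > 0."""
    squashed = [math.log10(x) for x in data]
    lo, hi = min(squashed), max(squashed)
    return [(s - lo) / (hi - lo) for s in squashed]
```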
2.6.15.1.4 Training
The second order optimisation method is generally the most rapid method of solving small neural
networks. The second order optimisation method's training time increases as the square of the
network order, though it is linear in the size of the training data.
The gradient descent method is slower at solving the network but is suitable for larger networks or
where the training data set is large. The gradient descent method's training time increases linearly
with the network order and with the size of training data.
2.6.15.1.5 Test
A regression model can be reviewed by plotting comparisons of the calculated values and the
expected values from the training and validation data sets.
Any output value can be plotted against any input or output value. When an output value is plotted
against itself, the expected value is plotted on the x axis while the calculated value is plotted on the y
axis; this should be a straight line with a slope of 1.0 and an intercept of 0.0.
For any other pair the expected and calculated values will be plotted using different symbols. For any
plot there is an option to include the residual values (expected - calculated).
A classifier can be reviewed by generating a scatter plot of any two variables. Each point is colour
coded to indicate which class the network selected and if the classification is correct, incorrect or
inconclusive.
For the purposes of testing, the user can select a threshold value: if a network output exceeds (1.0 -
threshold) the point is in the class; if it is less than threshold then it is out of the class; between these
values it is inconclusive.
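The threshold test described above can be sketched as a small decision function (illustrative only):

```python
def classify(output: float, threshold: float) -> str:
    """Classify a network output y (0 <= y <= 1) against a test threshold:
    in the class if y > 1 - threshold, out of the class if y < threshold,
    inconclusive otherwise."""
    if output > 1.0 - threshold:
        return 'in'
    if output < threshold:
        return 'out'
    return 'inconclusive'
```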
On a separate tab the neural network can be tested by entering any set of input values in text boxes
and viewing the calculated values.
Note: If the data was normalised, then the inputs and outputs of this tab correspond to normalised
data.
2.6.15.2.1 Interface
The SciKit Scaler has no interface; instead, it is a calculation object which can be called from a visual
workflow. The scaler is used to prepare the data for the SciKit MLP Classifier and SciKit MLP
Regressor by, in effect, normalising the data to avoid over-weighting of certain inputs due to their
relative magnitude.
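The idea behind the scaler is a standardisation of each input column; a pure-Python sketch of the concept (not the SciKit implementation itself) is:

```python
import statistics

def scale_column(values):
    """Standardise one data column to zero mean and unit standard deviation,
    so that no input dominates the network purely by magnitude."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return [(v - mean) / std for v in values]
```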
2.6.15.2.2 Inputs/Outputs
Required Inputs
Output
Scaled data set which can be used within the neural network wrapper of the SciKit MLP Classifier and
SciKit MLP Regressor.
2.6.15.2.3 Properties
The object properties and functions that can be accessed via a visual workflow are explained below.
More information on interacting with Data Objects from within a VisualWorkflow can be found in the
Visual Workflow User Guide.
2.6.15.2.4 Functions
The SciKit Scaler object can be interacted with programmatically via a workflow. When being
interacted with via a workflow, a number of different functions can be called to carry out calculations
within the object.
These functions are detailed in the PETEX Machine Learning Utilities of the Data Object Calculations
Section of the Visual Workflow User Guide.
2.6.15.3 SciKit MLP Classifier and SciKit MLP Regressor
The SciKit MLP Classifier and SciKit MLP Regressor data objects are wrappers for the neural networks
available free in the SciKit library.
2.6.15.3.1 Interface
The interface for the SciKit MLP Classifier and SciKit MLP Regressor data objects is composed of
three tabs: General, Solver, Hidden Layers.
General Tab
The following input fields are available with brief descriptions. For further information on specific
settings please refer to the SciKit website which documents each extensively.
Solver Tab
The following input fields and check boxes are available. For further information on specific settings
please refer to the SciKit website which documents each extensively.
Solver The solver used for weight optimisation
Iter. with no changes Max number of epochs to not achieve tolerance improvement
Learning rate Learning rate weight update schedule
Momentum Momentum for gradient descent update
Learning Rate Init. Initial learning rate
Batch Size Size of mini batches for stochastic optimisers
Early Stopping Define whether to terminate training early when no improvement in validation
score
Shuffle Whether to shuffle samples in each iteration
Power T Inverse scaling learning rate exponent
Nesterovs Momentum Nesterov momentum toggle on/off
Batch Size Batch size manual specification
Validation Fraction Proportion of training data to use as validation set for early stopping
2.6.15.3.2 Inputs
Required Input
Note the scalers must be built before using the SciKit MLP Classifier and SciKit MLP Regressor
2.6.15.3.3 SciKit MLP Classifier and MLP Regressor Properties
The object properties and functions that can be accessed via a visual workflow are explained below.
More information on interacting with Data Objects from within a VisualWorkflow can be found in the
Visual Workflow User Guide.
2.6.15.3.4 Functions
The SciKit MLP Classifier object can be interacted with manually or programmatically via a workflow.
When being interacted with via a workflow a number of different functions can be called to carry out
calculations within the object.
These functions are detailed in the PETEX Machine Learning Utilities of the Data Object Calculations
Section of the Visual Workflow User Guide.
The MultiWell Allocation (MWA) Data Object is used to allocate the total production measured
in a field to individual wells using an integrated model. This is a powerful tool which proves
valuable in situations where we do not have direct measurements for all the wells in the field and
need to achieve a better understanding of how the field is performing simply based on total field
rates. Access to the production on a well-by-well basis is important as this allows a variety of
tasks to be achieved (optimisation, history matching etc).
The objective here is that given field measurements of well head pressures, bottom hole
pressures, choke dPs and separator rates, we need to determine the individual phase rates for
each well. This is needed when the WCT and GOR for individual wells cannot be measured
directly (for example, a collection of deep sea wells all flowing into a riser, where wells cannot
be tested individually).
Typically, a multi-well allocation problem has multiple solutions depending on the amount of data
available and the number of values to be calculated. The MWA object therefore allows the
calculation of the phase rates to be done using different physical techniques (e.g. VLP, IPR,
choke etc.) to provide a unique solution to the MWA problem. This involves combining the phase
rates from the different techniques in a single function to match the available field
data (such as FWHP, total rates, gauge pressures, choke sizes etc.).
The MWA object is designed to perform a multi-variable regression on individual phase rates
for each well to minimise this error function which includes a difference between total measured
and calculated rates.
The field data is entered in the Field data object and this is connected to the MWA object. The
MWA object then has options to perform the regression and analyse the results. This procedure
can be automated using a visual workflow, and the properties and functions available to achieve
this are explained in this section. Further information on the data object including aspects to
consider when performing the regression can be found in the worked example.
The regression options screen provides options for the MWA regression algorithm. Weightings
are provided on this screen, and a value greater than 1 for a particular rate ensures that the
algorithm gives a higher preference to match that rate. For most situations these options do not
need to be modified, as the MWA regression calculation achieves a solution relatively quickly
when consistent data is provided.
The Chi2 value is an indicator of the 'goodness of fit' of the regression algorithm. The smaller
this number, the better the fit which means that the MWA calculated phase rates will be reliable.
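The weighted misfit that the regression minimises takes the general form of a weighted sum of squared differences (an illustrative form; the exact Chi2 definition is internal to RESOLVE):

```python
def chi2(measured, calculated, weights):
    """Weighted sum of squared differences between measured and calculated
    total rates (e.g. oil, water and gas). A value of 0 is a perfect match;
    a weight > 1 gives that rate a higher preference in the match."""
    return sum(w * (m - c) ** 2
               for m, c, w in zip(measured, calculated, weights))
```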
Wells Section
The wells section allows the user to inspect the field data passed from the Field Data object,
provides options for the calculation and displays the calculated results. It is possible to provide
ranges (e.g. min/max GOR/WCT for wells) to limit the calculated results within these values.
Weightings can also be given to individual calculation techniques (e.g. calculating the rate from
the VLP/choke/IPR methods).
The MWA calculation can be performed by pressing 'Calculate'. The results allow comparison
between the measured and calculated values for the measured properties. Additionally, the Chi2
function shown on the 'Regression options' screen is an indication of the 'goodness of fit' of the
regression calculation. The lower this value, the better the fit, with a value = 0 indicating a
perfect match.
Input connections
Output
The results that can be accessed from the MWA tool are explained in the 'Properties' section
below.
Output connections
The data object can be connected to other applications to pass data to the applications
Properties
LowLevelUsageMode: The low level usage mode allows the data to be entered directly in the
MWA data object without the need of a field data object. This is more relevant for populating the
MWA tool using real time data. The variables below are used to set and retrieve data and
ResultData
Chi2: This variable returns the Chi2 value which is an indication of the 'goodness of fit' of the
regression calculation. The lower this value, the better the fit, with a value = 0 indicating a
perfect match.
Iterations: This variable returns the total iterations required for the regression calculation to
achieve the fit.
TotalCalcResults: This contains the total calculated results (total oil/water/gas rates) from the
field
TotalMeasuredResults: This contains the total measured results (total oil/water/gas rates)
from the field
WellResults: This array contains the calculated phase rates for each well in the MWA
calculation.
These properties correspond to the regression options for the MWA tool. As mentioned above,
in general these settings do not need to be changed for most cases.
UserData[]
UserDeltaA
UserInitLambda
UserMaxRetry
UserTolerance
UserTotalGasWeight: The weighting given to the total gas rate for the regression.
UserTotalOilWeight: The weighting given to the total oil rate for the regression.
UserTotalWaterWeight: The weighting given to the total water rate for the regression.
Functions
Load measured field data into the tool (returning error if required)
This function loads the field data specified in the Field Data Object into the tool. The name of
the field data object and the MWA object are required inputs. In addition, any error message
encountered in this process is returned as a string.
Perform regression with current field data and user options (returning error if required)
This function performs the same operation as the one above, and in addition returns any error
messages as a string.
The OpenServer data object is required to access public functions in the IPM programs to
automate data input and model calculations. Using OpenServer, a value of a physical property
can be set in the IPM programs, a value retrieved or a function (calculation) performed. Further
information on the use of OpenServer, including detailed information on the available
commands and variables, is given in the OpenServer user manual.
The user interface of the OpenServer data object allows an OpenServer command/variable to
be tested:
The application to be tested is chosen in the drop down list (Select connection). The variable or
command is entered in the box shown above and evaluated. In case of a DoGet or a DoSet
operation, the variable value is returned/entered in the 'Value' box. For a command, if the
evaluation is successful, then a message is displayed in the 'Value' box above.
Input connections
None
Output connections
None
Properties
A wide variety of variables can be retrieved/set using the OpenServer data object for a number
of applications. A description of the variables related to the IPM programs is given in the
OpenServer manual. However, it is also possible to set variables for external (third party)
applications such as Eclipse, UniSim etc. A description of the variables for these external
applications should be available from the documentation/vendors of these applications.
Functions
Interface overview:
The optimisation data objects make use of the Case Manager architecture and are composed of
three tabs: Physical Model, Optimisation Variables and Run & Results.
The Physical Model tab contains a workflow which describes how to calculate the objective
function for a given state of the control parameters
The Optimisation variables tab is used to define the optimisation control variables and the
objective function variable
The Run & Results tab is used to launch the calculations and analyse the results.
Particle Swarm
The Particle Swarm data object encapsulates a particle swarm stochastic optimiser. The
objective of the particle swarm algorithm is to maximise or minimise an objective function over a
defined search space.
The general approach followed is to move a set of particles within the search space, in search
of the optimal solution. Each location within the search space represents a particular set of
values of the optimisation variables of the problem.
The algorithm starts by randomly selecting an initial population, and evaluating the objective
function for each member. For each particle, a velocity is calculated based on
the particle's current velocity
the distance from the particle to the best location that particle has found so far
the distance from the particle to the overall best location so far (considering all particles)
The three components above are combined to calculate the particle's speed, which is then used
to update the particle's position. The new particle locations are evaluated, and the process is
repeated. If no improvement is made over an iteration, then a local search is made near the
best solution (poll search), by taking steps in all directions from the current best position. If a
better solution is found through polling, then the particles will start to swarm towards this. When
particles become too close together they are removed from the pool as they become
unnecessary. Further details on this can be found in the Run & Results section.
The particle swarm algorithm can be used for a variety of applications given its general nature. It
can be used to maximise a quantity, or minimise a quantity such as an error function: an
example of this is history matching a reservoir model.
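A common form of the velocity update combining the three components is sketched below (a generic particle-swarm formula for illustration, not RESOLVE's internal algorithm; the coefficient values are hypothetical defaults):

```python
import random

def update_velocity(v, x, particle_best, global_best,
                    inertia=0.7, c1=1.5, c2=1.5, rng=random.random):
    """New velocity per dimension:
       inertia * current velocity
     + c1 * r1 * (particle's own best location - current position)
     + c2 * r2 * (overall best location - current position),
    where r1, r2 are random numbers in [0, 1)."""
    return [inertia * vi
            + c1 * rng() * (pb - xi)
            + c2 * rng() * (gb - xi)
            for vi, xi, pb, gb in zip(v, x, particle_best, global_best)]
```

The new velocity is then used to move each particle to its next position in the search space.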
GIRO
This utilises the GIRO optimisation method (which can be set up as part of the interface in
RESOLVE) in data object format. The principles behind this method use integer controls/values
and are fully outlined in the dedicated section on GIRO optimisation.
NMSimplex
Utilises a Nelder-Mead non-linear algorithm to find the optimum point of a 'surface'. This is best
used for simple multi-variable problems. For an objective function which is unimodal and varies
smoothly, defined by n variables, the NMSimplex method approximates a local optimum.
Examples of the simplex concept utilised include a line segment on a line, a triangle on a plane,
a tetrahedron in three-dimensional space etc.
2.6.18.2 Physical Model
The Physical model tab defines the model and selects or builds the controlling workflow. Its role
is to set the inputs into the physical models, run the appropriate calculations and retrieve or
calculate the value of the objective function.
Objective function The 'Variable tag' button will prompt a window from which the objective
function variable can be defined in one of two ways:
Debug model with Pressing the button will display the underlying CaseManager
test values populated with the optimisation control variables and some default
values (average of the specified minimum and maximum). It will then be
possible to run the CaseManager in debug mode to verify whether the
workflow is working as expected.
NOTE:
In case of limited number of licenses it might be of use to limit the
The particle swarm stops when one of the following criteria is met:
A target value of the objective function is met (the objective function is greater than the
specified value if maximising, less than the specified value if minimising).
The algorithm exceeds a specified maximum number of evaluations of the objective function
The particle speeds fall below an internal criterion (this indicates that they are all close to the
best solution)
The polling search step size falls below a specified tolerance
The particle swarm algorithm can be configured by clicking on the 'Options' button.
Riser stability
Objective: To verify the stability of a riser in the surface network during the prediction and inject
gas if necessary at the base of the riser.
Situation: An offshore production system includes a riser which transports the produced fluid
from the sub-sea pipelines to the separator at the platform. Over time, it is expected that the
reservoir pressures in the system will decline and the WCT will increase. This is likely to cause
the riser to become unstable. The objective is to check if the riser is flowing at stable conditions
and inject gas if necessary at the base of the riser so that it becomes stable.
Solution: The results of a GAP model (pipe inlet/outlet conditions) can be extracted to
RESOLVE and passed to the PROSPER calculator data object. A gradient can be run and the
gas velocities retrieved into RESOLVE. Comparison of the gas velocity to standard measures of
stability (say, the Turner velocity) will indicate whether the riser is stable. If necessary, the model
can be controlled via a workflow by adding varying amounts of gas-lift gas to the base of the riser
such that the riser becomes stable.
Situation: A high rate deepwater offshore gas field has a considerable risk of hydrate formation
at the wellhead, where the temperature is expected to drop due to Joule-Thomson cooling when
the well is choked. The objective is to ensure that there is no hydrate formation at the wellhead
during the integrated model forecast.
Solution: The results of a GAP model (wellhead choke downstream conditions) can be
extracted to RESOLVE and passed to the PROSPER calculator data object. A gradient can be
run and the hydrate formation flag available in the results retrieved at the wellhead to check if
there is any hydrate formation.
Output: None
The following properties and functions can be accessed for this object using a visual workflow:
Properties:
Filename
This variable contains the file path to the PROSPER file of interest.
Functions:
None
The PROSPER calculator performs a gradient calculation using a specified PROSPER file with
the inputs entered in this data object. The results can then be retrieved via this object.
The user interface allows the input data required for a gradient calculation to be entered. A
summary of the results of the gradient calculation are also displayed on the screen.
The 'pipe' button indicated above allows access to detailed gradient results along the length of
the well/pipe.
Input connections:
Wellsource-File
Note:
The source can be connected directly to the ProsperCalculator (without the tag), however this
connection will disregard gas lift gas. Using the Flowing Conditions tag, gas lift gas is taken into
account and included in the PROSPER gradient calculation.
Output:
A variety of results are available from the data object: these are explained in the 'Properties'
section below.
Output connections:
The data object can be connected to other applications to pass data to the applications
Properties:
GradientIn:Inputs
All the inputs shown in the screenshot above are required to run the PROSPER Calculator. The
parameters required as inputs are self-explanatory.
GradientOut:Outputs
The following calculated outputs are available to be accessed via a visual workflow for this data
object. For more information on the technical details of these outputs, it is suggested to review
the PROSPER manual or standard industry texts.
Cmax: This is the maximum C value in the pipe that is relevant for erosion studies.
dPAcc: This is the total pressure drop due to acceleration in the pipe.
dPFric: This is the total pressure drop due to friction in the pipe.
dPGrav: This is the total pressure drop due to gravity in the pipe.
GradientProfile[i]: This array contains the total gradient profile (pressure drop per unit
length) for the well/pipe. The integer i refers to the location inside the pipe segment (row number
in the results screenshot above).
HoldupProfile[i]: This array contains the holdup profile for the well/pipe. The integer i refers
to the location inside the pipe segment (row number in the results screenshot above)
HydrateProfile[i]: This array contains the hydrate flag for the well/pipe. A value of 0
corresponds to the flag = No (no hydrates) whereas a value of 1 corresponds to the flag = Yes.
The integer i refers to the location inside the pipe segment (row number in the results
screenshot above)
MSDProfile[i]: This array contains the bottom measured depth information for the well/pipe.
The integer i refers to the location inside the pipe segment (row number in the results
screenshot above)
Pmax: This variable returns the maximum pressure present inside the pipe.
PressureProfile[i]: This array contains the pressure profile. The integer i refers to the location
inside the pipe segment (row number in the results screenshot above).
ProfileCount: This variable contains the total number of result rows shown in the screenshot
above.
RegimeProfile[i]: This array contains the flow regime information for different locations in the
pipe. The integer i refers to the location inside the pipe segment (row number in the results
screenshot above). The flow regime information is stored in the form of integers, where an
integer value represents a particular flow regime depending on the flow correlation.
TemperatureProfile[i]: This array contains the temperature profile. The integer i refers to the
location inside the pipe segment (row number in the results screenshot above).
TVDProfile[i]: This array contains the TVD profile of the well/pipe. The integer i refers to the
location inside the pipe segment (row number in the results screenshot above).
Vmax: This is the maximum mixture velocity of the fluid in the pipe
Functions:
This data object allows a 'PROSPER Online' pipeline to be defined. The object can then be
connected to the PROSPER Calculator, a gradient calculation performed and the results
retrieved in RESOLVE. The GAP manual should be consulted for further technical information on
the use of PROSPER online.
The user interface allows the pipe data to be entered directly by selecting 'Edit pipe data'.
Alternatively a PROSPER file can be associated by selecting the browser button shown above.
The interface to enter the pipe data is the same as that for a PROSPER online object in GAP.
Input connections
Wellsource-Online: If connected, a copy will be performed.
Output
None
Properties
The data in each of the required sections (Options, Equipment etc.) can be defined via a
workflow. The variables available to build the PROSPER online model via a workflow are self-
explanatory.
Functions
This module calculates the uninhibited rate of corrosion of the material. With a given inhibited
rate and design life, it then calculates the required inhibitor availability (from the corrosion
allowance), or the corrosion allowance (from the inhibitor availability). For further details on this
model, please refer to the PROSPER manual.
The Corrosion Calculator interface consists of a single screen, containing the inputs and the
outputs to the model.
This object is designed as a standalone calculator, and can be used from the interface or from a
workflow.
Input Connections
None
Output Connections
None
Properties
Corrosion-Calculator.CorrosionIn.[...]
This field contains the inputs to the corrosion calculator. It is possible to obtain the
correspondence between the input field and the property name by CTRL+Right Clicking on the
object interface.
Corrosion-Calculator.CorrosionIn.CALCTYPE
=0: Calculate Required Thickness
=1: Calculate Required Availability
Corrosion-Calculator.CorrosionIn.PHMODE
=0: Calculate pH
=1: Enter pH
Corrosion-Calculator.CorrosionOut.[...]
This field contains the outputs of the corrosion calculator.
NOTE:
If calculated, the pH, corrosion allowance or inhibitor availability can be retrieved from
Corrosion-Calculator.CorrosionIn.[...].
Functions
This module calculates the erosion rate and the material loss over a given design life, for a
variety of geometric configurations. For further details on this model, please refer to the
PROSPER manual.
The Erosion Calculator object consists of a single screen, which contains the inputs and the
outputs of the model. The inputs are spread over four tabs: Material, Sand, Fluid and Pipe. The
different geometric configurations available require different inputs.
This object is designed as a standalone calculator, and can be used from the interface or from a
workflow. As different geometric configurations require different inputs, it is important to ensure
that all the required inputs have been set prior to performing the calculation.
Input Connections
None
Output Connections
None
Properties
Erosion-Calculator.ErosionIn.[...]
This field contains the inputs to the erosion calculator. It is possible to obtain the
correspondence between the input field and the property name by CTRL+Right Clicking on the
object interface.
Erosion-Calculator.ErosionIn.TYPE
=0: Straight Pipe
Erosion-Calculator.ErosionIn.MAT
=0: Steel
=1: Titanium
=2: Epoxy
=3: Vinyl Ester
Erosion-Calculator.ErosionIn.DEF
=0: Ductile
=1: Brittle
Erosion-Calculator.ErosionOut.[...]
This field contains the output of the erosion calculator.
Functions
Based on slug characteristics (calculated from PROSPER, GAP or an external source), this
module calculates the holdup volume, peak surge volume, peak surge time, etc. The maximum
gas and liquid velocities for complete separation of gas and liquid are also calculated.
If the slug catcher dimensions are input, then the normal liquid level and the surge liquid level
can be calculated, as well as the actual fluid velocities. Liquid carry-over or gas carry-under
flags are reported, based on the maximum and actual fluid velocities. For further details on this
model, please refer to the PROSPER manual.
The Slug Catcher Calculator interface consists of a single screen. General inputs such as fluid
densities, K factor, retention time or slug catcher geometry are input on the left pane. The
slugging characteristics are input in the 'Flow characteristics' tab.
This object is designed as a standalone calculator, and can be used from the interface or from a
workflow.
Input Connections
None
Output Connections
None
Properties
SlugCatcher-Calculator.SlugCatcherIn.[...]
This field contains the input data to the slug catcher calculator. It is possible to obtain the
correspondence between the input field and the property name by CTRL+Right Clicking on the
object interface.
SlugCatcher-Calculator.SlugCatcherIn.MODE
=0: Mean Slug
=1: 1/1000 Slug
=2: Pigged Slug
=3: Steady State
SlugCatcher-Calculator.SlugCatcherIn.KMD
=0: Calculate K
=1: Enter K
SlugCatcher-Calculator.SlugCatcherIn.RTMD
=0: API Spec 12J
=1: Enter Time
=2: Normal Liquid Level
SlugCatcher-Calculator.SlugCatcherIn.DIM
=0: Dimensionless
=1: Dimensional
SlugCatcher-Calculator.SlugCatcherIn.GEO
=0: Horizontal
=1: Vertical
Functions
SlugCatcher Model
Performs the slug catcher calculation
2.6.20 PVTP Data Objects
Internal name: PVTPResolveObjects.dll
2.6.20.1 Path to surface
Overview
Surface volumetric rates (measured at standard conditions) depend on the path taken to
standard conditions for the measurement, whereas the mass rates are independent of the
process path. This raises important challenges in working with and comparing volumetric rates
in a number of situations where different paths to standard conditions are present.
For example, if the path to standard conditions on which the field measurements are based
(e.g. straight flash to standard conditions) is different to the path to standard conditions for the
model (e.g. through a separator train), then this will cause significant errors if uncorrected
measured rates are used for a number of modeling activities such as matching models to field
data, well rate estimation and comparison, well production allocation etc. Therefore, in order to
use the field data, the rates that are reported need to be corrected such that they are all based
on a common path to standard conditions.
The Path to Surface data object in RESOLVE was developed to allow this conversion to be
made. The composition for the calculation is present in an EOS-PVT data object and is
connected to the Path to Surface data object. The path to surface is then defined in the data
object. Once the model is run, the properties that allow these rate conversions to be made
(surface volume of oil and surface volume of gas) are reported.
Methodology
Consider the conversion of a measured rate which is based on a 'measured' path
to surface to a corrected rate which is based on a 'reference' path to surface. By
defining both paths to surface in separate Path to Surface data objects, the surface volumes of
oil, which are the volumes of oil obtained by taking 1 mole of the composition through the
respective paths to surface, are reported via the data objects.
Since the mass rate is constant for both process paths, the corrected rate can be easily
obtained through simple proportionality rules:
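Writing the surface oil volumes per mole for the measured and reference paths as V_meas and V_ref (notation assumed here, since the original symbols are not reproduced), and noting that the molar rate is the same for both process paths, the proportionality can be sketched as:

```latex
% n-dot = molar rate (path-independent); q = surface oil volumetric rate.
\[
\dot{n} \;=\; \frac{q_{\mathrm{meas}}}{V_{\mathrm{meas}}}
        \;=\; \frac{q_{\mathrm{ref}}}{V_{\mathrm{ref}}}
\quad\Longrightarrow\quad
q_{\mathrm{ref}} \;=\; q_{\mathrm{meas}}\,
\frac{V_{\mathrm{ref}}}{V_{\mathrm{meas}}}
\]
```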
Other useful parameters such as the GORs, FVFs etc. can be derived using the same
methodology.
Description
The path to surface is defined graphically between the 'Source' which represents the inlet
stream and the available output streams. The path to surface has five available output streams:
Gas
LNG
LPG
Condensate
Oil
The path to surface requires a connection to the gas output stream and at least one connection
to a liquid output stream. The available elements are shown below.
The stage pressure and temperature are input by double-clicking on the separator icon.
For chillers, an additional 'Liquid Output' option is available. This can be set to Oil, LNG, LPG
or Condensate:
If set to Oil: the liquid outlet can be connected to any element or any liquid export line.
If set to LNG, LPG or Condensate: the liquid outlet can only be connected to the
corresponding liquid export line. The chiller icon colour will change to that of the
corresponding liquid export line.
Double-clicking on the export block allows the user to set the export pressure and temperature for the
LNG, LPG and condensate lines, as well as the standard pressure and temperature. In the
results, the fluid properties will be reported at standard conditions and at the defined export
conditions.
The elements are connected using the icon. The icon validates the screen and ensures
that all the required data is entered. The arrows connecting elements are drawn automatically,
however it is possible to define any path for an arrow by holding down the Shift key when
drawing. Note that the arrow will default back to the automatic arrow if elements are moved.
The icon will create a new blank process, replacing the current one. The icon
enables the user to define a standard separator train:
The Path To Surface object should be fed with a composition, therefore it is required to
establish a connection with an EOS-PVT data object before running calculations. Once the
connection is made and the RESOLVE model is run, the overall results are reported in the
'Surface Results' tab. The calculation can also be triggered using the icon. Result tabs will
be created based on the connected export lines. For the LNG, LPG and condensate lines, the
fluid properties will be reported at standard conditions and at export conditions, as described
above.
Further detailed results (composition and properties) at any point in the path to surface
calculation can be accessed by double clicking on the required element (e.g. joints, separator
etc.) or on the export lines.
Input connections
Output connections
The Path to Surface data object can be connected to a variety of applications to pass variables
to those applications, or to several data objects which require a path to surface such as the
CCE or the CVD data object.
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual.
2.6.20.2 CCE
Description
The CCE data object performs a flash calculation to a specified pressure and temperature. The
resultant liquid and vapour phases at these conditions can then be taken through (separate)
paths to standard conditions. The results of these streams taken to standard conditions are then
reported.
The composition for performing this flash calculation is given in a separate EOS-PVT data
object which is linked to the CCE data object.
The test conditions (pressure and temperature) to which the fluid is flashed are entered in the
'Management' tab. The following three tabs, viz. 'Liquid', 'Vapour' and 'Total', allow a (separate)
path to surface to be defined for the resultant liquid, vapour and total stream compositions after
flashing to the test conditions. The results of the flash calculation are reported in the 'Results'
tab.
The path to surface is defined using the same interface and functionality as a PathToSurface
Data Object. Please refer to the PathToSurface Data Object section for further details.
Once a path to surface is defined for any of the 'Liquid', 'Vapour' or 'Total' streams in their
respective tabs, the path can be copied to the other streams using the copy function available in
the 'Management' tab above.
Alternatively, a PathToSurface Data Object can be connected to the CCE Data Object. The
user will be prompted with the following window, which is used to define the fluid to which the
path to surface applies. Multiple PathToSurface Data Objects can be connected to a CCE.
When a path to surface has been connected to the CCE, the corresponding path to surface
cannot be edited from within the CCE window, and should be edited from the path to surface
object itself. Note: the path to surface data is copied from the object on Calculate.
Once the module is solved or calculated, the CCE results are available from the 'Results' tab.
Detailed results of the path to surface calculations are available from the 'Liquid', 'Vapour' and
'Total' tabs.
Input connections
Output connections
None
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual.
2.6.20.3 CVD
Description
The CVD data object allows a Constant Volume Depletion (CVD) calculation to be performed
using a given Equation of State (EOS) model. An EOS-PVT data object which contains the
compositional information needs to be linked to the CVD data object. Further technical details
on the nature of the calculation can be found in the PVTP user manual or in standard industry
texts.
The CVD temperature and the pressure steps are then entered in the user interface of the data
object. To add more rows of data to enter additional pressures, click on the 'Add' button in the
window. For each pressure step it is possible to define a (different) path to surface.
The path to surface is defined using the same interface and functionality as a PathToSurface
Data Object. Please refer to the PathToSurface Data Object section for further details. The
'Copy this path to surface to all stages' button shown above keeps the same path to surface for
all the pressure steps.
Alternatively, a PathToSurface Data Object can be connected to the CVD object: this will
copy the path to surface data from the object to all stages.
Once the RESOLVE model is solved or calculated, the CVD results for each pressure step are
available in the 'Results' tab:
The results of the path to surface calculation for every stage are available from the 'Stages' tab.
Input connections
Output connections
None
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual.
2.6.20.4 Isenthalpic Flash
Description
The Isenthalpic Flash data object performs constant-enthalpy calculations from a starting
pressure and temperature using a given Equation of State (EOS) model.
The enthalpy is determined at the starting pressure and temperature. The fluid is then flashed
to each of the specified stage pressures at constant enthalpy.
If necessary, the stage pressures can be populated using the 'Select range of pressures' button
at the bottom of the window. Selecting the button will display a window where the user can
input the start pressure, end pressure and number of values to populate.
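The linear spacing performed by the 'Select range of pressures' dialog can be sketched as follows (function and parameter names are illustrative, not RESOLVE identifiers):

```python
# Sketch of populating n linearly spaced pressure values from start to end
# inclusive, as the 'Select range of pressures' dialog does.

def pressure_range(start: float, end: float, n: int) -> list[float]:
    """Return n linearly spaced values from start to end (inclusive)."""
    if n < 2:
        return [start]
    step = (end - start) / (n - 1)
    return [start + i * step for i in range(n)]
```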
Input connections
Output connections
None
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual.
2.6.20.5 Saturation Pressure
Description
The Saturation Pressure data object calculates the saturation pressure at the specified
temperature using a given Equation of State (EOS) model.
The object also determines the density and phase state (vapour or liquid) of the specified fluid.
Input connections
Output connections
None
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual.
Input connections
Output connections
None.
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual. The Water
Saturation object and an EOS-PVT data object are required as input. The input pressure and
temperature should be set before performing the calculation.
2.6.20.7 Water Composition
Overview
The Water Composition data object is used to hold a water composition (inhibitors and salts)
which will be used in hydrate and salt calculations when the object is connected to a Hydrate
data object or a Salt data object.
The water salinity is calculated from the entered salt composition. It is also possible to enter the
water salinity by clicking on 'Edit salinity', however this will overwrite the salt composition and
assume that the salinity comes from NaCl only.
Input connections
Output connections
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual.
2.6.20.8 Hydrate
Overview
The Hydrate data object performs two types of hydrate calculations:
Hydrate formation pressure: for each specified temperature value, the data object calculates
the hydrate formation pressure.
Hydrate appearance test: for each specified pressure and temperature, the data object
calculates whether or not the hydrate model predicts hydrate formation.
The composition used in this calculation is given in a separate EOS-PVT data object which is
linked to the Hydrate data object. A Water Composition data object may also be linked to the
data object:
If a Water Composition data object is linked to the Hydrate object, the hydrate calculation will
use this water composition
If there is no water composition linked to the hydrate object, then the calculation will use the
water composition data from the EOS-PVT data object.
'Settings' tab
Calculation options are available from the settings tab:
Hydrate calculation model: Three hydrate models are available: Munck et al., Hydrafract
Modified Cubic and Hydrafract Modified CPA. For further details on these models, please refer
to the PVTp manual.
Target hydrate type: The calculation can be performed for Type I or Type II hydrates. If left to
Auto, the object will determine the most likely type of hydrate to form, and this will be reported
in the 'Formation pressure table' tab and the 'Hydrate appearance test' tab.
Use inhibitors and/or salts in calculation?: This option enables/disables the use of the water
composition in the hydrate calculation.
Perform calculations when module is solved: If this option is enabled, then the hydrate
calculations are performed when the module is solved, as part of the main RESOLVE solve. In
any case, the calculation can always be triggered via a workflow.
In the 'Formation pressure table' tab, the hydrate formation pressure is reported for each
temperature entered, along with the hydrate type for which the calculation was performed (the
most likely hydrate type if 'Target hydrate type' was set to Auto in the 'Settings' tab).
In the 'Hydrate appearance test' tab, the result of the calculation (Hydrate formed: Yes/No) is
reported for each pressure-temperature point entered, again along with the hydrate type for
which the calculation was performed.
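As a sketch of how these per-point results could be tabulated in a script, the loop below pairs each entered pressure-temperature point with a Yes/No flag. The `hydrate_forms` predicate is a stand-in assumption; in RESOLVE the answer comes from the Hydrate data object's model:

```python
# Illustrative driver for an appearance-test style loop. The predicate
# stands in for the hydrate model and is NOT a real correlation.

def appearance_test(points, hydrate_forms):
    """Return (pressure, temperature, 'Yes'/'No') for each entered point."""
    return [(p, t, "Yes" if hydrate_forms(p, t) else "No") for p, t in points]

# Toy predicate: hydrates 'form' only at high pressure and low temperature.
toy_model = lambda p, t: p > 2000 and t < 60
```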
Input Connections
Output Connections
None.
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual.
2.6.20.9 Salt
Overview
The Salt data object performs two types of salt calculations:
Salt Map: this calculates the amount of salt which drops out as a solid across a range
of pressures and temperatures
Salt Solubility: this calculates the solubility limit of a particular salt across a range of
pressures and temperatures
The composition used in this calculation is given in a separate EOS-PVT data object which is
linked to the Salt data object. A Water Composition data object may also be linked to the data
object:
If a Water Composition data object is linked to the Salt object, the salt calculation will use this
water composition
If there is no water composition linked to the salt object, then the calculation will use the water
composition data from the EOS-PVT data object.
'Settings' tab
This tab enables the user to view the connected objects and the salt/inhibitors that will be used in the
calculations.
Adjust Values for Water Lost to Vapour: In a hydrocarbon + H2O system, the gas phase will
saturate with pure water. This water is removed from the liquid water, increasing the salt
concentration. The amount of water required to saturate the evolved gas also changes with
pressure and temperature. In real systems, this effect can be dominant when considering the
salt precipitation behaviour of the fluid.
The output of the calculation is the molar percentage of salt which has precipitated and the
percentage that remains in solution.
The 'Auto-populate pressures and temperatures' button is used to define a range of linearly
spaced pressure and temperature values, from an entered minimum, maximum and number of
values.
The resulting salt map can be plotted using the 'Plot' button. If pressure is selected on the left,
at least one temperature must be selected on the right; if temperature is selected on the left, at
least one pressure must be selected on the right.
The output of this calculation corresponds to the salt solubility with respect to the total amount of
water in the composition, expressed as a concentration and as a mass or mole percentage.
The results can be plotted using the 'Plot' button. The plotting window and functionality are the
same as for the salt map.
Input Connections
Output Connections
None
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual.
2.6.20.10 Wax
Overview
The Wax data object performs two types of calculation:
Wax appearance temperature: calculates the wax appearance temperature for a set of input
pressures.
Wax amount test: for a range of pressures and temperatures, performs a multiphase flash to
calculate the amount of wax in the solid phase.
The composition used in this calculation is given in a separate EOS-PVT data object which is
linked to the Wax data object.
'Settings' tab
Calculation options are available from the settings tab.
Wax Calculation Model: Five wax models are available: Won Original, Won with Sol Params,
Chung Original, Chung Modified and Pedersen Wax. For further details on these models,
please refer to the PVTp manual.
Split out pseudo components?: Wax deposition is driven mainly by long-chain paraffins. This
means that in order to properly characterize waxes it is required to define in detail the
hydrocarbons with a high carbon number. This option splits the pseudo components into their
SCN components to more accurately model waxes.
Perform calculations when module is solved: If this option is enabled, then the wax
calculations are performed when the module is solved, as part of the main RESOLVE solve. In
any case, the calculation can always be triggered via a workflow.
The 'Auto-populate pressures' button enables the user to define a range of linearly spaced
pressure values, from an entered minimum, maximum and number of values.
The 'Auto-populate pressures and temperatures' button is used to define a range of linearly
spaced pressure and temperature values, from an entered minimum, maximum and number of
values.
Results can be plotted using the 'Plot' button. If pressure is selected on the left, at least one
temperature must be selected on the right; if temperature is selected on the left, at least one
pressure must be selected on the right.
Input Connections
Output Connections
None.
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual.
2.6.20.11 LNGPlant
The LNG Plant Data Object acts as a proxy for an LNG plant model and enables the user to
calculate LNG, LPG and condensate mass rates and compositions. The plant is represented by
a table which specifies the mole percentage of each component to be sent to each stream, as
well as the relationship between the N2+CO2 content and the plant efficiency.
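The bookkeeping implied by the split table can be sketched as follows; the component names, feed rates and split percentages below are purely illustrative, not values from any real plant:

```python
# Sketch of the component-split bookkeeping described above. For each
# component, the table gives the mole percentage routed to each stream;
# the feed is given in moles per component. All data are illustrative.

def split_streams(feed_moles, split_pct):
    """split_pct[comp][stream] = % of comp sent to that stream."""
    streams = {}
    for comp, moles in feed_moles.items():
        for stream, pct in split_pct[comp].items():
            streams.setdefault(stream, {})[comp] = moles * pct / 100.0
    return streams

feed = {"C1": 80.0, "C3": 20.0}
table = {
    "C1": {"LNG": 95.0, "Fuel": 5.0},
    "C3": {"LPG": 90.0, "LNG": 10.0},
}
```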
'Settings' tab
This tab enables the user to specify the input rate and the mole percentage of each component
to be sent to each stream (LNG, LPG, Condensate and Fuel).
Mass Rate/Oil Rate/Gas Rate: The input rates can be specified as mass rate, oil rate or gas
rate. If oil or gas rate is selected, this should be with reference to a Flash Straight to Stock
Tank.
Separate Ethane: If this option is enabled, ethane will be treated as a separate stream.
Import .prp: Imports a composition to be used in the calculations. Note that the composition
can also be provided by a connected EOS-PVT data object.
Perform calculations when module is solved: If this option is enabled, then the LNG Plant
calculations are performed when the module is solved, as part of the main RESOLVE solve. In
any case, the calculation can always be triggered via a workflow.
Input Connections
Output Connections
None.
The object can also be used through a Visual Workflow and results extracted from it. The
associated properties and functions are outlined in the Visual Workflow manual. The data
structure is identical to that of the EOS-PVT data object.
2.6.20.12 Phase Envelope
The Phase Envelope data object is designed to receive a composition from an EOS-PVT data
object and calculate the phase envelope for the fluid. Results such as cricondenbar,
cricondentherm and critical point can be extracted in addition to the full data set for each quality
line calculated.
If used in the RESOLVE interface, the object requires an EOS-PVT object as input, as shown
below.
The object can also be used independently through a Visual Workflow and results extracted
from it. The associated properties and functions are outlined in the Visual Workflow manual.
2.6.20.12.1 Main
The 'Main' tab displays the inputs for the different vapour fraction/quality lines and the calculated
phase envelope. The quality lines can be entered manually or populated automatically between
0.5 and 1. The plot will also display lines for the reservoir temperature (where entered), the
cricondentherm and the cricondenbar.
2.6.20.12.2 Settings
The Settings tab allows the user to alter the configuration of how the phase envelope is
calculated and displayed.
2.6.20.12.3 Results
Key results are displayed here based on the output phase envelope.
The RDO simulation object allows Tight Reservoir objects in RESOLVE to be integrated with a
GAP model. This is required to perform full field forecasts using the integrated GAP-RESOLVE
model.
Each well inflow is represented by a separate Tight Reservoir object. The RDO-simulation then
brings up a list of all Tight Reservoir objects that are present in the RESOLVE model.
The field start date can be set manually in the 'Field start date' section. Note that by default the
field start date is automatically determined based on the production history entered in the tight
reservoir object(s).
Once the tight reservoir(s) of interest are selected, inflow icon(s) appear in the RESOLVE
screen. Each inflow can then be linked to its well using the icon.
Essentially, the RDO-simulation passes the transient IPR from the tight reservoir object in
RESOLVE to a well in GAP. The fixed production rate controls from the GAP model (based on
the model solution) are then passed back to the Tight Reservoir object(s) in RESOLVE. This
allows determination of a new transient IPR for further forecasting. More information on the
theory behind tight reservoir modelling is given in the Tight reservoir data object section of the
user guide and in the worked example.
Input connections
Tight reservoir data object
Output connections
GAP
The RDO system provides a palette to add several data objects and perform calculations using
these data objects in a separate interface. The RDO system layout can then be saved as a file
(.rds) by clicking the icon in the toolbar. A previously saved .rds file can be loaded into the
RDO system using the icon.
The data objects can be added using the drop-down list available in the user interface:
Once added, the objects are linked using the icon in the toolbar, and the RDO system is
'run' (i.e. the data object dependencies are updated) using the icon.
It is possible to register a user-defined data object (as a .dll file) in the RDO system: this will
appear in the drop-down list above. A pre-registered data object can be removed using the
'Unregister' button.
For example, a composition blending system can be set up in the RDO system as shown below
(the steps to set up data objects that perform a simple compositional blend are explained in the
worked example).
Input connections
Data can be passed to the RDO system from other data objects/applications
Output connections
Data can be passed from the RDO system to other data objects/applications
Properties
Objects["object_name"]
This property allows a specified data object within the RDO system to be returned as a user-
defined variable within the workflow. Here, object_name is the name of the data object within
the RDO system that needs to be retrieved; this is a string and should be entered within double
quotes " " as shown above.
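Since object_name must be supplied as a double-quoted string, a small helper can guard the quoting when composing the accessor in a script. The helper and the system name `RDOSystem1` below are hypothetical, for illustration only:

```python
# Illustrative helper for forming the Objects["..."] accessor string used
# to retrieve an RDO-system data object. Names here are assumptions.

def rdo_object_accessor(system: str, object_name: str) -> str:
    """Return e.g. RDOSystem1.Objects["Blend1"] with the required quoting."""
    return f'{system}.Objects["{object_name}"]'
```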
Functions
The functions for the RDO system object are under the category of 'Sub-system management'.
The available functions are:
Steam Assisted Gravity Drainage (SAGD) is a recovery technique used for heavy oil and
bitumen reserves. Following an initial pre-heating phase of a typical well pair (sufficiently long
to establish good thermal contact), high-pressure steam is injected into the upper injector well
through one or more tubing strings or the annulus to heat the oil and reduce its viscosity. This
leads to the upward migration of steam and the downward migration of oil, with the heated oil
draining into the lower producer wellbore under gravity.
The SAGD process is energy intensive; good control systems are essential for efficient steam
delivery into the reservoir to ensure continuous production of oil is achieved without steam
breakthrough at the producer. In order to implement these control systems, it is important that
the engineer has a good understanding of the physical behaviour that governs the SAGD
process.
In practice, SAGD control systems are often not predictive, due to an incomplete understanding
of the physical behaviour that governs the process.
From a modelling standpoint, some of the main challenges faced are the importance of
appropriate gridding and the linking of the complex well to the reservoir grid. A strong implicit
coupling is required between the reservoir and the well such that the numerical calculations yield
consistent results.
The SAGD Data Object has been designed to speed up the generation of complex well
geometries such as dual strings. Analytical models are available that estimate required periods
of pre-heating/nominal oil rate which provide a good starting point for further detailed reservoir
modelling.
A full REVEAL numerical simulation model, including a detailed description of the wells, can be
created directly from the data object; a reservoir grid is automatically created to match the well
flow path. This gridding and the subsequent calculations ensure tight coupling between the
complex well and the reservoir, and also capture the near-wellbore effects.
The REVEAL simulation model auto-generated by the SAGD Data Object includes a pre-heating
and production schedule with a control script to ensure automatic well control. The script
ensures dynamic control by accounting for production sub-cool temperature limits and inter-well
pressure gradients. Effects such as the ratio of viscous to buoyancy forces and pressure
maintenance are included. The standard way of controlling the sub-cool is by operating a
proportional-integral-derivative (PID) feedback control system. In a PID feedback control
system, the inter-well sub-cool is kept to a target value by adjusting injection rates or production
rates depending on temperature measurements taken along the well pair.
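A minimal sketch of the discrete PID update described here is shown below. The gains, time step and error signal are illustrative assumptions, not RESOLVE defaults; in the SAGD context the error would be the difference between the measured and target inter-well sub-cool, and the output an adjustment to injection or production rates:

```python
# Minimal discrete PID update of the kind described for sub-cool control.
# All gains and units are illustrative only.

def pid_step(error, state, kp=1.0, ki=0.1, kd=0.0, dt=1.0):
    """One PID update; state = (integral, previous_error).

    Returns (controller_output, new_state).
    """
    integral, prev = state
    integral += error * dt                     # accumulate integral term
    derivative = (error - prev) / dt           # finite-difference derivative
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)
```

Each control step, the output would be applied as a rate adjustment, driving the sub-cool error toward zero over successive updates.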
The REVEAL model enables investigation of the near-wellbore characteristics and provides 3D
visualization of steam chamber development, cross flow, heel-toe effects etc. The SAGD Data
Object provides a means to rapidly generate both a 2D model and a fully detailed 3D model.
The 2D model is intended to be used for surface network analysis: this means that the rates/
pressures calculated from the REVEAL model at the wellhead can be passed to a surface
network for further analysis. The 3D model is intended to be used for further detailed modelling
in the reservoir which can include aspects like well design, well control, steam chamber growth,
sensitivities on injection/production cycles etc.
The simulation automatically generates detailed plots of production versus injection rates, sub-
cool monitoring and inter-well reservoir pressure; these can be used to perform further analysis
regarding the well design and control.
More information on this data object is available in the RESOLVE examples, which include a
tutorial on building a SAGD system.
In general, all visible attributes are mandatory except the Well Header information. Note that
the system validation will treat empty/missing data as an error condition when performing
actions such as the Export and Run commands.
The Well Builder Data Object is used to describe the individual well geometry of a SAGD
system. The SAGD Data object is then used to capture the PVT, reservoir and history data,
and to perform analysis including the automatic generation of a REVEAL model with schedule
and control scripts.
The workflow for the SAGD data object is top-down, working through the different sections in
turn. It is possible to jump to a section by clicking on the image (e.g. click on 'Reservoir' to go
to the reservoir section), or 'Next' can be clicked to move through the sections sequentially.
Within each section the data is entered in different tabs, and these can be selected directly.
Pressing 'Finish' will save the data and exit the wizard.
Go to PVT description.
The SAGD object currently supports dead-oil thermal PVT data entry and, in order to improve
simulation time, supports table lookup. The PVT data is entered in two tabs: 'PVT' requires the
PVT properties of the fluid, and the 'Viscosity vs. Temperature' tab supports tabular entry of the
viscosity variation with temperature.
This section is used to define the reservoir geometry, conditions and petro-physical attributes
including permeability, porosity and anisotropy. Data entry is split into three tabs labelled
'General', 'Petrophysics' and 'RelPerms'.
A typical SAGD well pair comprises a horizontal injector and producer drilled with a small
vertical spacing (typically 15-30 feet) between the producer (below) and the injector (above).
SAGD well producers and injectors usually include complex well descriptions (e.g. multi-tubing/
dual string, coiled tubing, ICVs etc.). The Producer and Injector well descriptions may change
between the pre-heating and production phases.
The SAGD Data Object is designed to be used in conjunction with the Well Builder Data Object.
The individual Well Builder Data Objects provide the well information for the producer and
injector wells during both phases of the SAGD process (pre-heating and production).
General tab
The primary purpose of the SAGD wizard Wells section is to establish the well mappings
(explained below). The SAGD data object queries the mapped wells and estimates well pair
separation, completed lengths etc.
Injection pre-heater
Production pre-heater
Injection
Producer
The actual well descriptions are defined separately in four corresponding Well Builder Data
Objects linked to the SAGD data object. These well object names are then mapped to the four
internal wells. Each of the four internal SAGD wells must be mapped to a corresponding Well
Builder Object.
The 'General' tab in the Wells section includes a Well list which needs to be mapped to the Well
Builder data objects connected to this SAGD data object. To map the wells, click on a single
well in the Well List on the LHS and the corresponding well on the RHS. Repeat this for the other wells.
Once these well mappings are complete and the corresponding Well Builder Data Objects are
valid, the 'Producer' and 'Injector' tabs in the Wells section are automatically populated with a
summary of the data from the Well builder objects.
The relationship between the SAGD object and the connected Well Builder Objects is
demonstrated in the following table. A minimum of 2 distinct Well Builder Object descriptions
are required.
Case 1
Here the producer well is the same for both pre-heating and production phases. Similarly, the
injector well is identical for both phases.
Case 2
Here, four distinct well object descriptions are present. This reflects a situation where the
wells have one configuration for the pre-heating phase (e.g. not perforated) and a different
one for the production phase (e.g. perforated, with a pump assembly introduced for the
producer).
SAGD object well names (internal) | Well Builder Object names (Case 1) | Well Builder Object names (Case 2)
Note – The Well Builder Object name corresponds to the Well name found on the General
Tab of the corresponding Well Object. In the screenshot below, the SAGD Data object uses
the Well Name “Producer” as a unique identifier for the Well Builder
object. This is the same as the Well Object label in RESOLVE.
Pump Description
A pump description, if present on the producer well, can be entered in the 'Pump Description'
tab of the SAGD data Object. It is also possible to include the pump description (location, pump
curve) within the corresponding Well Builder Object description; however, this is optional. Note
that the data entered within the SAGD Wizard pump description screen is used when generating
the REVEAL model by adding a pump to the REVEAL schedule (generic pump or ESP pump).
Therefore, the pump details entered in the SAGD data object take priority over any pump
entered in the corresponding Well Builder objects when exporting to REVEAL; pump
information entered in the Well Builder data object is not used when exporting to REVEAL.
3D View
This tab provides a visualisation of the wells. The 3D view is provided as a guide to confirm
that the individual well trajectories intersect the reservoir target for the entered values of
reservoir reference depth and pay thickness.
Note: The 3D view displayed does not represent the actual REVEAL reservoir model. The
SAGD export commands will result in appropriate 2D (or 3D) gridding (including near-wellbore
block size refinement) together with re-alignment of trajectories (azimuth correction) and
trajectory-to-cell-boundary alignment.
2.6.23.4 Analysis
Analysis
The Analysis screen is used to generate the REVEAL simulation model, provide an automated
schedule and ensure the PID control script pre-configured plots are loaded and used by any
subsequent simulation run.
The automated schedule is broken down into a pre-heating and a production phase. The pre-
heating phase is configured to be sufficiently long to establish good thermal contact
between the well pair.
The pre-heating phase duration is based on an analytical method for chamber rise
computation and minimum nominal rate computation taken from Butler, Roger M.,
Horizontal Wells for the Recovery of Oil, Gas and Bitumen (Chapter 11), Petroleum Society
Monograph Number 2, University of Calgary, 1994. The injection and production rates are then
adjusted, e.g. to ensure the production rate is not too high at the start of the production phase.
The 'Calculate and Export (auto-schedule)...' button allows the parameters on the screen to be
calculated and a REVEAL model exported. The REVEAL simulation model auto-generated by
the SAGD Data object includes:
Schedule based on the estimated chamber rise, nominal rates and the Enthalpy of steam at
the reference conditions.
Pre-configured gridding refinement based on the 2D or 3D model selection
Pre-configured SAGD specific plots including Sub-cool and inter-well annulus pressure
PID controller based control script:
The producer is controlled to maintain optimum Sub-cool between minimum and maximum rate
constraints. The injector rate is controlled to maintain reservoir pressure and balance inter-well
thermal connectivity.
Note that it is also possible to manually enter the data for the minimum pre-heating time,
minimum rise time and the nominal oil rate by selecting the 'Enable Data Entry' box shown
above. The entered values can then be exported to REVEAL using the 'Calculate and Export
(auto-schedule) ..' button.
Validation
In general, all visible attributes are mandatory. Note that system validation will treat empty/
missing data as an error condition when performing actions including the Export and Run commands.
The Export commands will also invoke validation of all linked and mapped Well (Builder)
Objects.
Tip – A recommended approach to creating a SAGD system is to visit each Well Builder
object in turn and (first build and then) invoke the validation option before returning to the
SAGD Object (setup well mapping) and validate the overall system through the SAGD export
commands.
2.6.23.5 History
Well History
The Well History screen is where historical data may be entered and compared with the
simulated numerical model.
An existing Well history may be entered (or pasted from clipboard). The Export History
command can then be used to generate a REVEAL simulation model where a schedule is
automatically generated from the entered historical data.
It is very difficult to match historical data for SAGD systems due to a number of uncertainties
in both the field data and the model (e.g. steam chamber stability, shale layer modelling
etc.). For this reason, historical data entry is provided for comparison only and is not included
in the REVEAL simulation model generated by the Analysis screen export commands.
The simulation results can be viewed with the plot button at the bottom of the wizard. The
simulation results can be saved with the SAGD object as part of the Resolve (.rsl) model.
The well history can be entered manually or pasted from another application e.g. MS Excel
through standard clipboard operations available through the buttons above the grid.
The injection heating values and/or tubing split in the auto-generated schedule will reflect the
single versus multi-tubing characteristics of the corresponding well descriptions.
Available Plots
Clipboard support
Copy and paste ALL data associated with a SAGD object instance using the commands
available in the pop-up menu (shown on clicking the right mouse button over the RESOLVE
SAGD Data object).
To paste, add a new SAGD data object, right-click on the newly added object and select 'Paste
object from clipboard'.
Note that the SAGD Data object targeted by the paste operation will not retain any existing
links, e.g. to Well Builder data objects.
Clipboard Operations
Clipboard Shortcuts
Copy: Ctrl-C
Paste: Ctrl-V
Cut: Ctrl-X
Select/Deselect: Ctrl-A (repeated action toggles the selection)
Tip - Unit selection: Care is required to pre-select grid units before paste operations.
If the original data units do not match the grid units after a paste operation, it is possible to
change the selected unit without converting the underlying data in the grid (hold the Shift key).
2.6.24 Scripting
2.6.24.1 Python
The Python DataObject enables the user to call Python scripts and execute these either standalone
or dynamically through a forecast, both in IPM and the DOF. The Python scripting tool brings further
integration to the tools by providing access to external libraries such as the SciKit library.
It is assumed that the user is proficient in Python scripting when utilising the object, and it is important
to note that quality checking and review of client scripts is outside the remit of technical support.
2.6.24.1.1 Interface
The interface for the Python data object is set out to define the script which is to be called and
executed, as well as to specify the arguments to be passed to the script functions.
File Path: The file directory in which the Python script is held.
Edit Script: Modify the Python script. Within the Edit Script window there are two tabs: one to define informative properties of the script, such as the author; the other to modify the script itself.
Function Name: The function to be called from within the Python script. Please note that only one function can be called at a time when using the Python DataObject.
Argument Type: Define whether the argument type is Fixed or External.
Fixed Argument: A fixed value for a variable defined within the script.
External Argument: An external variable or set of variables to be called from outside the Python DataObject. This must be a data source such as a DataStore, FlexDataStore or DataSet. If a data table is passed to the Python data object as the DataFrame type, then it should be passed as an External argument.
Python Type: Defines the data source type, i.e. the format in which the data is expected to be received.
Configure (Arguments): Enables the mapping of the external argument to the Python script. Where the argument is defined as Type List, Set or ListList, the interface below is used to define which columns are relevant to the Python script when calling the corresponding argument.
Configure: Selection of the Python version to be used and the file path to the Anaconda Python distribution directory.
Solve on Execute (RESOLVE Only): Defines whether the Python script is to be executed every time RESOLVE performs a solve, or each time a solve is called during a forecast.
Test: Test the script manually prior to exposing the DataObject to other functions, such as the SciKit Scaler and MLP DataObjects.
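As an illustration of the kind of script the object could call, the sketch below exposes a single function taking one Fixed scalar argument and one External list argument. The function and argument names are hypothetical and not prescribed by RESOLVE.

```python
# Hypothetical script for the Python DataObject: one function is exposed,
# taking a fixed scalar argument and an external list argument (e.g. mapped
# from a DataStore column). Names are illustrative only.

def scale_rates(multiplier, rates):
    """Return each rate scaled by a fixed multiplier.

    multiplier : float - would be a Fixed argument in the DataObject
    rates      : list  - would be an External argument (Python Type: List)
    """
    return [multiplier * r for r in rates]

# Standalone check, as the 'Test' button would exercise the function:
if __name__ == "__main__":
    print(scale_rates(1.1, [100.0, 250.0, 400.0]))
```

Only one such function is called per DataObject, so a script needing several operations would route them through a single entry point.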
2.6.24.1.2 Inputs/Outputs
Required Inputs
Python script
Data source as Python script input (if required).
Output
2.6.24.1.4 Functions
The Python object can be interacted with programmatically via a workflow, from which a number
of different functions can be called to carry out calculations within the object.
These functions are detailed in the scripting section of the Data Object Calculations Section of the
Visual Workflow User Guide.
The SensitivityTool is built on the CaseManager and provides a simple interface to run a
model sensitivity for a set of parameters without the necessity of building the workflow from the
ground up. Similarly to the CaseManager, the SensitivityTool includes a model and a controlling
workflow. However, the user has the option of using one of the existing workflow templates and
an interface which has been designed to set and retrieve OpenServer variables easily.
Debug model workflow: Pressing the button will display the underlying CaseManager with test
values populated from the sensitivity variables and some default values. It will then be possible
to run the CaseManager in debug mode to verify whether the workflow is working as expected.
Set reference case: The reference case from which the variables are perturbed can be defined.
The calculations can now be run either on a cluster (by selecting 'On cluster' in the drop-down
menu) or on the local machine (by selecting 'Using model connections' in the drop-down menu).
If the clustering option is used, enough licences of RESOLVE and the underlying applications
must be available.
Once calculations are complete, the tornado plot will be displayed on the right-hand side of the
window.
Line Plot: This allows the outputs to be plotted as a function of the inputs, with respect to the
selected reference case.
Double-clicking on any row of the results table will display the independent analysis results and
tornado plot. The combination of parameters in the selected row will be used as the model state
to build the plot.
2D slice: The plot shows colour-coded values of the selected output parameter with respect to
2 selected sensitivity variables.
Output filter: Shows a scatter plot of one output variable vs. another. This may be used to
detect any dependencies in the results. It is possible to zoom into the plot by drawing a square.
Once zoomed, the table on the right automatically adjusts to display only the visible points.
Relative ranks: Display a summary tornado plot for the selected output variable showing
how sensitive it is to each selected parameter.
Values for the plot are calculated in the following way:
1. Output variables are arranged in ascending order and ranked from -1
to 1
2. Average ranks are then calculated for each value
3. Cumulative ranks are then scaled by dividing them by the absolute
maximum (out of all ranks)
4. The minimum and maximum scaled values for each sensitivity variable are
then displayed on the plot, providing a range.
Line Plots: Display a sensitivity plot (output vs. input) for the selected output (y-axis) and
the selected input (x-axis). Several curves can be plotted for each of the non-axis input
variables, and these can be selected from the left panel.
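The four ranking steps above can be sketched in code. This is one plausible reading of the published steps, for illustration only; the actual Petroleum Experts implementation is not documented here, and the variable-to-case mapping used below is hypothetical.

```python
# Sketch of the relative-ranks calculation (one interpretation of the
# steps: rank outputs from -1 to 1, average ties, scale by the absolute
# maximum, then report min/max per sensitivity variable).

def rank_minus1_to_1(values):
    """Rank values ascending on a uniform scale from -1 to 1,
    averaging the rank of tied values."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    raw = {}
    for pos, i in enumerate(order):
        raw[i] = (-1.0 + 2.0 * pos / (n - 1)) if n > 1 else 0.0
    ranks = [0.0] * n
    for v in set(values):
        idx = [i for i in range(n) if values[i] == v]
        avg = sum(raw[i] for i in idx) / len(idx)   # average rank for ties
        for i in idx:
            ranks[i] = avg
    return ranks

def relative_ranks(output_values, case_inputs):
    """For each sensitivity variable, return the (min, max) scaled rank of
    the output over the cases where that variable was perturbed.
    case_inputs maps variable name -> list of case indices (hypothetical)."""
    ranks = rank_minus1_to_1(output_values)
    scale = max(abs(r) for r in ranks) or 1.0
    scaled = [r / scale for r in ranks]
    return {var: (min(scaled[i] for i in idx), max(scaled[i] for i in idx))
            for var, idx in case_inputs.items()}

outputs = [10.0, 42.0, 7.0, 25.0, 31.0]          # one output per case
cases = {"perm": [0, 1], "porosity": [2, 3, 4]}  # variable -> case indices
print(relative_ranks(outputs, cases))
```

Each variable then plots as a bar from its minimum to its maximum scaled rank, giving the summary tornado.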
2.6.26 Sibyl
The Sibyl data object allows the user to perform sensitivities and probabilistic analysis on
integrated models. The object interface is built based on CaseManager, which runs the model
with a controlling workflow. The workflow sets input parameters in the model, runs it and extracts
the results.
Input variables
To define Sibyl variables select 'Add', define a name, and then a variable in the selected model
can either be chosen from a pre-populated list or defined manually using an OpenServer string:
Once a Sibyl variable is defined, it is required to map the input values to a distribution defined
by the user. The options available in this interface are described in the section describing the
standalone Distribution Data Object.
The button 'Debug model workflow with test values' displays the underlying CaseManager
object and allows running it with test values. This may be useful to debug the model and the
workflow if this has been edited.
NOTE:
To populate the underlying CaseManager with the input and results variables defined in the
Sibyl Model tab, it is required to click on 'Debug model'.
Results variables
These define the results to be reported after each sensitivity case for results visualisation,
and are chosen in the same manner as the input variables above.
2.6.26.3 Run & Results
The 'Run & Results' tab provides an interface for the execution of calculations and the reporting
of results. Both individual case results and a summary of the Sibyl analysis are displayed. It is
possible to debug a given case by double-clicking on the case name. This will open the 'Debug
model workflow with test values' view of the 'Sibyl Model' tab, with the input values
corresponding to the selected case.
NOTE:
If the number of licences is limited, it may be useful to limit the
maximum number of jobs, as each job will consume one RESOLVE licence
plus the licence(s) required to run the model. If a licence is not
available, the job will fail.
When the run is finished Sibyl will display the results of the sensitivity cases in several plots:
Distribution plots
Shows the resulting values vs. frequency for both input and output variables. The input variables
should resemble the shape of the input distributions (the more samples, the closer this will
adhere to the defined probability distribution).
Correlation plots
Plots showing the trend of output variables against the input variables
2.6.27 Tight-Reservoir Data Object
Internal name: PxTightOil
Introduction:
The Tight Reservoir object is a feature within RESOLVE which can be applied to tight
reservoirs such as tight gas, shale gas, tight oil etc., where standard transient analytical inflow
models do not properly capture the inflow responses of the system.
Note that MBAL includes a tight gas module which can be used to simulate and history
match tight gas reservoir responses. However, the underlying models assume a vertical well in a
circular reservoir or a fractured vertical well in a circular reservoir. In many cases the
geometry of the drainage area is completely different, as is the well configuration (e.g.
horizontal wells). Trying to solve these using analytical models becomes fairly complex in terms
of the methods of solution. The ultimate approach is to make use of a numerical model, within
which the reservoir can be discretised to better simulate the pressure
distribution and inflow responses of tight systems given any geometry or well configuration.
In practice, the various reservoir geometries and well configurations simulated in a numerical
reservoir model do not create a dimensionless response identical to the idealised transient
analytical solutions. The approach taken, therefore, is to create dimensionless type curves (Pd
vs. Td) from these simple numerical models, which capture the transience of the system, the well
and reservoir geometry and fracture effects. These responses can then be made available for
further use, e.g. to describe the inflow performance of these wells and even for material balance.
The further advantage of this approach is that the tight reservoir module becomes fully
adaptable and upgradeable to model any tight system such as tight oils, shale gas etc.
Further note that the dimensionless PdTd curves can also be directly calculated from REVEAL
for use in MBAL or GAP. Within RESOLVE however, the PdTd module provides a simple
interface where well structure and reservoir geometry can be defined and the PdTd response
created. A least-squares matching algorithm is then available to tune the dimensionless PdTd
curves to the historical data on an individual well basis. RESOLVE affords the ability to link
these tight models directly to GAP to simulate thousands of well inflows in parallel and to
apply workflows or logic within the model.
Description:
This object is designed to create a numerical model for single-well systems where the transient
reservoir response of low-permeability reservoirs cannot be adequately modelled using
standard transient analytical solutions to the diffusivity equation or material balance.
In particular, this model can be used to numerically create a bespoke dimensionless type curve
(dimensionless pseudo-time versus dimensionless pseudo-pressure or PDTD) capturing the
geometry of a fractured well system within a simple reservoir geometry. The PDTD response
captures the transient geometrical effects of the well, fracture and reservoir using dimensionless
variables.
The definitions in field units of the dimensionless pseudo-time (tD) and dimensionless pseudo-
pressure (pD) are given below.
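The manual's equation images are not reproduced in this extract. For orientation only, the standard field-unit well-test definitions of this form for a gas system are shown below; these are literature forms and the exact constants and reference conditions used by RESOLVE may differ.

```latex
% Standard literature forms (assumed; not necessarily RESOLVE's exact
% constants): k in md, h and r_w in ft, pseudo-time t_a in days,
% q in Mscf/d, T in degR, m(p) the gas pseudo-pressure.
t_D = \frac{0.00633\,k\,t_a}{\phi\,(\mu c_t)_{\mathrm{ref}}\,r_w^2}
\qquad
p_D = \frac{k\,h\,\bigl[m(p_i)-m(p_{wf})\bigr]}{1422\,q\,T}
```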
These are solutions to the diffusivity equation for the flowing bottom hole pressure (P) and
depend on the geometry and boundary conditions of the system.
[equations not reproduced in this extract]
A fully developed depletion (steady-state) flow response has the form:
[equation not reproduced in this extract]
where the constant of proportionality depends on the reservoir shape and size.
The time to reach this steady-state condition (at constant production rate) may be years and is
characterised by the diffusivity and drainage region size. The diffusivity (D, ft²/d) and
characteristic area (A, ft²) may be estimated using the formulae below:
[equations not reproduced in this extract]
In practice the fractures, well and reservoir boundaries create a dimensionless response that
has features reminiscent of, but different to the idealised analytical solutions.
Effectively, the numerical model is being used to create a dimensionless analytical analogue,
where it is impractical to create and solve analytical equations directly.
Once a PDTD (transient reservoir response) is calculated and tuned, it may be used in a variety
of ways; for example to calculate transient IPRs, to estimate average reservoir pressure (for P/Z
gas-in-place estimates) and for other forecasting/optimisation tasks.
Most of the screens for this object are used to define the reservoir, well, fracture location and
properties.
Once the geometry and physical properties are entered, a history should be entered. This
history may be used:
to run the model using either rate, Pwf or THP control, thus comparing the numerical model
simulated and historical data.
to calculate a PDTD using a mean historical constant rate to simulate the Pwf and hence the
pseudo-time, pseudo-pressure relationship. Note that the pseudo-time is calculated with the
PVT properties (1/(μ·ct)) evaluated at the current Pwf; this is to ensure consistency over the
widest possible uses of the generated PDTD.
Once a PDTD is generated, it may be used to re-calculate the Pwf using the historical rate data
and the principle of superposition, i.e. perform a history match.
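Rate superposition on a dimensionless response can be sketched as follows. This is illustrative only: the pD function, the unit-conversion factor and the diffusivity value below are placeholders, not RESOLVE's internals.

```python
# Sketch of rate superposition: the pressure drop at time t is the sum of
# each rate change's contribution, evaluated on the same pD(tD) curve.
# The pD function and the conversion factor 'c' are placeholders.
import math

def pd_infinite_radial(td):
    """Late-time infinite-acting radial-flow approximation."""
    return 0.5 * (math.log(td) + 0.80907)

def pwf_by_superposition(p_init, times, rates, c, diffusivity):
    """Flowing pressure at each history time via rate superposition.
    times: step start times (days); rates: rate held over each step."""
    pwf = []
    for t in times[1:]:
        dp = 0.0
        prev_q = 0.0
        for t0, q in zip(times, rates):
            if t0 >= t:
                break
            # Each rate CHANGE contributes from the time it occurred.
            dp += (q - prev_q) * c * pd_infinite_radial(diffusivity * (t - t0))
            prev_q = q
        pwf.append(p_init - dp)
    return pwf

times = [0.0, 30.0, 60.0, 90.0]      # days
rates = [1000.0, 1500.0, 800.0]      # rate over each interval
print(pwf_by_superposition(3000.0, times, rates, c=0.05, diffusivity=2.0))
```

The history match in the object replaces the analytical pD here with the numerically generated PDTD lookup.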
The resulting history match may be used to tune the PDTD response, using a Levenberg-
Marquardt least squares algorithm. The tuning modifies the reference permeability and porosity
present in the definition of pseudo-time and pseudo-pressure. Note that the tuning procedure is
only effective if the model and history data have reasonable agreement before the match is
performed.
It may be necessary to manually change the physical properties of the model (principally
permeability and fracture dimensions) to obtain a reasonable initial match. Note that tuning the
reference porosity is equivalent to tuning rw² in the pseudo-time equation, where rw is the
effective flowing radius and includes the effect of fractures.
Note that system validation will treat empty/missing data as an error condition when performing
actions including Run History or Create PDTD.
Go to Fluids (PVT).
2.6.27.1 Fluids (PVT)
Fluids (PVT)
The Tight Reservoir Data object currently supports black oil (Gas, Oil and Retrograde
Condensate) fluid types.
Tight Reservoir PVT data entry requires a BO-PVT data object to be populated. A link needs to
be made between a BO-PVT data object and the Tight Reservoir Object.
PVT (black oil) attributes may be shared by creating a link between a single BO-PVT object
instance and one or more Tight Reservoir object instance(s).
Refer to the BO-PVT Data object help section for more information.
The PVT screen above reports the black oil data entered in the BO-PVT data object. A
prediction WCT/WGR may be added when linking the object to GAP, such that the IPRs passed
to GAP include this WCT/WGR for the forecast.
2.6.27.2 Reservoir
Reservoir
This section is used to define the reservoir geometry, conditions and petro-physical attributes
including permeability, porosity and anisotropy.
The on-screen graphic provides to-scale plan and elevation views of the reservoir dimensions.
The relative position of the completion (see the Well Data section below) is also indicated (where
data has been entered).
Pan/Zoom
Click on the graphic and use the mouse wheel (where present) to zoom in/out.
Hold the Shift key down and press/drag the left mouse button to pan the view in any direction.
Refresh - The graphic updates automatically as attributes are changed on the various wizard
pages. Occasionally it may be necessary to force a refresh to reflect updated attribute settings.
Validation
This section defines the well completion geometry and location within the reservoir.
The wellbore inclination (inclination from vertical) obtained from the directional survey.
Note - A vertical fully penetrating well is represented by an Inclination = 0 and Z offset = 0, with
completion length = Net Pay thickness.
Fractures
The FCD (dimensionless fracture conductivity) model depends on the fracture width, the
fracture half-length and (Kh) the (x-direction) reservoir permeability.
Fractures will be automatically generated (and evenly spaced) along the x-direction using the
controls found on the fracture tab of the Well screen.
The 'has fractures' option is included to toggle the inclusion of fractures within the model
simulation.
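For reference, the standard literature definition of dimensionless fracture conductivity combines exactly these inputs; the form below is the common one and is assumed here rather than taken from the object itself.

```python
# Standard literature definition of dimensionless fracture conductivity
# (assumed form; variable names are illustrative):
#   FCD = (kf * w) / (k * xf)
# where kf is the fracture permeability, w the fracture width, k the
# (x-direction) reservoir permeability and xf the fracture half-length.

def fcd(kf_md, width_ft, k_md, half_length_ft):
    """Dimensionless fracture conductivity (consistent length units)."""
    return (kf_md * width_ft) / (k_md * half_length_ft)

# e.g. a 50,000 md proppant pack, 0.02 ft wide, in 0.01 md rock,
# with a 300 ft fracture half-length:
print(fcd(50000.0, 0.02, 0.01, 300.0))   # FCD of a few hundred
```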
Validation
Upper limit - The vertical extent of the well completion must lie within the reservoir boundary
(related attributes: completion length, dip angle, Z-offset, Net Pay thickness)
Upper limit - The horizontal extent of the well completion must lie within the reservoir boundaries
(related attributes: completion length, dip angle, X-offset and the reservoir Length)
2.6.27.4 Well History
Well History
The production history for the well in the tight reservoir is entered in this screen:
An existing well history may be entered or pasted from clipboard. Rates are automatically
computed from cumulative values (i.e. both cumulative and rate data entry is supported).
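The cumulative-to-rate conversion mentioned above amounts to an interval average, sketched below for illustration (this is not RESOLVE's code, just the arithmetic the history grid performs).

```python
# Sketch of deriving rates from cumulative production:
# the average rate over each interval is dQ / dt.

def rates_from_cumulative(times_days, cum_volumes):
    """Average rate over each interval: rate_i = (Q_i+1 - Q_i) / (t_i+1 - t_i)."""
    return [(q1 - q0) / (t1 - t0)
            for (t0, q0), (t1, q1)
            in zip(zip(times_days, cum_volumes),
                   zip(times_days[1:], cum_volumes[1:]))]

print(rates_from_cumulative([0.0, 30.0, 61.0], [0.0, 45000.0, 91500.0]))
# -> [1500.0, 1500.0]
```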
The gas production data selected for the simulation is highlighted in the grid and additional
historical data can be used for plotting comparisons with the results computed by the simulation.
This means that the gas rate is provided as an input to the PdTd simulation and the Flowing
Bottom Hole Pressure (FBHP) is calculated. The calculated FBHP can then be compared to the
historical FBHP to assess the quality of the match in the Analysis section.
The FBHP needs to be entered in the Well History screen. If only well head pressure (WHP)
measurements are available, then the WHP data can be converted to FBHP using the BHP
from WHP calculation in PROSPER.
Simulation results can be viewed with the plot button at the bottom of the wizard. The
simulation results can be saved with the Tight Reservoir object as part of the RESOLVE (.rsl)
model.
The well history can be entered manually or pasted from another application e.g. MS Excel
through standard clipboard operations available through the buttons above the grid.
MBAL history data may be imported through standard clipboard procedures. The clipboard Paste
operation requires column data to match the order visible on the Well History screen (including
blank data columns). It may be more convenient to import production history data directly from a
disk (.MBI) file.
Use the MBAL import button to select an MBAL "Type Gas Curve" or Material Balance
type model (.MBI) file and import a production well history.
If multiple Well information is present in the MBAL (.MBI) file a well selection screen is provided
with a preview of the corresponding well geometry.
Note - To import the production well history AND matching geometry use the command
button. This operation will overwrite any existing geometry data for the selected TightReservoir
model.
Clipboard Operations
Clipboard Shortcuts
Copy: Ctrl-C
Paste: Ctrl-V
Cut: Ctrl-X
Investigation of transient behaviour for Tight situations typically involves very small initial time
steps. Care is required when pasting history data where time steps differ by seconds.
Using e.g. MS Excel it is possible to format the Date/Time data to a precision including
seconds.
Note- The supported date time format depends on machine region/locale (the screenshot
shown above assumes English-UK region/locale).
Proceed to paste the selected data into the Tight Reservoir history grid.
2.6.27.5 Analysis
Analysis
The analysis screen is used to create a PDTD from the data entered in the tight reservoir
object. Once this is done, options are provided to match the PDTD curve such that the FBHP
calculated from the model agrees with the data entered in the History section.
Plot History Match: Perform a history match and re-calculate the Pwf using the historical rate data and the principle of superposition.
Auto Match: Perform a least-squares regression to fit the PDTD to the historical data.
Match parameters: Match attributes are provided to manually change the physical properties of the model (principally permeability and porosity) to obtain a reasonable initial match.
Reference Gas/Oil Rate: The PDTD calculation uses a mean historical constant (main phase) rate to simulate the Pwf and hence the pseudo-time, pseudo-pressure relationship. The main phase rate attribute can be left at the default mean historical constant rate, or may be modified. A low rate that has stable flow (e.g. no fracture cross-flow) generally generates the best PDTD response over the largest range of time (TD).
Shut-in Time for P/Z: The time period in elapsed days that a zero rate is applied (shut-in) to estimate the reservoir pressure and hence P/Z for OGIP estimation.
Number of PDTD points: During long predictions, or even a long shut-in for average reservoir pressure estimates, the PDTD curve may be extrapolated. Additionally, at very late times during the simulation calculation of the PDTD, small pressures and hence large PVT variations may have been present. For these reasons it is often best to truncate (and extrapolate) the PDTD after a certain point, noting that a linear extrapolation is equivalent to pseudo-steady-state depletion. The PDTD curve is truncated when it is created, but the number of points used, and hence the truncation, can be controlled with this integer parameter.
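The P/Z estimate referred to above follows the standard gas material balance, in which p/Z declines linearly with cumulative production. The sketch below shows that relationship for illustration; it is the textbook form, not the object's implementation.

```python
# Standard gas material balance behind a P/Z OGIP estimate (illustrative):
#   p/Z = (pi/Zi) * (1 - Gp/G)   =>   G = Gp / (1 - (p/Z)/(pi/Zi))

def ogip_from_pz(pi_over_zi, p_over_z, gp):
    """Original gas in place from one depleted (p/Z, Gp) point."""
    return gp / (1.0 - p_over_z / pi_over_zi)

# e.g. initial p/Z of 4000 psia, current p/Z of 3000 psia after
# producing 25 Bscf:
print(ogip_from_pz(4000.0, 3000.0, 25.0))   # -> 100.0 Bscf
```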
Edit PDTD
PDTD method: Compute Time, Pwf from tD and pD *: For a given set of tD, pD points (pasted from the clipboard), compute the corresponding Pwf and Time points.
PDTD method: Infinite Radial *: Generates a PDTD curve assuming an infinite radial geometry from the analytical solution.
PDTD method: Infinite Linear *: Generates a PDTD curve assuming an infinite linear geometry from the analytical solution.
Apply PDTD method: Generates a response curve based on the option selected in the drop-down list.
Plot PDTD: Plot the dimensionless pseudo-time and pseudo-pressure curves and their derivatives. This can also be used to assess the truncation at large tD.
Plot PVT: Plot the tables of computed pressure-dependent PVT variables used in the PDTD calculations.
* Note: If the PDTD method is changed, data pasted into the PDTD grid is applied only when the Apply PDTD Method button is selected.
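For reference, the two analytical geometries can be sketched with their textbook constant-rate solutions; these classical forms are an assumption here, since the manual does not spell out the exact expressions RESOLVE evaluates:

```python
import math

def pd_infinite_radial(td):
    """Infinite-acting radial flow, late-time approximation of the
    exponential-integral solution: pD = 0.5*(ln tD + 0.80907)."""
    return 0.5 * (math.log(td) + 0.80907)

def pd_infinite_linear(td):
    """Infinite-acting linear flow: pD = 2*sqrt(tD/pi), the familiar
    square-root-of-time response."""
    return 2.0 * math.sqrt(td / math.pi)

def log_derivative(pd_func, td, eps=1e-6):
    """Numerical tD*dpD/dtD, the derivative shown on the Plot PDTD screen."""
    return td * (pd_func(td * (1 + eps)) - pd_func(td)) / (td * eps)

radial_deriv = log_derivative(pd_infinite_radial, 1e4)   # flat derivative, 0.5
linear_deriv = log_derivative(pd_infinite_linear, 1e4)   # equals pD/2 (half slope)
```

On the derivative plot, radial flow shows a flat derivative of 0.5 while linear flow shows a half-slope; this is the diagnostic behaviour the Plot PDTD derivative curves help to identify.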
From IPM 9 build #154 onwards, the connate water saturation is added to the PVT calculation such that it is consistent with the tight reservoir calculations in GAP and MBAL. Previously, the total compressibility definition used for the PDTD was the compressibility of rock and fluid, whereas from IPM 9 build #154 onwards the total compressibility definition is rock + connate water + fluid; this is consistent with the definition in GAP and MBAL.
The inclusion of this parameter impacts the PDTD generated by REVEAL ("create PDTD" command). This parameter also affects an existing PDTD if the option "Enable PDTD reference conditions" is selected under the "Advanced" button and the PVT is regenerated ("Regenerate parameters" is clicked in the "Edit PDTD" tab).
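The effect of the new total compressibility definition can be illustrated numerically; the compressibility values and saturation below are hypothetical round numbers (all in 1/psi):

```python
def total_compressibility(c_rock, c_water, c_fluid, swc):
    """IPM 9 build #154 onwards: ct = rock + connate water + fluid, i.e.
    ct = c_rock + Swc*c_water + (1 - Swc)*c_fluid."""
    return c_rock + swc * c_water + (1.0 - swc) * c_fluid

# Hypothetical inputs: Swc = 0.25, a gas-like fluid compressibility
ct_new = total_compressibility(4e-6, 3e-6, 120e-6, 0.25)
ct_old = 4e-6 + 120e-6   # previous definition: rock + fluid only
```

Because the fluid term is now scaled by (1 - Swc), the resulting ct is generally lower for a compressible (gas-like) fluid, which is one reason a regenerated PDTD can differ from one created before build #154.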
Advanced Screen
Additional options for the PdTd object can be accessed from the Advanced section. Some of
the settings are global (i.e. they will apply to all Tight reservoir objects on the PC) whereas
others are local (i.e. they will apply to the given model only).
Enable Reveal interface while simulating (Global setting): This option enables the REVEAL interface while the PDTD is being created, which allows the reservoir and well results to be monitored during the PDTD creation. Note that care should be taken not to open the REVEAL script wizard and modify the REVEAL model while RESOLVE is controlling it, as this can cause communication problems between the programs.
Enable Edit PDTD reference conditions (Global setting): This option allows the reference conditions in the "Edit PDTD" tab, which are used for the PDTD calculation, to be changed.
Enable History Filter (Global setting): This option filters the production history data points and has been designed to deal with high-frequency data (i.e. 1000's of well history data points). The tight reservoir DataObject will read all the entered pressure data for superposition; without this option, wells containing 1000's of data points can be expected to have long model initialisation times. The option filters the production history data by applying early-time, mid-time and late-time weighting to the entered data.
Simulation refinement factor (Local setting): Increasing the refinement factor increases the resolution of the grid used for creating the PDTD. The default value works for most cases; however, it may be increased (which will require additional computational time).
Use history when running Prediction (Forecast) (Local setting): The forecast can be run from the end of history (option checked) or from the start of production (option unchecked). Superposition requires the entire history to be simulated at every time-step, and therefore checking this option can save time in the prediction.
Pseudo Time generation PVT (Local setting): This option sets the pressure to be used for calculating the PVT for pseudo-time. By default, the BHP from the last time-step is used; however, the initial pressure or the reservoir pressure (estimated from a shut-in at every time-step) may also be used.
Correct for multiple phases (Local setting): This option includes a multi-phase correction when the reservoir pressure drops below the bubble point/dew point. The traditional single-phase PdTd formulation can be used to accurately match the production history above the saturation pressure. However, in some cases where the gas/oil production is high (below the bubble point/dew point), the traditional approach cannot obtain a good match. The multi-phase correction uses the entered relative permeabilities to correct the dimensionless pressure and PVT for matching the field data.
Input:
Tight Reservoir tag: this can be displayed for a Tight well in GAP which uses the 'Use Resolve Tight Reservoir - Inflow' option in GAP.
This connection copies across the data from the *.rdo file associated with the GAP well to the object in RESOLVE.
Output:
Tight Reservoir tag : this can be displayed for a Tight well in GAP which uses the 'Use Resolve
Tight Reservoir - Inflow' option in GAP.
This connection copies across the data from the Tight Reservoir object in RESOLVE to the
*.rdo file associated to the GAP well.
This object can pass IPR data to GAP via an RDO simulation object (See example 6.3).
The object properties and functions that can be accessed via a visual workflow are explained
below:
Properties:
Reservoir properties:
ReferenceTemperature
ReferenceDepth
Porosity
Permeability
NetPay
ReservoirLengthToWidthRatio
ReservoirArea
PermKhToKvFraction
RockCompressibility
Well properties:
LateralLength
FlowingRadius
WellID
Inclination
HalfLength
FracHeight
FracFCD
FracCount
Analysis properties:
Functions:
createPDTD(PxTightOil)
Creates a PDTD transient representation of the reservoir and well system using the current
mean history rate.
Returns a PxMathLib.DataSet object for visualisation.
getPDTD(PxTightOil)
Returns the current Analysis.PDTD object.
calcHistoryMatch(PxTightOil)
Performs a history match using the current matched PDTD and history.
Returns a PxMathLib.DataSet object for visualisation.
autoMatchPDTD(PxTightOil, numVar)
Performs an auto match of the PDTD and history.
The match parameters are controlled using numVar = 1 (permeability), numVar = 2 (permeability & porosity) or numVar = 3 (permeability, porosity and PVT).
runHistory(PxTightOil)
Runs the History according to the selected Simulation control setting.
Returns the well history and results (if available) as a PxMathLib.DataSet object.
RegenerateParameters(PxTightOil)
Regenerates the PVT and PDTD parameters (e.g. call this routine after applying a permeability decline as a function of pressure through the cleat compressibility term).
GetResults(PxTightOil)
Returns the well history and results (if available) as a PxMathLib.DataSet (preset Schedule/History) object.
GetHistory(PxTightOil)
Returns the well history as a PxMathLib.DataSet (preset Schedule/History) object
SetHistory(PxTightOil, DataSet)
Set the Well History to match a PxMathLib.DataSet (preset Schedule/History) object
ReInitialze(PxTightOil, pathname)
Reset all properties of the object
Import(PxTightOil, pathname)
Import PxTightOil data from a Tight Reservoir (.rtr) file
Export(PxTightOil, pathname)
Export PxTightOil data to a Tight Reservoir (.rtr) file
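As an illustration of what the autoMatchPDTD least-squares regression does, the sketch below fits one variable (the numVar = 1 case) against a toy forward model; the model form, coefficient values and bracketing search are assumptions for illustration only, not the RESOLVE implementation:

```python
import math

def pwf_model(perm, times, coeff=0.5, p_init=5000.0, rate=400.0):
    """Toy forward model: Pwf from a constant rate and a radial pD response
    whose amplitude scales inversely with permeability (hypothetical)."""
    return [p_init - rate * coeff / perm * 0.5 * (math.log(t) + 0.80907)
            for t in times]

def auto_match(times, observed, k_lo=0.01, k_hi=100.0, iters=60):
    """One-variable least-squares match by golden-section-style bracketing:
    find the permeability minimising the sum of squared Pwf residuals."""
    def sse(k):
        return sum((m - o) ** 2 for m, o in zip(pwf_model(k, times), observed))
    for _ in range(iters):
        k1 = k_lo + 0.382 * (k_hi - k_lo)
        k2 = k_lo + 0.618 * (k_hi - k_lo)
        if sse(k1) < sse(k2):
            k_hi = k2
        else:
            k_lo = k1
    return 0.5 * (k_lo + k_hi)

# Synthetic "history" generated with k = 5, then recovered by the match
times = [10.0, 50.0, 200.0, 1000.0]
observed = pwf_model(5.0, times)
k_fit = auto_match(times, observed)
```

The synthetic history generated with k = 5 is recovered by minimising the squared Pwf residuals; with numVar = 2 or 3, RESOLVE extends the same idea to porosity and PVT variables.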
Where:
multi-phase multiplier: the correction parameter defined by the user on the Tight Reservoir Advanced Options screen, as shown below.
Main-Phase: the main phase is oil for an oil reservoir and gas for a gas reservoir.
Warning - The change in Pwf is less than 20 percent of the initial Pwf. Message: "Small change in Pwf in PDTD response = x (psig). Check reference rate".
Warning - The simulation time is less than 5 years. Message: "Minimum simulation time of 5x365 days recommended".
Warning - The history rate used to generate the PdTd curve is more than 10 times larger than the reference rate. Message: "Mean historical constant rate = x. Very high reference rate can lead to rapid decline in the PDTD response".
Warning - The history rate used to generate the PdTd curve is less than 10% of the reference rate. Message: "Very low reference rate can fail to capture transient response at early time in the PDTD response".
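The warning conditions can be collected into a small sketch (illustrative Python; message wording abbreviated from the table):

```python
def pdtd_warnings(mean_hist_rate, reference_rate, pwf_change_frac, sim_time_days):
    """Reproduce the four warning conditions listed above (sketch)."""
    w = []
    if pwf_change_frac < 0.20:
        w.append("Small change in Pwf in PDTD response: check reference rate")
    if sim_time_days < 5 * 365:
        w.append("Minimum simulation time of 5x365 days recommended")
    if mean_hist_rate > 10.0 * reference_rate:
        w.append("Very high reference rate: rapid decline in the PDTD response")
    if mean_hist_rate < 0.10 * reference_rate:
        w.append("Very low reference rate: early-time transient may be missed")
    return w

# Hypothetical check: mean history rate far above the reference rate
flags = pdtd_warnings(5000.0, 400.0, pwf_change_frac=0.5, sim_time_days=3650.0)
```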
New - This action will erase all existing data for the selected Tight Reservoir data object.
Note this action can ONLY be undone by reloading the latest Resolve (.rsl) model (if one
exists).
Open/Import - Import well geometry and production history from various formats:
- Petroleum Experts MBAL Type Gas Curve well geometry and production well history.
Note - The button on the Well History screen can be used to import the production well history (without importing well geometry) from a selected MBAL (.MBI) file.
The MBAL well geometry import feature is currently restricted to MBAL Type Gas Curve models. Production well history may also be imported from MBAL Material Balance type models.
- Petroleum Experts Tight Reservoir dataset from a previously exported (.rtr) file. This can be useful to create multiple clones of a single Tight Reservoir object (each representing a single well).
Undo - This action reverts data on all wizard screens to the attribute values in effect when the wizard was most recently opened.
Plot - Plot History versus Simulation (various attributes), available only from the History screen.
Note - The Cancel action will undo all changes made across the various wizard pages (restores all attributes to the values at the time the wizard was most recently opened).
Clipboard support
Copy and paste ALL data associated with a Tight Reservoir Data object instance using the commands available in the pop-up menu (shown on clicking the right mouse button over the object).
Note the target of the paste operation will retain any existing links e.g. to BO-PVT data objects.
2.6.28 Water Chemistry
2.6.28.1 Introduction
The Water Chemistry data objects are designed to make the water chemistry functionality of REVEAL accessible as calculations within RESOLVE. This allows very powerful chemical thermodynamic calculations to be performed on a much broader scope, as the objects can access the wider functionality within RESOLVE.
A comprehensive database of water-based reactions is present, which includes ionic species, solid minerals and gases. Furthermore, the interaction of gases with the aqueous phase
includes original work to best represent literature data on the solubilities of gases such as CO2
and H2S in water for various pressures and temperatures. The vapourisation of water within the
gas phase is also modelled through multi-phase calculations.
As data objects, these can access the wider functionality within RESOLVE including workflows
and integration with full field models. Changing water compositions can be passed from
REVEAL models dynamically during the forecast along with conditions everywhere in the
integrated network. Therefore, calculations such as scaling possibilities and chemical
compositions for pH calculations, corrosion etc. can be performed. Moreover, operating
guidelines can be generated for wells to prevent these issues.
These objects can be used for a number of important applications, some of which are
described below.
Scale precipitation
Scaling can be a very serious issue in a number of fields and causes problems such as tubing, casing and perforation blockages, equipment damage and failure. A methodology for predicting
and quantifying the amount of scale for the life of the field is invaluable. This allows methods of
scale prevention and inhibition to be incorporated in the field operation.
Auto-scaling: Changes in temperature can cause minerals to drop out of the formation water.
This is due to the fact that the solubilities of the minerals are functions of temperature.
Incompatible waters: When waters with different chemical compositions mix, reactions can
occur which cause the formation of scale minerals. For example, injecting sea water
(containing sulphate ions) into reservoir water with a different composition (e.g. containing
Barium ions) can cause scale formation (Barite, i.e. Barium Sulphate) due to the mixing of
waters. Mixing of incompatible waters can also occur during production from multilayer
reservoirs with different water compositions, or in the surface network when mixing production
waters from different reservoirs.
Drying: Injecting a very dry gas can cause the reservoir water to be vaporised into the gas
phase. This increases the concentrations of the ions in the aqueous phase, which drives scale
formation. A typical example of this is scale formation at gas lift valves, where some of the
formation water vapourises into the gas phase causing scale dropout.
Along with the drying of formation waters in contact with dry gases mentioned above, interaction
of gases with waters can significantly affect the chemistry of waters. For example, injection of
gas with CO2 in the reservoir will lead to the CO2 dissolving in water to form a series of
equilibria involving carbonic acid, bicarbonate and carbonate ions. This makes the water more
acidic and can increase the solubility of the reservoir rock (calcite). On the other hand, changes
in temperature can subsequently cause the dissolved calcite to precipitate as scale in the
production tubulars.
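The acidification effect of dissolved CO2 described above can be estimated with a simplified single-equilibrium calculation; the equilibrium constant and dissolved-CO2 amount below are hypothetical round numbers, and the full database calculation in REVEAL/RESOLVE handles many coupled equilibria:

```python
import math

def ph_weak_acid(k1, acid_molality):
    """pH of a dilute weak monoprotic acid (e.g. dissolved CO2 acting as
    carbonic acid), neglecting water autoionisation: [H+] ~ sqrt(K1*C)."""
    h = math.sqrt(k1 * acid_molality)
    return -math.log10(h)

# Hypothetical values: K1 ~ 4.4e-7 and 1e-4 mol/kg dissolved CO2
ph = ph_weak_acid(4.4e-7, 1e-4)
```

Even this crude estimate lands well below neutral pH, illustrating why CO2 injection makes the water more acidic and can increase calcite solubility.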
pH calculations
The Water Chemistry objects can also be used to predict the pH of water as the chemical
equilibria change due to pressure, temperature and mixing. The pH is required for a number of
calculations, including scaling, predicting corrosion etc.
The Water Chemistry Data Object stores a water composition and performs equilibration
calculations for the chemical species present in the water. The composition can be manually entered (e.g. formation water or injection water), dynamically populated from a REVEAL simulation model, or calculated as a result of mixing operations between data objects.
Based on the entered water composition and input pH, the equilibration operation calculates the
equilibrium concentration of the ions in the aqueous phase, amounts of solid precipitate, partial
pressure of gases and the final pH. This accounts for possible chemical reactions between the species and also considers non-aqueous phase calculations (solids/gases).
The layout of the object shown in the screenshot above is explained below.
1. Main tabs
The water chemistry object has the following two main tabs
Input The input section allows entry of all the inputs required for the water
chemistry object. These are explained in detail below.
Results Results of the equilibration calculation are reported in this tab
2. Reference conditions
The conditions of the water sample along with the input pH and pe are entered here. The pH
and pe are calculated after the equilibration calculation on the entered composition.
3. Elements/species list
Elements/species that are to be included in the water are selected using a check-box.
Based on the selection in the section above, a list of ionic species present in the water is
shown. The oxidation state of the element, gram formula weight, input concentration and
equilibrated concentration are also shown.
Similar to REVEAL, the RESOLVE object requires and reports the concentration of the master species only; intermediate species are not reported. The master species can be considered to be the total amount of material in that oxidation state in its usual form. For example, consider the two valence states of sulphur: S(+6) is entered as a mass fraction of SO42- (kg SO42- per kg water) and S(-2) as a mass fraction of HS- (kg HS- per kg water). The calculations internally account for the change in amounts of the species (e.g. SO42-) due to the different chemical equilibria.
5. Minerals
Possible solid minerals that may be formed are shown in this table based on the selection in the
elements list. The molecular formula and gram formula weight are also reported. An input
concentration can be entered, and the equilibrated concentration is reported. By default, the Saturation Index (SI) for all minerals is zero, which means that the database value for the solubility of that mineral in the water will be used. If an SI is entered, then the database solubility of that mineral will be altered.
A positive value for the SI increases the solubility of the mineral. This can be used to model, e.g., scale inhibitors present in the water. A negative value reduces the solubility of the mineral and increases the solid dropout from the water.
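The SI convention can be sketched as follows; the relation Keff = Ksp * 10^SI and the barite-like numbers are illustrative assumptions, not values from the water chemistry database:

```python
import math

def saturation_index(ion_activity_product, ksp):
    """SI = log10(IAP/Ksp): positive = oversaturated (scale can drop out),
    zero = equilibrium, negative = undersaturated."""
    return math.log10(ion_activity_product / ksp)

def effective_ksp(ksp, target_si):
    """Entering a target SI shifts the database solubility product:
    a positive SI raises the effective solubility (e.g. an inhibitor),
    a negative SI lowers it and increases dropout (assumed convention)."""
    return ksp * 10.0 ** target_si

# Hypothetical barite-like example: IAP ten times Ksp gives SI = 1
si = saturation_index(1e-9, 1e-10)
```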
6. Gases
The possible gases that can form as a result of the aqueous chemical reactions are shown. The
partial pressures are calculated after the equilibration and displayed for reporting purposes.
7. Command buttons
Validate - Validates the data entered for the object. If any inputs are missing or out of range, these will be displayed after the validation is done.
Calculate Equilibration - This performs the equilibration calculation. The calculation can be understood as follows: a water sample is initially present at the reference conditions and pH. Subsequently, the master (ionic) species and solid minerals with the entered composition are added to this sample. The final concentrations of aqueous ions and solid minerals, along with the pH and pe, are calculated and reported based on the chemical equilibria in the water. Note that the pH will be changed if the equilibration results in solid precipitation or dissolution.
Transfer - Once the equilibration calculation has been performed, the "Transfer" button copies the concentrations from the "Equilibrated concentration" column to the "Concentration" (input) column.
Clear - This clears all data from the object.
Import - It is possible to export the object description to an XML file and transfer it between models/objects. This button allows a previously exported water chemistry description to be imported.
Export - This exports the chemistry data object description to an XML file.
Ok - Saves changes and exits the object.
Cancel - Cancels all changes.
Help - Displays the online help.
Note on the Water Chemistry Database: The RESOLVE objects use the same water
chemistry database as REVEAL. The database location can be viewed in REVEAL under the
File Menu | Preferences | Water Chemistry.
- Calculate the concentrations of all species in the mixture based on the entered flow rates
- Equilibrate the resultant water composition and report the equilibration results
In the screenshot above, the pressure and temperature at which the mixing calculation should be
performed can be entered. If these values are left blank, then the waters are mixed at an
average pressure and temperature of the connected Water Chemistry objects. Weighted
averages are used for various other parameters including pH, Saturation Indices etc.
The flow rates of the input waters can be entered both in the mixer object and also in their
respective Water Chemistry Data objects (the values are kept consistent when they are
changed in either object).
The output of the Mixer has been connected to a blank Water Chemistry Data object. When the
RESOLVE model is solved, the output object is populated with the results of the equilibrated
mixing calculation.
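The first step, flow-rate-weighted mixing of concentrations, can be sketched as follows (illustrative Python with hypothetical compositions; note that the mixed water now carries both barium and sulphate, the incompatible-waters case described in the introduction):

```python
def mix_concentrations(streams):
    """Flow-rate-weighted mixing of two or more waters.
    streams: list of (rate, {ion: concentration}) pairs; concentrations in
    mass per mass of water, rates in any consistent units (assumption)."""
    total = sum(rate for rate, _ in streams)
    ions = set().union(*(conc for _, conc in streams))
    return {ion: sum(rate * conc.get(ion, 0.0) for rate, conc in streams) / total
            for ion in ions}

# Hypothetical 1:1 mix of a sulphate-bearing water and a barium-bearing water
mixed = mix_concentrations([
    (1000.0, {"SO4--": 2.8e-3, "Na+": 1.1e-2}),
    (1000.0, {"Ba++": 2.0e-4, "Na+": 1.0e-2}),
])
```

The mixer then equilibrates the resulting composition, which is where any barite dropout from the combined barium and sulphate would be quantified.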
2.6.28.4 Water Chemistry PVT Mixer
The Water Chemistry PVT Mixer data object performs a mixing calculation using the connected Water Chemistry Data Object water composition and an Equation of State description (EOS-PVT data object). The gas is assumed to be in equilibrium with the water, and a two-way exchange of species can occur in this situation.
In particular, CO2, H2S and H2O can partition between the water and the hydrocarbon phases. This can cause significant physical effects: for example, injected CO2 from the gas lift gas can dissolve in the water and change the pH, thereby preventing calcite scaling. If the injected gas is dry, then liquid water will change to vapour, thereby causing drying and increasing scaling possibilities. The amount of H2S is also important for souring issues.
The output of the mixing calculation is a final gas composition and water composition such that
the equilibria are satisfied. The output compositions can be stored in a separate Water
Chemistry data object and an EOS-PVT data object.
In the screenshot above, the pressure and temperature at which the mixing calculation should be performed need to be entered. The flow rates of the input water and gas also need to be entered.
The output of the Mixer has been connected to a blank Water Chemistry Data object and a blank EOS-PVT object. When the RESOLVE model is solved, the output objects are populated with the results of the two-phase mixing calculation.
The Water Chemistry Tag Data feature has been added to the REVEAL driver for IPM 10. The
tab automatically becomes available when a REVEAL model with the Water Chemistry option
enabled is loaded into RESOLVE. This enables changing water compositions from the wells in
REVEAL to be dynamically passed to the Water Chemistry Data Object in RESOLVE and vice-
versa. Once the wells have been selected from the list (explained below), a connection can be
made from the well in RESOLVE to the Water Chemistry object as shown below using the link
icon from the RESOLVE toolbar.
As part of RESOLVE, the Water Chemistry Data object itself leverages the wider functionality of
workflows and mixing calculations in an integrated model. The tag data feature therefore allows
these workflows to be run and important decisions taken (e.g. scale prevention) with changing
water compositions automatically populated during a forecast.
The tag data screen requires the user to select wells from a list for which the water composition needs to be passed to/from REVEAL. The wells can be of the following two types:
Data providers - These will be producers for which the water composition has been calculated from the reservoir flow model in REVEAL at any time-step.
Data consumers - These are injectors which will accept data from a chemistry data object to inject into the simulation model for that time-step.
A closed-loop example of the tag data feature is as follows: the water compositions for all the
producers at any time-step are passed into RESOLVE. Subsequent calculations are done e.g.
mixing the waters, scale calculations, water treatment etc. and a new water composition is
obtained. This composition is sent to the injection wells and is used by REVEAL for further reservoir calculations.
2.6.28.6 Water Chemistry functions
This section describes the workflow methods available for all the Water Chemistry objects in
RESOLVE.
Connections
Input connections:
The following data objects can be connected to a Water Chemistry data object:
WaterChemistry Mixer: The resulting water composition from the mixing of Water Chemistry compositions will be stored in the object.
WaterChemistryPVT Mixer: The resulting water composition from gas-water equilibrium
calculations at flowing conditions will be stored in the object.
Output connections:
The Water Chemistry data object can be connected to the following data objects:
WaterChemistry Mixer: Mixing with other Water Chemistry compositions
WaterChemistryPVT Mixer: Mixing the given water composition with a hydrocarbon
composition in an EOS-PVT data object.
Workflow: Properties to/from the Water Chemistry object can be passed from/to a workflow
REVEAL Driver: Data from a REVEAL simulation model can be directly passed to/from a
Water Chemistry data object as described in the tag data section
Properties
The following object properties can be accessed from a Visual Workflow using an assignment
element
Name
Pressure
Temperature
pH_In
pe_In
MassSolventRate
MassSolventandSoluteRate
VolumeSolventRate
pH_Result
pe_Result
GasPressure
GasVolume
GasMoles
IonicStength
Arrays :
GasesArray
MasterSpeciesArray
MineralsArray
AqueousArray
Functions:
The following functions can be called from a Visual Workflow operation element
Add species element - Parameters: WaterChemistryDataObject = label of the pxWaterChemObject, string = Element Name. Adds a species (e.g. Ba) and any related minerals/gases to the pre-existing water chemistry composition.
Dilute aqueous fraction concentrations - Parameters: WaterChemistryDataObject = label of pxWaterChemObject, double = dilution fraction. Multiplies all aqueous concentrations by the value provided.
Dilute mineral fraction concentrations - Parameters: WaterChemistryDataObject = label of pxWaterChemObject, double = dilution fraction. Multiplies all mineral concentrations by the value provided.
Export Water Chemistry as XML - Parameters: WaterChemistryDataObject = label of pxWaterChemObject, string = .xml File Name. Exports the Water Chemistry description (.xml). The composition may subsequently be imported into a REVEAL well (supporting water chemistry) or another Water Chemistry Data Object.
Note: P, Q, T values included in the (.xml) file do not override REVEAL well settings. Mineral component(s) target SI are not included in the (.xml) file.
Connections
Output connections:
Water Chemistry data object: The water composition from the mixing operation can be stored
in a Water Chemistry data object.
Workflow: Properties to/from the Water Chemistry Mixer object can be passed from/to a
workflow
Connectivity to application items
None
Properties
The properties listed may be set through a Visual Workflow assignment element.
Pressure
Temperature
Functions:
The following functions can be called from a Visual Workflow operation element
Get the Water Chemistry mixer water results - Parameters: WaterChemistryMixerDataObject = label of Water Mixer Object, WaterChemistryDataObject = label of Water Chemistry Object. Gets the water output from a Water Chemistry Mixer Object calculation (RESOLVE run).
Perform Water mixing calculation - Parameters: WaterChemistryMixerDataObject = label of Water Chemistry Mixer Object. Performs a water-water mixing operation using the weighted average of the input conditions (i.e. water composition inputs).
Perform Water mixing calculation (using data objects provided as parameters) - Parameters: WaterChemistryMixerDataObject = label of Water Mixer Object, WaterChemistryDataObject = label of Water Mixer Object input 1, WaterChemistryDataObject = label of Water Mixer Object input 2, WaterChemistryDataObject = label of Water Mixer Object output, double water rate 1, double water rate 2, double temperature, double pressure. Performs a water mixing operation using the data objects provided as parameters. The Water Chemistry data objects provided do not have to be physically connected in a RESOLVE model. This function is therefore useful, e.g., if the same object needs to be used to perform mixing from different input objects during a forecast, or to write results to a different object.
Connections
Input connections:
Water Chemistry: The flowing Water Chemistry composition for the mixing operation
EOS-PVT: The gas composition input for the mixing operation
Output connections:
Water Chemistry: The resulting water composition from gas-water mixing at flowing conditions
EOS-PVT: The resulting PVT composition from gas-water phase exchange at flowing
conditions
Note - The output connection(s) are optional but are necessary to view the resulting water and PVT compositions; e.g. it is possible to omit the EOS-PVT output connection if only the resulting water composition is of interest.
Workflow: Properties to/from the Water Chemistry Mixer object can be passed from/to a
workflow
Connectivity to application items:
None
Properties
The properties listed may be set through Visual Workflow assignment elements
Pressure
Temperature
PVTRate – mass rate
PVTVolumeRate
PVTRateType
PVTFinalRate – EOS-PVT output rate
WaterEvaporated - Flag to indicate if water has completely evaporated
Note
Pressure and Temperature values are required to define the reference conditions for the mixing
process. If unset the reference conditions specified through the connected Water composition
data object are used.
Functions:
The following functions can be called from a Visual Workflow operation element
In the following example, a Water Chemistry PVT-Mixer object is set up with the corresponding inputs and outputs (as explained in the manual above). A data set and a workflow are also present:
The dataset is set up with the first (zeroth) column labelled "Temperature", which has its units specified as temperature:
The workflow consists of an assignment to get the count of the number of rows in the DataSet
above followed by the scale calculation operation element:
The pressure range for the scale calculation is set from 100-5000 psig. The scale calculation operation is set up as below:
Once the workflow is executed, the DataSet is populated with the results of the scale calculation as shown below:
Description
This RESOLVE Well Builder Data Object is designed to build complex REVEAL well
descriptions. Equipment attributes present in this data object match the equipment attributes
found in the REVEAL Well Builder sections.
The primary objectives here are to create exportable REVEAL (.XML) well descriptions and also to couple these with REVEAL-specific RESOLVE Data Objects such as the SAGD Data Object.
The basic idea is to create a cross sectional (flow-path centric) view of a well schematic with
associated Data Grid table for sizing equipment parts.
This section illustrates the design ideas of the well builder tool added in IPM 9 to help create,
modify and maintain complex well models.
Go to General description.
2.6.29.1 General description
The Well Builder user interface (UI) contains a row of Tab items from left to right; initially the
main Completion Designer Tab is disabled until the well deviation survey has been entered.
General Description
Data on this screen is optional; however, the 'Well' field represents the unique identifier for this Well Object and is used in mapping, e.g. to map a connection between this Well Builder Object and a SAGD Data Object.
2.6.29.2 Reference location
This screen defines the situation and reference datum for the current well. The situation can be
Land or Sea for on-land or offshore wells.
The absolute reference can be any location as desired by the user. Additionally, a Zero Measured Depth (ZMD) location is required to be chosen from one of Mud Line (ML), Derrick Floor (DF), Kelly Bushing (KB) or Rotary Table (RT). This ZMD location corresponds to the start of the deviation survey, which means that all depths in the deviation survey must be referred to the ZMD; the ZMD itself can be different from the choice of the absolute reference.
The table allows us to define the elevation offset of our selected ZMD location (e.g. KB) above/
below an absolute reference (which is user-determined) and similarly the earth reference datum
Mud Line (i.e. ground level) above/below the absolute reference. An example data entry that
reflects this is provided below.
Note: Absolute reference is not a requirement and it is possible to assume zero offset (i.e. Mud Line = 0 above Absolute Reference). It is however necessary to supply both Mud Line (ML) and ZMD values, as ML is used to determine the uppermost Drill Region start position and ZMD is used as the start of the deviation survey.
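The datum arithmetic can be illustrated for a simple vertical case; the 85 ft and 60 ft elevations below are hypothetical example values:

```python
def depth_below_mudline(tvd_from_zmd, zmd_above_abs, mudline_above_abs):
    """Convert a true vertical depth referenced to the ZMD location (e.g. KB)
    into a depth below the Mud Line, given each datum's elevation above the
    user-chosen absolute reference."""
    air_gap = zmd_above_abs - mudline_above_abs   # ZMD elevation above ML
    return tvd_from_zmd - air_gap

# Hypothetical land well: KB 85 ft and Mud Line 60 ft above the absolute reference
depth = depth_below_mudline(10100.0, 85.0, 60.0)   # 10075 ft below ground level
```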
2.6.29.3 Deviation survey
The deviation survey can be entered using one of the following three methods: XYZ; Relative
(MD, Inclination, Azimuth); or TVD (MD, TVD, Azimuth). The deviation survey data can be
converted between X,Y,Z and MD, Inclination, Azimuth, or MDs can be added to the main table
and the Inclination and Azimuth calculated. The three methods of entering the deviation survey
are explained below.
XYZ survey
A minimum of two survey grid points is required, where the 1st (X,Y,Z) point matches Start X,
Start Y, Start Z (i.e. the Start X, Start Y, Start Z fields are non-editable in this mode).
For example, to represent a 10,100 ft long vertical well, the following data needs to be entered:
X Y Z
1250 1250 0
1250 1250 10100
Relative survey (MD, Inclination, Azimuth)
In this method the Start X, Start Y, Start Z coordinates, which represent the first (X,Y,Z) point,
are entered. Additional survey points (MD, Inclination, Azimuth) are used to compute the second
and subsequent (X,Y,Z) points.
For example, the simple vertical well described above (MD = 10100 feet) may be represented as
follows:
MD Incl. Azimuth
10100 0 0
TVD survey (MD, TVD, Azimuth)
In this method the Start X, Start Y, Start Z coordinates, which represent the first (X,Y,Z) point,
are entered. Additional survey points (MD, TVD, Azimuth) are used to compute the second and
subsequent (X,Y,Z) points.
For example, the simple vertical well described above (MD = 10100 feet) may be represented as
follows:
MD TVD Azimuth
10100 10100 0
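A Relative survey can be converted to XYZ points as in the sketch below. It uses the simple tangential method purely for illustration; the manual does not state which interpolation the Well Builder itself applies, and the coordinate/axis conventions here (Z positive down, azimuth measured from North) are assumptions.

```python
import math

def relative_to_xyz(start, stations):
    """Convert (MD, inclination, azimuth) survey stations to (X, Y, Z)
    points with Z (TVD) positive down. Tangential method: each interval
    is treated as a straight segment at the station's angles."""
    x, y, z = start
    points = [(x, y, z)]
    prev_md = 0.0
    for md, incl_deg, azi_deg in stations:
        dmd = md - prev_md
        inc = math.radians(incl_deg)
        azi = math.radians(azi_deg)
        x += dmd * math.sin(inc) * math.sin(azi)  # East offset
        y += dmd * math.sin(inc) * math.cos(azi)  # North offset
        z += dmd * math.cos(inc)                  # TVD increment (down)
        points.append((x, y, z))
        prev_md = md
    return points
```

For the vertical well example above, a single station (MD = 10100, inclination 0, azimuth 0) from start point (1250, 1250, 0) reproduces the XYZ table shown earlier.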
The 3D canvas view displays the well survey in an XYZ coordinate system where Z (TVD) is
down. Options are provided to display Equipment and Completion depths.
If multi-laterals are needed, they may be added (or removed) through the Deviation Survey
screen. Each lateral has a parent lateral and an MD on Parent Lateral at which the
new lateral starts. This enables the laterals to be simply re-positioned by changing the MD on
the parent. Note that each lateral's MD starts from zero.
Equipment item depths along the topmost lateral (Lateral 1/main bore) are relative to the
reference depth chosen through the Reference Location screen (e.g. ML, KB, RT etc.).
Equipment depths for all other laterals are relative to the tie point (i.e. parent MD depth) along
the parent lateral. In this way if the parent MD of a child lateral is changed the relative depths of
any equipment items along its length are maintained.
A child lateral must have a parent, e.g. if a lateral is deleted then all descendants of this lateral
will also be deleted by the system.
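The tie-point convention above (equipment MDs on a child lateral are relative to the tie point on its parent) can be sketched as a walk up the parent chain. The data layout here is hypothetical, chosen only to mirror the parent/tie-point description; it is not an actual RESOLVE structure.

```python
def absolute_md(laterals, lateral_name, equipment_md):
    """Resolve an equipment MD, given relative to a lateral, into a
    cumulative MD by adding each tie-point MD up the parent chain.
    `laterals` maps a lateral name to (parent_name, md_on_parent);
    the main bore ("Lateral1") maps to None."""
    md = equipment_md
    name = lateral_name
    while laterals[name] is not None:
        parent, tie_md = laterals[name]
        md += tie_md      # shift by the tie point on the parent
        name = parent
    return md
```

Because the stored depths are relative, changing a tie-point MD moves all equipment on the child lateral without editing each item, exactly as described above.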
Lateral Browser
The Completion Designer screen displays the equipment survey and schematic representation
of the selected lateral. The lateral browser shows the hierarchical relationship between
individual laterals.
Show Equipment depths - Displays equipment depth information and 3D well trajectory
survey.
Show Completion depths - Highlights completed sections along the 3D well trajectory survey
Note: Select the "Show Equipment depths" option and move the mouse along the length
of the survey; tool tips display an equipment summary with depth. A full range of mouse and key
shortcuts is available (click anywhere on the 3D canvas and press F1):
Tip: When pasting data into the deviation survey from an external source, remember to Copy/
Paste numeric values only (i.e. do not select units or other headings). Also pre-set the MD and
TVD units to match the data on the clipboard prior to any Paste operation.
Note: A well description exported to a REVEAL (.XML format) file will include sufficient additional
segment nodes to capture the trajectory twist and turn (azimuth and inclination change) in
addition to the full deviation survey.
2.6.29.4 Completion designer
The Completion Designer tab is enabled after the deviation survey has been entered. The
Completion Designer is the main work area and is divided into regions (shown here).
The workflow for defining the completion is in the order of decreasing equipment
diameters. This means that the drill region, which covers the entire length of the
well, is defined first, followed by casing, tubing and then additional equipment inside the wellbore.
Go to Adding equipment
The Completion Designer supports both a point-and-click well building approach and a grid
row-by-row approach to building the well schematic, to suit user preference. It is possible to add
equipment items in turn by browsing, selecting and then clicking on the Well Schematic. A range
of the most common equipment item icons is provided on the toolbar, and the full range of
supported equipment types is selectable from both the Equipment menu option and the
Equipment Browser.
The 'Equipment Browser' and the 'Menu Bar' options can be used to directly add pieces of
equipment by first clicking at the desired equipment in these sections and then clicking at the
approximate location in the 'Well Schematic' section. The equipment location can then be
specified in the equipment attributes window that appears after the element is added.
It is also possible to add equipment using the Data Grid Add command and entering the
equipment specifications in the attributes window which appears after the element is added:
Each row of the Data Grid effectively represents a model instance of commonly available
equipment. This is uniquely defined by a combination of sub-type, description, MD and string
identifier (e.g. base pipe, second tubing).
The Equipment Attributes | Equipment selection (drop down list) displays all available
equipment records matching this sub-type (e.g. Drill Region, ICD, Casing etc.) found in the
model database as well as the common external database (see 'Equipment Database').
When entering equipment attributes it is possible to manually enter the data (e.g. casing ID/OD)
or alternatively choose from a list of commonly available equipment. As an example, click on
the Add command on the Data Grid and select Casing from the SubType drop down list. The
Casing Equipment Attributes screen appears.
At this stage we can directly enter a casing OD/ID to fit the surrounding hole and click on OK or
alternatively click on the Tubular Goods Lookup command and select a record from the
standard list of tubular items that appears.
Equipment Location
The position of each equipment item instance is uniquely specified by an MD and string ID. Note
that the MDs for the equipment do not need to have the same values as entered in the original
deviation survey; the equipment positions will be interpolated based on the original deviation
survey.
The location of the equipment can be entered as either the MD (Top), i.e. the start location of
the equipment, or the MD (Bottom), which represents where the equipment terminates. There is a
selection box that can be used to switch between these two modes of data entry.
Importantly, when editing the MD values in the data grid directly, it is necessary to press 'Apply'
after each change to record the values in the model.
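The two entry modes differ only in which end of the item the MD refers to; given the item length, either one determines both bounds. A minimal sketch of that arithmetic (function and mode names are illustrative, not RESOLVE's):

```python
def md_bounds(md, length, mode="top"):
    """Return (md_top, md_bottom) for an equipment item from either
    data entry mode: mode="top" treats `md` as MD (Top), while
    mode="bottom" treats `md` as MD (Bottom)."""
    if mode == "top":
        return md, md + length
    if mode == "bottom":
        return md - length, md
    raise ValueError("mode must be 'top' or 'bottom'")
```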
Wherever possible, the length of the equipment can be specified in the Equipment Attributes
screen that can be accessed by selecting the equipment and clicking 'Edit' in the data grid
above.
To make data entry for repetitive pieces of equipment easier (e.g. assemblies of ICDs and
packers), the 'Clone(Assembly)' option can be used. This option adds the selected assemblies
at the end of the data grid and calculates the MD values for these pieces of equipment
automatically. The rows of equipment to be cloned should be selected in the data grid
as shown above and the Clone(Assembly) option clicked:
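The MD arithmetic behind cloning an assembly can be sketched as below: each copy is shifted by the assembly span so that it starts where the previous one ends. This is an illustration only; the actual Clone(Assembly) command may apply different spacing rules.

```python
def clone_assembly(assembly, n_copies=1):
    """Append shifted copies of an equipment assembly.
    Each row is (subtype, md_top, length); copies are offset by the
    assembly span (bottom minus top of the selected rows)."""
    top = min(md for _, md, _ in assembly)
    bottom = max(md + ln for _, md, ln in assembly)
    span = bottom - top
    rows = list(assembly)
    for i in range(1, n_copies + 1):
        rows.extend((name, md + i * span, ln) for name, md, ln in assembly)
    return rows
```

For example, a packer at 5000 ft (3 ft long) followed by a 12 ft ICD clones to a packer at 5015 ft and an ICD at 5018 ft.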
Note - For a single lateral well, or the parent lateral of a multi-lateral well, the MD displayed is
relative to the reference datum selected on the Reference Location Tab screen (e.g. KB). For
all child laterals the reference depth is the tie point on its parent lateral.
Lateral Browser
The Completion Designer screen displays the equipment survey and schematic representation
of the selected lateral. The lateral browser, in the lower left-hand corner of the screen, shows the
hierarchical relationship between individual laterals and allows the selection to be changed.
The tie point(s) of any associated children are indicated on the schematic. Laterals, their child/
parent relationships and their deviation surveys are defined on the Deviation Tab screen.
Lithology
It is possible to add the lithology for the well object: this includes the porosity/permeability profile
along with other properties. The lithology is used by the ICD Analysis data object to generate
layers based on the permeability/porosity profile.
These options allow the entire completion to be visualised, or various equipment can be turned
on or off.
The true scale schematic displays the completion equipment with their lengths scaled to the
overall well length:
A radial scale schematic can also be viewed, which clearly shows locations where the
equipment IDs change by scaling in the radial direction:
Cross-sectional views
The RESOLVE well builder shows the cross-sectional views of the completion at different
locations: this allows comparisons with the REVEAL well builder which works on creating
equipment based on cross-sectional views (see the REVEAL documentation for further details
on building well models).
These views can be accessed in the Completion Designer screen by selecting different
equipment either through the data grid or directly from the well schematic:
Cross-sectional views are also available in the equipment attributes screen, and the cross-
sections at different locations can be viewed by changing the depth slider on this screen:
Go to Equipment Database
2.6.29.4.3 Equipment database
Equipment Database
A database can be created to define and name commonly used equipment items. If this
database is empty, then a small selection of pre-configured tubing types are automatically
added. In this way equipment parts can be either model database specific or selected from a
common external database that can be shared between Well Objects, Resolve Models and,
depending on file location, between PC machines and users.
External Database
This is a common equipment database available for all RESOLVE models for that user: the file-
path is shown in the database screen.
Model Database
The model database consists of any equipment items added to the given RESOLVE model and
is specific to that model. It is possible to add a model database item to the external
database by selecting the item in the model database and clicking 'Add to external database':
The external database is editable by selecting the equipment from the table and using the 'Edit
External Database item' and 'Delete from External Database' options. The model database
items can be edited from the data grid and not from the screen above.
Additional commands to export the given database to a file and to import an existing
database into the RESOLVE model are also available.
Note that the common database is not to be confused with the manufacturer specific (ICV,
Generalised ICD, Nozzle ICD and Equalizer) database. The Well Builder imports this database
through REVEAL and management of this database is via the REVEAL interface.
Equipment parts in the RESOLVE well object are user defined and are not manufacturer
specific; the exception to this are the Equalizer and ICD types, which are imported from the
REVEAL database. The ICD types (screen, ICV, ICD etc.) include both vendor-specific and
user-defined equipment parts. Please refer to the REVEAL documentation sections for more
detailed information.
Note - A REVEAL model may include various equipment descriptions. If an individual detailed
well description is exported from REVEAL (as an .xml file) and subsequently imported as an XML
file into the RESOLVE Well Builder, it is necessary to ensure any user-defined equipment in
REVEAL (e.g. ICV/ICD types) used within the detailed well description is first saved to the
common database before importing the file.
Moving forward, the well builder object will be a platform for creating wells for a variety of uses
as the tools develop and evolve.
There are a number of equipment items available within the RESOLVE well builder object.
Several equipment items have corresponding REVEAL equipment types when a detailed well
is exported from RESOLVE to REVEAL. However, some equipment items have no
correspondence in REVEAL and are present in RESOLVE only for visualisation or cosmetic
purposes. Therefore, when exporting to REVEAL, only the equipment that is recognised by
REVEAL will be exported from this data object; the other available equipment is for
visualisation purposes in RESOLVE.
The table below lists the RESOLVE equipment types and the corresponding equipment in
REVEAL when a well description is exported. For the RESOLVE equipment, tubular attributes of
all string items (e.g. Safety Valve, Mandrel, Pump etc.), including ID/OD and heat transfer
attributes, can be entered directly in their attributes screens or set to be inherited from the
underlying tubing object (see the section on Extended Tubing concepts). ICD components (ICV,
Generalised ICD, Nozzle ICD and Equalizer) are vendor specific, have specific pre-set lengths
and must be set with these. Others, such as the orifice and screens, have a flow area per length.
The equipment in the table below is arranged so that equipment with a correspondence in
REVEAL comes first, followed by equipment present for visualisation purposes only. As the
tool evolves, the equipment purpose is likely to change, and this table will be kept up to date to
reflect these changes.
Equipment name | RESOLVE description | REVEAL export
Drill region | Requires a size (i.e. drill bit size). Can also attach perforations to the drill region to create an Open Hole completion | Heat transfer attributes exported
Tubing | Tubing (Base or second) | Base pipe or second tubing
Casing | Casing | Casing
Coiled Tubing | Coiled Tubing | Coiled Tubing
Perforations | Perforations | Completed well section ("completed" = Yes)
ICV | ICV | ICV
ICD | ICD | ICD
Sleeve | ICV | ICV
Screen | Screen. Possibility of changing screen sizes and centering a 'user defined' screen to enter the flow area/length | Screen. Possibility of changing screen sizes and entering a 'user defined' screen to enter the flow area/length
Mandrel | Base pipe orifice | Base pipe orifice
Safety Valve | Base pipe with optional base pipe orifice | Base pipe with optional base pipe orifice
Ball Plug | Base pipe packer: to close the base pipe | Base pipe packer: to close the base pipe
Gas separation | Gas separation | Gas separation
Y-tool | Requires/takes base pipe attributes | Screen
Xmas tree | Takes underlying tubing attributes | Not exported - underlying tubing exported
Cement | Cement | Not exported
Generic Pump | Pump | Not exported
ESP_pump | Pump | Not exported
ESP discharge head | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
ESP Pump Intake | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
ESP motor | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
ESP motor base plug | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
ESP MSU | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
ESP Protector seal | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
WEG | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
Mule Shoe | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
Tubing Hanger | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
Pup joint | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
XOver Up | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
XOver Dn | Requires/takes underlying base pipe attributes | No corresponding REVEAL equipment: underlying base pipe exported
Note: the system validation will treat empty/missing data as an error condition when performing
actions, including the Export and Run commands.
Input connections
None
Output
Validation
At any time during the well building process we can click on the validation icon in the Menu Bar
commands area. A warning message may appear to indicate that the well is not complete, with
any errors listed.
In the RESOLVE well builder, all types of tubing terminate open inside the wellbore. This means
that whenever a tubing, second tubing or coiled tubing terminates, the end is open such that fluid
can flow from the annulus/casing into the tubing. Additionally, if the tubing definition begins
partway down the wellbore (and not at the top), then the upper end of that tubing will also be
open.
In modelling applications if it is desired to plug or close any end of the tubing/second tubing/
coiled tubing, then this can be achieved by adding a Ball Plug to the end or the top of the tubing.
The tubing plug type can be changed (Top or Bottom) in the ball plug attributes screen, and its
location (base pipe, second tubing etc.) can also be set:
The exported REVEAL file will have equipment that terminates (effectively plugs) the tubing at
the location indicated.
To export to a REVEAL well description, go to File | Save As from the Well builder interface
menu bar and choose 'REVEAL XML files' as the file type:
The Well Builder by default uses the concept of extended tubing to encapsulate the ID/OD,
roughness and heat transfer attributes of the base pipe or second tubing string. All completion
equipment "jewellery" items (e.g. Pump, SC-SSSV, Mandrel, Packer etc.) added to the well
schematic will inherit the ID/OD and roughness attribute values of the underlying tubing
equipment item.
The extended tubing approach offers advantages as it enables ID/OD changes over the well
schematic strings to be recorded (and hence changed) in one place (i.e. the tubing equipment
model instance) and this makes it possible to rapidly evaluate ID/OD change for complex well
descriptions which may include many equipment parts.
For example, an SC-SSV 3 ½ is added to a well model description @ 500 ft MD. A 3 ½ inch
10.3 lb/ft tubing is run from 100 ft to 1000 ft on the same base pipe. The exported REVEAL well
description will indicate @ 500 ft MD an equipment (node) with the attributes of that tubing.
In this way the tubing (ID/OD, heat transfer) attributes correspond to the tubing run through
the Safety Valve.
Note that this 'Extended tubing' option is on by default, however it is possible to turn it off by
unchecking the 'Use Tubing attributes' option as shown below:
This will allow the user to directly enter the equipment attributes (ID, OD, roughness and heat
transfer) rather than using the values of the underlying tubing equipment. Unchecking the box
means that it is not necessary for the underlying tubing equipment to 'overlap' (in depth) with the
completion jewellery item (the safety valve in the above example).
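The extended-tubing inheritance described above can be sketched as a simple attribute lookup. The dictionary field names here are illustrative, not RESOLVE's internal names.

```python
def effective_attributes(item, tubing):
    """Resolve the ID/OD/roughness used for a completion 'jewellery'
    item: inherit from the underlying tubing when 'Use Tubing
    attributes' is on (the default), otherwise use the item's own
    values."""
    keys = ("ID", "OD", "roughness")
    use_tubing = item.get("use_tubing_attributes", True)
    source = tubing if use_tubing else item
    return {k: source[k] for k in keys}
```

The design advantage matches the text: with inheritance on, an ID/OD change made once on the tubing item propagates to every piece of jewellery run on that string.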
Multi-String
Dual string completions are supported. The concept here is to effectively run a virtual string
guide from mud line to MD. To create a dual string completion select any equipment item
marked as stringID = equipment base on the Data Grid and change the stringID setting to
equipmentSecond. The equipment will automatically appear on the second string and the
second string guide will then appear. The stringID selection (also available in the various
equipment attributes screens) can be used to move equipment items between the base pipe
and second tubing string at any time. If no equipment items have stringID = equipmentSecond,
the second string guide will disappear.
Some parts e.g. ICV types can also be set to StringID = inflow/annulus in accordance with the
corresponding REVEAL Well Builder equipment attribute sections.
Note - Y-tool (splitter) parts are available for purely illustrative purposes on the well schematic and
are not necessary to indicate the second tubing string. They are also functionally ignored, i.e. they
do not represent a flow path split in any REVEAL well description model generated through export
methods, including the File | Save as REVEAL .XML file command.
Completion/Perforations
Completed sections are indicated by the use of the Perforation equipment item. As for all
equipment items, the Perforations equipment supports the identical set of attributes required by
REVEAL; hence connection factor and Gravel pack addition to CF are supported. Open-hole
(Drill Region) sections of the well may also be completed, in addition to separate sections of
Casing/Liner. Non-contiguous completed sections are indicated by the addition of Perforations
(completion) icons for each corresponding section.
Export formats
The well geometry can be exported as one or more of the currently supported export formats by
going to File | Save As from the Well builder interface menu bar
The Petroleum Experts REVEAL (.XML) file represents a REVEAL well description file and
may be imported/exported by REVEAL (IPM version 9 onwards) through the Import/Export
command found on the Detailed Well Description input screen and corresponding Open Server
commands.
1. REVEAL uses a cross-sectional equipment survey vs depth representation of the well, while
the Well Builder RDO uses a top-down physical equipment vs measured depth (across
adjacent strings) approach. On import from REVEAL, the Well Builder RDO will substitute
an annulus profile along the well with a matching casing-within-hole profile (except where
base pipe OD = annulus ID, i.e. no annulus exists). This matching profile is artificially
generated and will not represent the physical reality (as the hole size and casing ID/OD data
are not available) but may of course be adjusted by the user after import. This means that any
open hole section represented by an annulus in REVEAL is interpreted as a cased hole
(in the RDO).
2. In the RESOLVE well builder (RDO), depth references are defined by Equipment (MD
bottom) relative to the start coordinate location. For exporting to REVEAL, datum reference
information (Mud Line) and the primary depth reference (ZMD) are not transferred directly as
REVEAL input attributes; instead they are translated as an (automatically computed) offset
representing 'the start of the well' along the deviation survey, through the MD on parent
lateral attribute for the main bore lateral (Lateral 1). The locations of the equipment in the well,
which are important for the pressure drop calculations, are correctly accounted for.
3. REVEAL equipment is defined based on the dimensions entered for the base pipe, coiled
tubing and annulus:
Base Pipe ID > 0 = Tubing
Base Pipe ID > 0 and Coiled Tubing ID > 0 = Coiled Tubing
Annulus = on and Annulus ID > 0 = Casing
Annulus = on, Second Tubing = on, Second Pipe ID > 0 = Second Tubing
Note - The REVEAL Friction and Well Model options (e.g. Annulus, Coiled Tubing etc.) control
(or override) the usage of the underlying base, second and coiled pipe ID/OD data. A dual string
detailed well description fully describes equipment on the base and second pipe: Second
Tubing option selected = dual string; Second Tubing option not selected = single string.
4. The RESOLVE well builder (RDO) is directly integrated within REVEAL (from IPM 11
onwards), i.e. it is possible to open a model within REVEAL, view the detailed well description,
export it directly to the well builder (RDO) (i.e. without RESOLVE), make user edits/changes to
the well description and finally re-import the changes back into REVEAL.
At this stage a warning is issued: 'Due to different design methodologies (cross
section or tubular top down) equipment naming is not preserved'. Note that this (i.e.
REVEAL->RDO->REVEAL) will not impact the ID/OD and annulus profiles or the original
REVEAL equipment types and depths unless the user manually makes changes to equipment,
depths or the ID/OD profile.
5. Any equipment impacting fluid path or heat transfer that is not directly supported by REVEAL
will be substituted by an equivalent type, e.g. orifice, restriction or plain pipe. Note that the Well
Builder RDO equipment attributes screen will include sections for orifice, restriction or ICD etc. to
demonstrate the matching type in REVEAL, e.g. a Safety Valve has an Orifice section where the
appropriate REVEAL-specific attributes, e.g. OrificeK (in this case), may be entered.
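The dimension rules in point 3 above can be transcribed literally as a small classification function. This is an illustration of those rules only; REVEAL's own precedence and option handling may differ.

```python
def reveal_equipment_types(base_id, coiled_id=0.0, annulus_on=False,
                           annulus_id=0.0, second_on=False, second_id=0.0):
    """Classify a cross-section from the entered dimensions, following
    the four rules listed in point 3 above."""
    types = []
    if base_id > 0:
        types.append("Tubing")
    if base_id > 0 and coiled_id > 0:
        types.append("Coiled Tubing")
    if annulus_on and annulus_id > 0:
        types.append("Casing")
    if annulus_on and second_on and second_id > 0:
        types.append("Second Tubing")
    return types
```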
This command saves a bitmap image of the well schematic (more traditional printed reports are
also supported).
Clipboard support
It is possible to Copy and Paste data associated within a Well Object instance using the
commands available in the pop-up menu (available on clicking the right mouse button over the
object).
1. Select the Well Object instance to be cloned within the RESOLVE model canvas (or add a
new Well Object instance to the RESOLVE model. Double click on the object and build the
well description).
2. Ensure the Well Object instance to be cloned is selected and right click and select the Copy
command as shown above.
3. Select the Paste target i.e. another Well Object instance within the RESOLVE model canvas.
This can be a newly created well object instance.
4. Ensure the target Well Object is selected, right click and select the Paste command.
The main Well Schematic Data Grid supports copy and paste. This is useful to rapidly build a
complex well description including physical sub-assemblies or logical groupings with repeated
patterns, e.g. a lower completion comprising multiple isolated compartments (packers, ICVs,
perforations).
1. Double click on the object to open it. Select the row records representing the equipment items
and select the Copy command.
2. The complex well description can then be extended by repeated use of the Paste operation.
3. Adjust the MD bottom attribute in the corresponding pasted rows representing each new well
equipment item to reflect the offset of the repeated pattern/sub-assembly and click 'Apply'.
The data grid is not designed to have Microsoft EXCEL-type functionality, since each row
includes links to databases and internal validation calculations. However, to create well
descriptions it is often easier to have this functionality (e.g. adding values, repeated units, etc.).
To address this, the RESOLVE well builder supports Copy/Paste to and from EXCEL. The rows of
interest from the data grid can be copied to the clipboard by selecting the rows and selecting
'Copy' as shown above; the data can then be pasted into EXCEL. Once the modifications are
complete, the data can be copied from EXCEL and pasted back into the data grid by clicking
'Paste' above.
Note: It is important that the number/format of the columns is kept consistent between
RESOLVE and EXCEL for this functionality to be used.
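The clipboard exchange with EXCEL is tab-separated text, so the column-consistency rule above amounts to checking the column count on the way back in. A sketch of that check (not RESOLVE's actual parser):

```python
def grid_to_tsv(rows):
    """Serialise data-grid rows to tab-separated text, the form the
    clipboard carries into a spreadsheet."""
    return "\n".join("\t".join(str(v) for v in row) for row in rows)

def tsv_to_grid(text, expected_cols):
    """Parse tab-separated text back into rows, rejecting any row whose
    column count no longer matches the grid."""
    grid = []
    for line in text.splitlines():
        cells = line.split("\t")
        if len(cells) != expected_cols:
            raise ValueError(
                f"expected {expected_cols} columns, got {len(cells)}")
        grid.append(cells)
    return grid
```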
Entire well strings or sub-assemblies may be transferred between Well Object instances. When
using the Paste operation, any previously non-existing equipment items, uniquely defined by
sub-type (e.g. Safety Valve) and description, are automatically added to the well description
model (equipment database).
Clone - This feature can be used to duplicate 'logical groupings' of equipment along the well
profile, e.g. select one or more equipment items (Isolation packer, ICD and Completion/Perforations)
by holding the Control key and clicking (left mouse) to select each record in turn (from the
grid). Subsequent and successive presses of the Clone button will result in repeated (Packer-
ICD-Perf) equipment added logically below the last, facilitating the rapid building
of a multi-compartment, smart well schematic.
2.6.29.6 Well builder functions
The following functions for the well builder data object are accessible using a visual workflow.
Functions
Note: The functions should be called in the order listed below, i.e. datum information should be set
before the deviation survey, and the deviation survey must exist before AddEquipment is invoked.
Drill Regions must exist over the well survey depths before additional equipment is
added to the well description.
Clear
Initializes the selected WellObject (erasing any existing data including survey and equipment).
The required input is the name of the WellObject in RESOLVE. No value is returned.
SetGeneralInformation
This function sets general well header information. The required arguments are the name of the
Well Object in RESOLVE, followed by the values to be set (WellName, Field, Welllicense,
Date, Company, Analyst, Location, Platform). These are the parameters in the 'General
description' tab in the Well Object. No value is returned.
SetDatumReference
The use of this function is optional; however, this function initializes the selected WellObject and
sets its Datum Reference. The required inputs are the name of the Well builder object in
RESOLVE.
No value is returned.
AddLateral
The main bore lateral is automatically added to a well model. If multi-laterals are required, this
function creates additional child laterals. The required inputs are the 'parentMD', i.e. the MD on
the main bore at which the lateral will be added, along with the name of the parent lateral. No
value is returned.
SetDeviationSurvey
This function sets the deviation survey for the selected WellObject to match a deviation survey
entered in a DataSet. The DataSet object must be configured as Preset table type =
DeviationSurvey as shown below:
The DataSet deviation survey data may be supplied as any of the survey types XYZ, Relative
(MD, Inclination, Azimuth) or TVD (MD, TVD, Azimuth).
The first row of the DataSet data should always be configured as XYZ only, representing the
StartX, StartY and StartZ coordinates of the well as shown above (this first row is not directly
added to the imported table). The rest of the rows represent the deviation survey.
The required inputs are the name of the WellObject in RESOLVE, the name of the DataSet from
which the deviation survey is to be imported, the deviation survey type (whether the Relative,
TVD or XYZ survey type is to be imported), the parent MD on the main bore (for multi-lateral
descriptions), and the lateral name ("Lateral1" indicates the main bore).
No value is returned.
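The first-row convention for the DataSet table can be sketched as a split between the XYZ start row and the survey rows that follow it. This illustrates the convention only; the real DataSet object is accessed through RESOLVE, and the function name here is hypothetical.

```python
def split_dataset_survey(rows):
    """Split a DataSet deviation-survey table into the XYZ start row
    (StartX, StartY, StartZ) and the remaining survey rows, per the
    convention described above (the first row is not itself added to
    the imported table)."""
    if len(rows) < 2:
        raise ValueError("need an XYZ start row plus at least one survey row")
    return tuple(rows[0]), rows[1:]
```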
GetDeviationSurvey
This function returns the deviation survey table from the values entered in a WellObject into a
DataSet. The required input is the name of the WellObject in RESOLVE. This function returns a
DataSet object.
AddEquipment
This function adds a piece of equipment to the well model. The required inputs are the name of
the WellObject in RESOLVE, the subType, description, stringIndex, MD, length, ID, OD and
lateralName. These are explained below.
Note that DrillRegion(s) must exist (@ the target MD) and be added prior to adding any
equipment items of the matching subTypes.
Parameter SubType is of type enumeration WellBuilderEquipmentType and can take a variety
of values depending on the equipment to be added.
The description is any user description to be added to the equipment, as a string.
stringIndex corresponds to the location of the added piece of equipment on the string. The
possible values are 0, 1 or 2 (where 1 = equipment on the second pipe, 2 = equipment as inflow
and zero = all other scenarios).
MD is the top depth from ZMD.
lateralName = "" or "Lateral1" indicates the main bore (lateral).
No value is returned.
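The stringIndex mapping above can be sketched as a small lookup. The location names used here are illustrative, not RESOLVE identifiers; only the 0/1/2 values come from the parameter description.

```python
def string_index(location):
    """Map a string location to the AddEquipment stringIndex argument:
    1 = equipment on the second pipe, 2 = equipment as inflow, and
    0 = all other scenarios."""
    return {"second": 1, "inflow": 2}.get(location, 0)
```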
DeleteEquipment
This function deletes a piece of equipment from the well model. The required inputs are the
name of the WellObject in RESOLVE, the subType, description, stringIndex, MD and lateralName
("" or "Lateral1" indicates the main bore lateral). These inputs are the same as those for the
'AddEquipment' function above, where their descriptions are provided.
No value is returned.
SetCompletionPerfAttributes
This function sets the completion attributes for those completions that have a connection to the
reservoir. This means that the AddEquipment function should be used before this function,
where the SubType must equal WellBuilderEquipmentType.Perforations.
The inputs are the name of the WellObject in RESOLVE, the subType, description, CFMultiplier,
flowingRadius, nonDarcy, flowingAreaPerLength, turbulencefactor, damagePermeability,
damageLength, skin and CmpSkin.
The description is any user description to be added to the equipment as a string.
No value is returned.
SetOrificeAttributes
This function sets attributes for an orifice in the WellObject. The inputs are as follows:
SetCompletionGravelPackCFAttributes
This function sets completion gravel pack attributes for existing Completion/Perforation
equipment. The AddEquipment function should be used before this function, where the SubType
must equal WellBuilderEquipmentType.Perforations.
The inputs are the name of the WellObject in RESOLVE, the SubType, description,
gravelPackTurbulenceFactor, gravelPermeability, gravelPackLength, shotDensity,
perfEfficiency, perfDiameter, GPCalcTurbulence and GPModel.
No value is returned.
SetWrkHeatTransferAttributes
This function sets various heat transfer attributes for equipment depending on the SubType
parameter; the SubType must equal Casing, Tubing or DrillRegion.
The inputs are the name of the WellObject in RESOLVE, the SubType, description, stringIndex,
basePipeConductivity, heaterConductivity, coiledTubingConductivity, isolationConductivity
and htcToReservoir.
No value is returned.
SetFrictionAttributes
This function sets friction attributes for equipment in the well object. The inputs are the name of
No value is returned.
SetICVTypeAttributes
The inputs are the name of the WellObject in RESOLVE, the SubType, modelLabel, stringIndex,
ScreenType, ScreenSize, icdIndex, databaseLabel, flowAreaPerLength, flowArea,
dischargeCoefficient and gasSeparation.
ModelLabel is the label used for this instance in the model (e.g. ICD101).
StringIndex corresponds to the location of this equipment in the well. Possible values are 0, 1
or 2, as described for AddEquipment above.
ScreenType is of enumeration type, and this sets the type of equipment. If the SubType is ICV
then the ScreenType can be either Orifice or ICV. If the SubType is ICD then the ScreenType
can be either EqualizerTM, GeneralisedICD or NozzleICD. Possible values are EqualizerTM,
GasSeparator, GeneralisedICD, ICV, NozzleICD, Orifice, Screen and Tubing.
ScreenSize: this argument applies only if the SubType is Screen and is the size of the
screen. Set -1 for all other SubTypes.
icdIndex: This is the index of the equipment in the database list. Note that if the
databaseLabel is used (below), then this argument is unused (set = -1).
databaseLabel: This is the label of the equipment in the REVEAL database as it appears in
the WellObject. The databaseLabel overrides the icdIndex above. If the icdIndex needs to be
used, then the databaseLabel should be set to "".
For example, the database equipment "EQ Helix 40' 6 5/8" 0.4" is entered as follows "EQ Helix
: 40 : 6 5/8\" : 0.4". Note that the colon, white space and backslash characters are required. The
backslash ensures that the quote that follows does not correspond to the end of the description.
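When the label is built in a scripting language, the embedded quote has to be escaped in the same way; a small Python illustration of the same string (the variable name is arbitrary):

```python
# The REVEAL database label 'EQ Helix 40' 6 5/8" 0.4' is passed as a
# colon-separated string; the embedded double quote must be escaped so
# that it is not read as the end of the argument.
database_label = "EQ Helix : 40 : 6 5/8\" : 0.4"

# The escaped form contains a literal double-quote character.
assert '"' in database_label
```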
flowAreaPerLength: This is only relevant if the SubType is ICV and the ScreenType
No value is returned
DeleteLateral
This function deletes a lateral in the WellObject. The inputs are the name of the WellObject in
RESOLVE and the name of the lateral to be deleted. No value is returned.
CreatePNG
This function creates a bitmap (.PNG) image file from the Well Schematic and saves it at the
location indicated.
The inputs required are the name of the well object in RESOLVE and the file path (e.g.
"C:\wellreport.png")
No value is returned.
CreateBandedReport
This function creates a cross sectional report bitmap (.PNG) image file from the Well Schematic
and saves it at the location indicated.
The inputs required are the name of the well object in RESOLVE and the file path (e.g.
"C:\wellreport.png")
No value is returned.
CreateBitmapFromWellObjectXML
This function imports a WellObject description (.WXML file) into a well object in RESOLVE and
creates a bitmap (PNG) image file of the well object at the desired location. The inputs required
are the name of the well object in RESOLVE, the filepath to the WXML file (in double quotes " ")
and the filepath where the output image file (.png) should be saved. No value is returned.
ImportWellObjectXML
This function imports a WellObject graphic well description file (.WXML) previously exported by
a WellObject. The inputs required are the Well Object name in RESOLVE and the filepath (in
double quotes " ") of the WXML file. No value is returned.
ExportWellObjectXML
This exports a WellObject description file (.WXML) which can be read by another WellObject in
RESOLVE. The inputs required are the Well Object name in RESOLVE and the filepath (in
double quotes " ") to the WXML file. No value is returned.
ImportRVLDetailedWell
This imports a complex well description file (.XML) previously exported by REVEAL. The inputs
required are the Well Object name in RESOLVE and the filepath (in double quotes " ") of the
XML file. No variable is returned.
ExportRVLDetailedWell
This function exports a well description file (.XML) that can be imported by REVEAL. This
command invokes a REVEAL application instance depending on the 'Load external application on
export' option setting. This option is found in File | Options in the well builder user interface.
The inputs required are the Well Object name in RESOLVE and the filepath (in double quotes "
") to the XML file. No variable is returned.
ImportWellFromPROSPER
This function imports a downhole equipment survey from a PROSPER (.OUT) file. The inputs
required are the Well Object name in RESOLVE and the filepath (in double quotes " ") of the OUT
file. No variable is returned.
ExportWellToPROSPER
This function exports a downhole equipment survey to a PROSPER (.OUT) file. The inputs
required are the Well Object name in RESOLVE and the filepath (in double quotes " ") of the OUT
file. No variable is returned.
ExportRVLDetailedWellFromWellObjectXML
This function imports a WellObject description (.WXML file) into a well object in RESOLVE and
then creates a detailed well (.XML) file of the well object at the desired location that can be
imported by REVEAL. The inputs required are the name of the well object in RESOLVE, the
filepath (in double quotes " ") to the WXML file and the filepath where the output file (.XML)
should be saved. No value is returned.
2.7.1 JSONData
JSON (JavaScript Object Notation) is a lightweight text-based format for data exchange between
server and client which is easy to parse and generate. It is based on two structures: Object and
Array.
An Object is a collection of key/value pairs entered within curly brackets. The keys
are always strings, whereas a corresponding value can be a string, number, array, boolean,
etc. When multiple key/value pairs are used, they are separated by commas, as shown in
the example below.
An Array is a list of values separated by commas and enclosed in square brackets.
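For example, a small JSON document combining both structures, parsed here with Python's standard json module (the keys and values are illustrative only):

```python
import json

# An object with several key/value pairs; "rates" holds an array.
text = '{"well": "Well1", "active": true, "rates": [1200.5, 1180.0, 950.25]}'

data = json.loads(text)          # parse JSON text into Python objects
assert data["well"] == "Well1"   # string value
assert data["active"] is True    # boolean value
assert len(data["rates"]) == 3   # array value

# Serialise back to a JSON string
round_trip = json.dumps(data)
```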
A description of the JSON format used for creating a GAP model can be provided on request.
Visual Workflows are a way in which Resolve models can be controlled to implement field logic
and perform calculations available in Data Objects. These can be combined such that the field
control logic makes use of results from Data Objects calculations.
The idea of the visual workflows is that they represent a seamless transition of field logic from
plans into the model. As can be seen from the above screenshot, the workflow consists of
various elements which take decisions, perform assignments, perform calculations, and so on.
The systems can be made arbitrarily complex by including sub-flowsheets and subroutines.
Different workflows can be run at different points in the model:
At the start of the run
Before every time step
After every time step, with an option to redo the solve, trigger global optimisation or terminate
the run.
At the end of the run
As part of the main solve, by adding a workflow icon to the main screen
The elements that are used to construct workflows are described on the following pages. If the
workflow is to manipulate variables from the client modules, these variables should first be
published in RESOLVE. This procedure is described in the Publish Application
Variables section.
The purpose of this section is to provide a brief description of the elements available in visual
workflows in RESOLVE. Further detailed information on visual workflows including examples is
available in the Visual Workflows User Manual.
The best way of learning about visual workflows is by following the examples in the Visual
Workflows User Manual. These examples also show how the workflows integrate with the
calculations and properties that are exposed from RESOLVE data objects.
2.8.1 Elements
2.8.1.1 Palette
The various Visual Workflow elements are available from the palette ( ). These are the basic
blocks that allow the user to formulate a given piece of logic, run calculations on the models or
on Data Objects, extract results and take actions on the models.
The following section provides a basic introduction to the different elements available, and for
further details on the implementation, the use of workflows and detailed step by step examples,
please refer to the Visual Workflows User Manual.
2.8.1.2 Decision
Equivalent to an if...then decision. There are two forms: the individual decision element, and
the combined decision/action element.
The individual element acts as a junction in a visual workflow, which can branch out to a Yes
branch or a No branch based on the evaluation of an expression.
Expressions are entered on the left and right hand sides, along with a condition. Multiple lines
can be added with an 'AND' or 'OR' statement. When links are made from this element, the No
branch is drawn first, then the Yes branch. To change these assignments, enter this screen and
click the 'Decision' button.
Please note, an equation can be used as an expression on the left hand side.
The combined element is provided to perform a set of assignments if a condition is met (then),
and a different set if not (else). The upper third is identical to the individual decision element
above. The lower two thirds of the pane are the same as an assignment element. The 'then'
assignments are performed if the expression has been evaluated as TRUE, and the 'else'
assignments if FALSE.
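The logic of the combined decision/action element corresponds to a plain if/else with two assignment lists, sketched below in Python (the variable names and values are illustrative only, not RESOLVE syntax):

```python
# Combined decision/action element: evaluate a condition built from the
# left and right hand side expressions, then perform either the 'then'
# or the 'else' assignment list.
oil_rate = 850.0
target = 1000.0

if oil_rate < target:            # condition: LHS expression vs RHS expression
    choke_setting = 1.0          # 'then' assignments (condition TRUE)
    status = "fully open"
else:
    choke_setting = 0.6          # 'else' assignments (condition FALSE)
    status = "restricted"
```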
2.8.1.3 Switch
Equivalent to a 'switch' statement in computer code, this is a decision element with multiple
outputs which are followed depending on the result of the element's expression.
Toolbar button:
The expression entered in the 'Evaluate' field is evaluated. The logic then looks down the list of
conditions entered in the pane below, and when one of those is evaluated to TRUE then the
logic routes to the element entered in the right hand column.
If none of the conditions evaluates to TRUE, then the logic routes to the 'default' element at
the bottom of the screen. If this is not entered, then the behaviour is undefined.
All the possible outcomes of the switch element must first be connected; the 'route to'
and 'default' drop-down lists will then be populated with the names of the connected elements.
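The routing logic described above can be sketched as follows in Python; this is an analogy, not RESOLVE syntax, and the element names used are hypothetical:

```python
def route_switch(value, conditions, default=None):
    """Return the first route whose condition evaluates to TRUE.

    conditions: list of (predicate, route_name) pairs, checked top to
    bottom, mirroring the rows of the Switch element.
    """
    for predicate, route in conditions:
        if predicate(value):
            return route
    # No condition matched: fall through to the 'default' element.
    return default

route = route_switch(
    3,
    [(lambda v: v == 1, "ElementA"),
     (lambda v: v == 2, "ElementB"),
     (lambda v: v >= 3, "ElementC")],
    default="DefaultElement",
)
```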
2.8.1.4 Assignment
Used to assign values to variables. Assignments can involve arithmetic relations to other
variables. The mathematical functions provided in the Command field below can be used within
the expression itself.
Variables for assignment are selected in the left hand column. Only writable variables are
displayed here.
It is also possible to execute commands from here. The commands are pre-defined by Resolve.
In the example above, a message will be displayed to the log window every time a well is
unmasked.
2.8.1.5 Operation
Operations enable to perform calculations made available by Data Objects, as well as call
OpenServer.
A detailed description of the functions available for the various data objects is given in the Data
Objects section of this user guide. Further information on these functions is also available in the
Visual Workflows Manual.
2.8.1.6 Sub-flowsheet
It is possible to embed a sub visual workflow as part of higher level visual workflow logic.
Toolbar button:
To create a sub-workflow, click on the corresponding toolbar button and then create the element
on the worksheet. To enter the sub workflow, right-click on the element and click on 'Open
SubFlowSheet'. This will display (the first time) a blank flowsheet with only a 'Start' element.
At this point the worksheet can be filled with the required logic of the sub workflow. The workflow
may have multiple termination points, which will need to be mapped to higher level elements.
Alternatively, an existing workflow can be imported from an external file.
Once in the higher level workflow, connect the sub-flowsheet to the required elements. There
should be as many outwards connections as terminator elements in the sub-flowsheet. To map
the terminator elements of the sub-flowsheet to the higher level elements, right-click on the
element and click on 'Map internal to external'.
2.8.1.7 Loop
The loop element enables the user to loop over a variable and to repeat a set of
elements/instructions several times. A loop is defined by a variable, a starting and an end
value, and a loop increment. The variable is incremented by the given increment at every iteration, and the loop
ends when the variable becomes higher than the end value.
While the variable is less than or equal to the end value, the workflow takes the branch 'Loop'.
When the variable becomes strictly higher than the end value, the loop exits and the workflow
takes the branch 'Continue'.
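These loop semantics can be sketched in Python as follows (the start, end and increment values are illustrative):

```python
start, end, increment = 1, 5, 2
visited = []

var = start
while var <= end:        # the 'Loop' branch is taken while var <= end value
    visited.append(var)  # body of the loop (the repeated elements)
    var += increment     # variable incremented at every iteration

# var is now strictly greater than end: the 'Continue' branch is taken
```

With these values the loop body runs for var = 1, 3 and 5, and the loop exits with var = 7.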
The workflow below shows how a Form element looks within a workflow:
The main elements that can be observed in the Form window are as follows:
1. Designer - the designer button opens the Form construction interface where the
custom form can be built using a variety of control elements such as: grid, list box,
combo box, buttons, charts, etc.
2. Form control mapping - elements defined in the Designer section will be displayed in
this table, where they can then be mapped to workflow variables to pass data into the
form, or to use the form to assign values to workflow
variables. The Form appearance (colour, hiding or disabling of Form elements, etc.) can also be controlled from the workflow.
NOTE: If a significant change is required in a form, a rebuild of that form from code is
triggered, which is time-consuming.
The form builder screen columns include entries to allow a control to be enabled/disabled,
and for the foreground and background colours to be set.
Form colours such as the background, button text, button background, etc., can be changed
directly via an assignment element, where the targeted form variable is set to the
corresponding "set equal to" value or to a previously defined array variable value.
3. Assign return value - the name of the variable returned by the Form element. The return
value is controlled by Buttons in the form; the returned value will be assigned
to the defined variable, which can then be used as an input to other workflow procedures.
2.8.1.8.1 Form Designer
The Designer is the internal interface of the Form element that is used to build the form itself. This
interface is used at the workflow construction phase; the created form is then saved and displayed
to the user when the workflow is executed.
The form Designer can be accessed by selecting the corresponding button in the Form window:
An empty form includes a standard 400x400 pixel work area. By selecting the gray square with the
mouse it is possible to change the overall form appearance: the title, size and position on the screen
when it is displayed.
To add elements to the form, right-click in the construction space and choose Add in the displayed
menu. The list of elements that can be added to the form is shown below:
The chosen element will be added to the form. Details of individual elements are further described
below.
2.8.1.8.2 Button
Appearance:
The Button appearance is shown in the figure below. If necessary it can be re-sized and re-positioned
with mouse or using numerical values in the element details frame (Position and size).
IsOK sets the exit direction from the Form. Set it to True or False to exit the Form in one of the two
directions - OK and Cancel respectively. If only OK and Cancel exits are intended from the Form, then
2 buttons can be added with the corresponding settings.
ReturnInteger can be used if more than 2 exits are intended from the Form. The ReturnInteger
value from all the buttons defined in the Form is returned to a single variable, whose name is defined in
the main Form interface:
For example, if 4 buttons are created in the Form, then it is possible to set their ReturnInteger
values ranging from 1 to 4 and return them to a user-defined variable. This variable can then be used
in a Switch or a series of If...Then elements to define the direction in which the workflow will continue.
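The subsequent routing on the returned integer can be sketched in Python as follows (the button count, route names and element names are hypothetical):

```python
def next_element(return_integer):
    # Mirror of a Switch (or a series of If...Then elements) acting on
    # the variable assigned by the Form's buttons.
    routes = {1: "RunCase", 2: "EditInputs", 3: "ViewResults", 4: "Exit"}
    return routes.get(return_integer, "DefaultElement")

assert next_element(2) == "EditInputs"
```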
2.8.1.8.3 Label
Appearance:
The Label appearance is shown in the figure below. If necessary it can be re-sized and re-positioned
with mouse or using numerical values in the element details frame (Position and size).
Selection NA
Input Input text or variable name that will be displayed by
the Label. If variable is input then its value will be
displayed.
Writing "Text " + var1, where var1 is the number 999, will
display Text 999.
Output NA
Selection NA
Input Input text or variable name that will be displayed by
the TextBox by default (initially when form is
displayed).
Output Name of the variable to which user input will be
assigned.
Selection NA
Input The default state of the CheckBox. 1=On, 0=Off.
Output Name of the variable to which CheckBox state (1 or
0) is returned.
Selection NA
Input Text or a variable name that will be displayed at the
title of a GroupBox.
Output NA
2.8.1.8.9 Grid
Appearance:
The Grid appearance is shown in the figure below. If necessary it can be re-sized and re-positioned
with mouse or using numerical values in the element details frame (Position and size).
The initially displayed frame only outlines the area in the form that will be allocated to the Grid. Scroll
bars will be automatically added if the Grid cannot fit entirely in the allocated frame.
Selection NA
Input Default array or DataSet that will be displayed in the
Grid table.
Output Array or DataSet to which user input values will be
passed. Columns in the DataSet should be
predefined.
2.8.1.8.10 Chart
Appearance:
The Chart appearance is shown in the figure below. If necessary it can be re-sized and re-positioned
with mouse or using numerical values in the element details frame (Position and size).
Selection NA
Input DataSet from which data will be plotted.
Output NA
2.8.1.9 Iterator
The Iterator element allows looping through data objects of the same type. It can be added to a
workflow using the following button:
The workflow below shows how an Iterator element looks within a workflow:
The Iterator is similar to the Loop element; the only difference is that the Loop is designed to vary an
integer counter variable, which can then be used as an index in arrays or other elements. The Iterator
allows looping through similar data objects and performing calculations with them, which is sometimes
required in workflows.
For example, it may be required to loop through a set of Tight Reservoir objects, performing a history
simulation for each one of them and updating the IPR data in the GAP model. This is where the
Iterator is used.
The interface of the Iterator requires only 2 inputs: the type of the data object and the counter variable
name. The counter variable name can then be used in other blocks of a workflow to perform routine
tasks or read parameters.
In the above example the type of the object is set as [PxTightOil]. Therefore, when addressing the
"tightReservoirs" variable (the name is arbitrary), a full list of parameters and operations for TightReservoir
objects becomes available.
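The Iterator behaviour can be sketched in Python as a loop over all objects of one type; the TightReservoir class, its attributes and its method below are hypothetical stand-ins for the actual data object:

```python
class TightReservoir:
    """Hypothetical stand-in for a [PxTightOil] data object."""
    def __init__(self, name):
        self.name = name
        self.history_matched = False

    def run_history_simulation(self):
        self.history_matched = True  # placeholder for the real calculation

# Objects of the selected type; the counter variable ("tightReservoirs"
# in the example above, the name is arbitrary) points at each in turn.
objects = [TightReservoir("TR1"), TightReservoir("TR2"), TightReservoir("TR3")]

for tight_reservoir in objects:       # what the Iterator element does
    tight_reservoir.run_history_simulation()
    # ...here the IPR data in the GAP model would be updated...
```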
2.8.1.10 Terminators
Terminators represent the end element of visual workflow logic. Depending on the context, they
may just represent the endpoint of the logic and have no further meaning. In other cases, the
different endpoints may raise different actions in the application that called the workflow.
In Resolve, some workflows have a fixed set of termination points which cannot be added to. In
other cases, as detailed below, one or more terminators can be added to the worksheet.
Toolbar button:
Sub flowsheets can have as many terminators as required, and these can map to higher level
connections as described above.
Double-clicking on a terminator allows the label to be changed to something which makes
sense in the context in which it is used, e.g. 'GOR calculation successful'.
The terminator in a workflow application object indicates to the calling logic to which element
execution should switch when the workflow is completed. As an example, double-clicking on the
element brings up the screen below:
The drop down list is populated with a list of objects from the rest of the Resolve system. If the
workflow logic ends at this termination point, execution will switch to the object selected from the
list. If the label supplied is not found in the system, the execution will continue with no switch of
execution.
2.8.1.11 Subroutines
The Subroutine element allows a sub-segment of a workflow that performs a specific task to be
encapsulated into a single block. It can be added to a workflow using the following button:
The workflow below shows how a Subroutine element looks within a workflow:
The Subroutine is based on the Case Manager data object. This object is analogous to sub-procedures
in programming. The element has a specific set of input and output variables and an internal sub-
workflow (logic) that performs calculations and returns results.
If there is a specific fixed task that is routinely performed as a part of an analysis, it can be put into a
Subroutine. The Subroutine can in turn be exported into a single file and then imported into
various workflows.
The main interface of the Subroutine shows the mapping between external and internal variables. These
are the variables that are passed to the internal workflow. The workflow can be imported using the
Import button at the bottom of the window. Once imported, the list of "Internal model variables" will be
updated and it will be possible to map them.
The shown Subroutine has 2 internal variables, named "input1" and "output1":
It is possible to view the workflow, or to build it if it has not yet been created, using "View
workflow/model". This button will display the underlying Case Manager interface, which is split
between two tabs - Variables and Workflows. The Variables tab is used to define the internal
variables, which are later used in the workflow.
The Workflows tab shows a standard Workflow Editor window, which allows building sub-workflow:
Please refer to Case Manager section for more details regarding the underlying interface used in
Subroutine block.
Once the workflow is built it will be possible to return to the main Subroutine window and map internal
variables to external ones.
It is also possible to "Export" the subroutine into an independent file with the *.rdo extension. This file
can later be imported into another workflow that requires the same functionality.
2.8.2 Functions
2.8.2.1 Functions Toolbar
When a workflow is opened, the ribbon at the top of the screen contains buttons which can be used to
build and troubleshoot the workflow:
Add Workflow Author This option allows details of the workflow and its
or Description author to be inserted and saved for users to see.
Workflow Element The workflow element palette is used to add the new
Palette elements to the workflow. More information on the
different elements which are available can be found in
the Workflow Elements Section of this User Guide.
Link/Connector Used to connect logical elements together within a
workflow. To delete a connection between two
elements simply redraw the link in the same direction
for a second time. Further details in the Connections
section.
Return Up A Level When within a Sub-Workflow, this is used to return to
the workflow one level up. When in the 'top level'
workflow, this button cannot be used.
Magnify This will zoom in on the workflow. Further details in the
Zoom/Unzoom/Reset section.
Unmagnify This will zoom out of the workflow. Further details in the
Zoom/Unzoom/Reset section.
Zoom to Fit Zoom out to a view which shows all of the elements in
the current level. Further details in the Zoom/Unzoom/
Reset section.
Add Break-Point Break-points are used to stop the workflow at a certain
point while running and can be used for
troubleshooting purposes. To add a break-point select
this icon and then select the logic element which the
run is to stop at. When a break-point has been added
the name of the element will turn red with "*" at the
start and end. To remove an individual breakpoint
simply select the Add Break-Point button and click on
the element for a second time. Further details in the
Breakpoints section.
Remove All Break- This will remove all of the break-points from the
Points current level. Any break-points within sub-
flowsheets will remain. Further details in the
Breakpoints section.
Delete This will permanently delete any element.
Move Used to drag and drop elements around the workflow
screen.
Select Used to select elements. Selected elements will have
an orange line drawn around them to show they have
been selected.
Un-select All Used to un-select all currently selected elements
Convert to Sub- Will move all selected elements from the current layer
flowsheet into a new sub-flowsheet. The connections to and
from the sub-flowsheet will need to be entered
manually.
Cut Will 'cut' all selected elements to the clipboard.
Copy Will 'copy' all selected elements to the clipboard.
Paste Will 'paste' all elements from the clipboard into the
workflow.
Validate This button will run a validation on the workflow to see
if any variables remain undefined etc.
Edit Local User This screen is used to define local variables which can
Defined Variables be used within the workflow. Further details in the
Creating variables section.
Add/Remove Grid Used to visualise or remove the gridding system from
the background of the workflow.
Initialise Workflow Used to initialise the variables when testing the
workflow. Further details in the Executing workflows
section.
Run Use to run the workflow from the Start element of the
current flowsheet. Selecting run will run the workflow
until the end of the workflow or when a 'break-point' is
reached. Further details in the Executing workflows
section.
Proceed One Step This will take a step through a single element of the
workflow and then stop. If this is done on a sub-
flowsheet then the entire sub-flowsheet will be
performed. Further details in the Executing workflows
section.
Step Into Sub- When on a sub-flowsheet, this will 'step into' the sub-
Workflow sheet and allow the workflow underneath to be
stepped through one element at a time. Further details
in the Executing workflows section.
Stop This will terminate the current run of the workflow.
2.8.2.2 Connections
Connections are made between workflow elements by clicking on the 'connection' toolbar button
and then dragging between items in the worksheet.
Resolve will perform a validation before a connection is made, and will not allow invalid
connections. For example, only one connection is allowed from a variable assignment element,
two connections are allowed from a decision element, and any number of connections are
allowed from a multiple decision element.
To remove a connection, drag over the existing connection. It will then be removed.
Toolbar button:
2.8.2.3 Zoom/Unzoom/Reset
Toolbar buttons:
The left hand button performs a zoom into an area of interest on the worksheet. A single click
will perform a fixed zoom centred on the point where the click was made. It is also possible to
make an area zoom by dragging out a rectangle over the required area.
The middle button performs an unzoom. A single click will zoom out by a fixed amount, centred
on the point where the click was made.
The right button will zoom out to view the entire worksheet in the window.
2.8.2.4 Breakpoints
Toolbar buttons:
The left hand button allows a breakpoint to be set. If this is enabled, then a click on a workflow
element will set a breakpoint, which will be indicated by the label turning red, i.e.
Clicking on the element a second time will remove the breakpoint. The right hand button (above)
will remove all breakpoints from the worksheet.
When a breakpoint is triggered, the element will be displayed with a heavy border, and the
execution (as displayed in the Resolve calculation log) will be paused:
Go step by step through the workflow logic with the standalone execution buttons.
View variables and objects.
Remove the breakpoint/all breakpoints and continue with the run once satisfied with the
workflow logic.
When an error occurs in the workflow, execution will switch to an exception
handler block, which can be set by using the "Set a workflow exception point" functionality, to allow
execution to continue. As a result of the run, the variable 'Exception' will contain the reason for the
error.
In the example below, we used an assignment block (i.e. “Assignment”) which contains an error.
Nevertheless, despite the error in the workflow, the run has been completed with the error message
reported in the Exception variable (accessed from the Watch variable functionality), as shown below.
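The mechanism is analogous to a try/except block, sketched here in Python; the workflow itself is configured through the "Set a workflow exception point" functionality rather than through code, and the variable names are illustrative:

```python
exception = ""  # corresponds to the workflow 'Exception' variable

try:
    result = 1 / 0               # an assignment block containing an error
except Exception as err:
    exception = str(err)         # execution switches to the handler block
    result = None                # the run continues rather than aborting
```

After the handled error, the run completes and the reason for the failure is available in the exception variable.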
This section describes the various ways the system can be edited, aside from the addition of
items which has been discussed at length above.
Toolbar buttons:
Move. When this is enabled, elements can be dragged around the screen. Holding <shift>
down while dragging has the same effect.
Select. Selects an element or group of elements (if dragged over an area). A selected
element will be highlighted appropriately.
Unselect.
Variables that are manipulated in visual workflows come from two sources:
1. Passed in from Resolve. These can be variables that have been imported from one of the
client applications (e.g. GAP), or represent a quantity in Resolve itself, or have been defined as
'user defined' variables. Data objects are also effectively variables that can be operated on from
the workflow.
2. New variables can be defined which are 'local' to the workflow in question. These are
described in this section.
Toolbar button:
Working from the top to the bottom, the information required is:
In addition, it is possible to define arrays of primitive types by checking the 'array' box and
entering the number of elements in the array (single dimension only).
Variables are added or deleted by clicking on the 'add', 'remove', and 'remove all' buttons.
Workflows are executed as part of the Resolve calculation logic. It is also possible to execute
workflows 'standalone' from the toolbar, and to step through the logic prior to a run being made.
If the workflow relies on dynamically calculated data from the client applications then clearly the
results may be invalid, but it may still be a useful debugging tool.
Toolbar buttons:
Going from left to right, these buttons perform the following actions:
Initialise. This initialises all local variables as if it were the start of a full Resolve run.
Run. If standalone, this will run the workflow to the end or to the next set breakpoint. If this is
invoked from a Resolve run (e.g. after a breakpoint has been triggered) then it will resume the
run or at least run to the next breakpoint.
One step. This steps to the next element in the workflow.
Enter sub-flowsheet. If the focus is currently on a sub-flowsheet element, then this will step into
that flowsheet and the new flowsheet will be displayed in the window.
Stop. If standalone, this simply stops execution. If this is invoked from a Resolve run (e.g. after
a breakpoint has been triggered) then it does not stop the Resolve run, but returns control to
the Resolve calculation logic (i.e. it breaks from the workflow). To then stop the Resolve run,
the Resolve stop button must be pressed.
When debugging workflows, it is often useful to be able to view variable values, and how they
change, as the workflow elements are stepped over. This can be achieved by clicking on the
'view variables' button of the toolbar.
Toolbar button:
The variables are selected from the list on the left. Double-click or click the right arrow to move
the variable to the right hand side of the screen, where the value will be displayed if it is a simple
type variable. The workflow elements which refer to this variable are displayed in the bottom
pane of this screen.
If the variable is a complex type (see screenshot above) then the value will not be displayed but
the type will be. Double-clicking on this entry will then display a pop-up window with the
properties of the object, e.g.
2.8.2.10 Import/Export
It is possible to import entire flowsheets from a binary source file, and also to export an existing
flowsheet to file.
Toolbar buttons:
The standard file extension is *.vwk. On importing a worksheet, the workflow manager checks
for variables that were used in the target worksheet which will need to be defined in the
imported worksheet. It also checks that termination points are consistent with the context in
which the workflow is working. For example, a workflow from the PostSolve section, with
multiple exit points, is not consistent with a workflow executed at the start of the run which has a
single exit point. In this case a warning will be displayed; it does not preclude its use in the new
context, it just means that some of the 'built in' behaviour of the imported worksheet will
potentially not make sense in the new context.
Applicability:
‘WellStability – Gas’ is only applicable to naturally flowing gas producers. The initial
bracketing interval is 0.1 to 100 MMscf/day.
‘WellStability – Liquid’ is only applicable to naturally flowing oil producers. The initial
bracketing interval is 0.1 to 40000 STB/day.
Workflow requirements:
The following modules and data objects are required to run the WellStability workflow.
1. OpenServer data object
2. GAP production system
3. Well Stability workflow module
The well stability workflow can be found from the registered workflows icon.
Workflow Inputs:
- Top node pressure
- WCT
- GOR
- Well Model Lift Curves
Workflow Outputs:
- Calculated minimum rate of stable production
Workflow principle:
The minimum point of stability on the VLP can be found by observing a change in gradient. As
we move from the gravity dominated to the friction dominated region of the VLP curve, the
gradient of the curve will switch from negative to positive.
In the well stability workflow, the minimum point of stability is identified iteratively by finding two
rate points, sufficiently close together, which capture this change in gradient, i.e. by bracketing
the minimum between the two rate points.
Workflow description:
Upon opening the workflow, the internal variable ‘os’ will need to be mapped to the OpenServer
data object label – in this case ‘OS’.
Each step in the workflow is described following the numbering in the figure below:
1. Initialise the calculation with the minimum and maximum rates. This interval is split into 4
intervals (5 points), and the derivative of the VLP will be calculated at those 5 points.
2. Loop through each of the calculation points of the TPDCalculator.
3. For each calculation point, set the well name and operating conditions.
4. Increment the iteration count.
5. Loop through each of the calculation points of the TPDCalculator.
6. Set the calculation rate of a given calculation point. The 5 rates use a linear distribution
between the current minimum and maximum of the search interval.
7. Perform the TPD calculation.
8. Loop through the calculation points.
9. Look at the VLP derivatives of two consecutive points:
a. If one is negative and one is positive, then the point we are looking for is located
between these two points. Go to 10.
b. If not, move on to the next two consecutive points.
10. Set the new minimum and maximum of the search interval.
11. If the range of the new search interval is below a given tolerance, the algorithm exits.
12. Output the calculated rate of stability to the Log tab.
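The iterative bracketing logic described above can be sketched as follows. This is an illustrative Python sketch, not the workflow itself: `vlp_pressure` is a hypothetical stand-in for the TPD calculation (a callable mapping rate to flowing pressure on the VLP curve), and the tolerance and interval choices simply mirror the steps above.

```python
def find_min_stable_rate(vlp_pressure, q_min, q_max, tol=1.0, max_iter=50):
    """Bracket the rate at which the VLP gradient switches sign.

    vlp_pressure is a hypothetical stand-in for the TPD calculation.
    Returns the minimum rate of stable production, or None if no
    sign change is found (terminator C in the workflow).
    """
    for _ in range(max_iter):
        # Steps 1/6: 5 rate points linearly distributed over the interval.
        rates = [q_min + i * (q_max - q_min) / 4.0 for i in range(5)]
        # Step 7: perform the (stand-in) TPD calculation at each point.
        pressures = [vlp_pressure(q) for q in rates]
        # Steps 8-9: finite-difference gradients between consecutive points.
        grads = [(pressures[i + 1] - pressures[i]) / (rates[i + 1] - rates[i])
                 for i in range(4)]
        bracket = None
        for i in range(3):
            if grads[i] < 0.0 < grads[i + 1]:   # negative-to-positive switch
                bracket = (rates[i], rates[i + 2])
                break
        if bracket is None:
            return None                          # terminator C
        q_min, q_max = bracket                   # step 10: shrink the interval
        if q_max - q_min < tol:                  # step 11: converged (terminator B)
            return 0.5 * (q_min + q_max)
    return None
```

Each pass halves the search interval while keeping the gradient sign change inside it, which is why the algorithm converges in a bounded number of iterations.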
Terminators:
B: The algorithm has successfully converged
C: The algorithm has not found two consecutive points with derivatives of opposite signs.
Calculation messages contain the status of the workflow calculation (1 = well masked / no flow,
-1 = no minimum, 0 = calculation complete) and the difference between the current operating
rate and the minimum rate of stable production (Delta Q = …). This is illustrated below.
Within the RESOLVE logs, in the ‘Events’ tab, the number of iterations and the minimum rate of
stable production is reported for each timestep.
Workflow Requirements
In order to run the workflow, it is required to add the following to the model:
1. OpenServer Data Object
2. VLP Generation Workflow module
3. PxCluster should be setup and running. For more information, please refer to the Setting up
PxCluster section of this manual.
It is not necessary to add the GAP model to RESOLVE as this is done automatically by the
workflow.
Note: the workflow will not run if the overall RESOLVE model is run as it needs to create an
instance of GAP, which cannot be done at runtime.
Workflow steps
1. Selecting the GAP file
When the workflow is run, a first window shows up allowing the user to select the desired GAP
file. This is done using the 'Browse' button.
3. Error reporting
Once the VLP generation is complete, the following screen reports any error of the
CaseManager. If there is an error for a given well, its VLP will not be imported into GAP.
If the Cancel button is clicked, the VLPs will not be imported into GAP. The generated VLPs are
nevertheless kept as *.tpd files.
4. Termination options
After the VLPs have been imported, the workflow is complete and the user has a choice of
saving the GAP file or not, and closing the GAP model or leaving it open.
The separation conditions (also named path to surface) used in the models will be termed the
'reference separation conditions'.
In the field, the separation conditions may be different to the reference conditions, or they may
change with time. This is true of the main production process but also of the separation
conditions during well testing.
Field measurements such as well test rates are used to calibrate the models: rates are often
measured and reported at standard conditions and are therefore dependent on the field
separation conditions.
However when the field separation conditions are different to the reference conditions, using the
raw field data in the models will yield incorrect mass rates and incorrect calculations such as
pressure drops. This will lead to erroneous matching of the models and ultimately lower the
predictive quality of the models.
The solution is to correct the rates that are measured in the field, and calculate the rates that
would have been measured if the fluid had gone through the reference separation conditions.
This correction requires an EOS description of the fluid and is termed 'PVT transformation'.
Objective
The objective of the PVT transformation workflow is to correct rates measured in the field to the
reference separation conditions, i.e. to calculate the rates that would have been measured under
the reference conditions. These rates can correspond to field measured totals or to well test rates.
The corrected rates can then be used in subsequent modelling activities such as well or network
matching, production allocation, reservoir modelling and history matching.
Workflow Requirements
To run this workflow, the total number of licenses required is:
- 1 RESOLVE license
Workflow inputs
When the workflow is run, the following form appears:
1. EOS tab
The EOS tab allows the user to specify the EOS model of the fluid, the field and the reference
paths to surface. The drop-down menus show a list of the Path To Surface objects defined in the
RESOLVE model.
When a change is made to the input data in this tab, select 'Save Selected Paths to Surface'.
2. Anchor Point tab
The Anchor Point tab allows the user to select the trusted measurement that will be used to
calculate the total mass rate. This measurement can be:
An oil rate or a gas rate
At a separator outlet or at a path to surface joint
A volumetric rate or a mass rate
Note that volumetric gas rates are assumed to be at standard conditions, while volumetric liquid
rates are assumed to be at the selected separator's pressure and temperature.
When a change is made to the input data in this tab, select 'Save Anchor Point Definition'.
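For instance (an illustrative sketch only, not part of the workflow), a gas volumetric rate reported at standard conditions can be converted to a mass rate from the gas gravity alone, assuming ideal-gas behaviour at standard conditions (a reasonable approximation at that pressure); the constants and function name below are assumptions for the illustration.

```python
R = 8.314          # J/(mol K)
P_SC = 101325.0    # Pa   (14.696 psia)
T_SC = 288.71      # K    (60 degF)
M_AIR = 28.97e-3   # kg/mol, molar mass of air

def gas_mass_rate(q_sc_m3_per_d, gas_gravity):
    """Mass rate (kg/d) from a standard-condition volumetric gas rate (m3/d).

    gas_gravity is the gas specific gravity (air = 1). Ideal-gas behaviour
    is assumed at standard conditions.
    """
    molar_volume = R * T_SC / P_SC                # m3/mol at standard conditions
    rho_sc = gas_gravity * M_AIR / molar_volume   # gas density, kg/m3
    return q_sc_m3_per_d * rho_sc
```

This is why a volumetric anchor point fixes the total mass rate: once the conditions of measurement are known, the volumetric rate maps to a unique mass rate.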
3. Recombination tab
The Recombination tab is used to specify options related to the estimation of the produced
composition from oil and gas recombination. Three options are available:
From Composition: no recombination is performed and the supplied EOS is used as is.
User entered Total GOR: recombination is performed to a target Total GOR, which is the ratio
of total accumulated gas to stock tank oil
User entered In-Situ GOR: recombination is performed to an in-situ GOR. The gas and oil
measurements may be taken at any point in the system, and this should be supplied by the
user.
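As a simplified illustration of the recombination idea, the sketch below blends oil and gas molar compositions into a total composition. It assumes, hypothetically, that a molar gas/oil ratio has already been derived from the target GOR; in the actual workflow this conversion is handled through the EOS.

```python
def recombine(oil_comp, gas_comp, gas_oil_molar_ratio):
    """Recombine oil and gas molar compositions into a total composition.

    oil_comp, gas_comp: dicts of component -> mole fraction (each sums to 1).
    gas_oil_molar_ratio: moles of gas per mole of oil (hypothetical input;
    the workflow derives this from the target GOR via the EOS).
    """
    f_gas = gas_oil_molar_ratio / (1.0 + gas_oil_molar_ratio)  # gas mole fraction
    comps = set(oil_comp) | set(gas_comp)
    # Mole-fraction-weighted blend of the two phase compositions.
    return {c: (1.0 - f_gas) * oil_comp.get(c, 0.0) + f_gas * gas_comp.get(c, 0.0)
            for c in comps}
```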
The well test rates to be corrected should be entered in this field. The columns of this table
depend on the selected inputs for the field path to surface, anchor point and recombination
options.
Note that the separator pressure and temperature are inputs and are allowed to change between
well tests.
Workflow results
Detailed results at the separator level are available from the 'Detailed Results' button.
This is an essential capability for advanced integrated models as it keeps the fluid PVT
description consistent from one application to another while respecting the ideal PVT
requirements of each module used.
RESOLVE can use the lumping / delumping procedure from GAP itself if a GAP model is part of
the RESOLVE project. If there is no GAP model within the RESOLVE project, then it will be
possible to use the same lumping / delumping procedure that is available in GAP directly within
RESOLVE itself.
Further information on the different options available for the lumping / delumping procedure can
be found in the "Lumping / Delumping overview" section as well as in the "Example Section 2:
Connection to reservoir simulation tools", where an example is provided for REVEAL, Eclipse,
tNavigator, GEM and Nexus.
One of the most important elements to consider when an integrated model is setup is the
consistency of the PVT models used in the different connected applications.
Effectively, the fluid flowing through the system from the reservoir to the plant is the same.
However, when considering the modelling aspects, each module will have its own PVT
requirements: for instance, reservoir models essentially focus on volumetric aspects, so a black
oil PVT model will be suitable, whereas process models focus on thermodynamic properties
and will require a full compositional description of the fluid.
The main challenge is to be able to make these different PVT descriptions consistent
within the full-field model.
The techniques used to do so are described below, along with different examples.
For an integrated model linking for instance reservoir, surface network and process simulator
models, it will be important to consider the following elements when defining the way the fluid
PVT is going to be described:
Consistency: The PVT description of one fluid needs to be consistent from one model to the
other: the fluid description in the reservoir simulator has to be the same as the fluid description
in the well models, for instance. This will ensure the PVT properties used to calculate the VLP
curves for the well are identical to the PVT properties used to calculate pressure drops in the
reservoir / surface network.
Mixing: If several fluids are to be mixed, the PVT description of these fluids, specifically when
using a fully compositional description, should allow them to be mixed. Recommendations on
how PVT characterisations can be handled when considering a full-field model can be found in
the 'Recommendation for PVT characterisations' section.
Accuracy: In addition to these two elements, the PVT models used in each application should
provide the user with the best compromise between PVT accuracy, model requirements with
regards to PVT, and runtime.
Effectively, if a reservoir / surface / process full-field model is set up, each of these three
applications will have different requirements when it comes to PVT modelling and the use it
makes of it.
The text below illustrates the needs of each type of model with regards to PVT.
RESERVOIR LEVEL
PVT model requirement: At the reservoir level, the PVT parameters having the most impact are
those describing fluid expansion: oil, gas and water FVF. For this reason, the main objective of
a reservoir PVT model is to provide an accurate description of these parameters at different
pressures and temperatures.
Possible types of PVT description: Based on the elements above, either a black oil or a fully
compositional model can be used in the reservoir model.
Limitations: If a black oil model is used, then the fluid compositions within the reservoir are not
monitored.
WELL / SURFACE NETWORK LEVEL
PVT model requirement: At the well / surface network level, the PVT parameters having the
most impact are the fluid densities, due to their large impact on pressure losses along the
wellbores and pipelines. For this reason, the main objective of a well / surface PVT model is to
provide an accurate description of these parameters at different pressures and temperatures.
Possible types of PVT description: Based on the elements above, either a black oil or a fully
compositional model can be used in the well / surface model. GAP also allows a third option,
compositional tracking, where all the calculations are based on the black oil PVT model, but
the evolution of the fluid composition is also calculated along the surface network. This
combines the best of both worlds: the speed and accuracy of the black oil model and the fluid
composition from the compositional model.
Limitations: There are no limitations as such; however, if a compositional model is used, either
a "reservoir" type description or a more detailed description of the composition can be used,
as the impact of the number of components on the runtime is less important than in a reservoir
model.
PROCESS LEVEL
PVT model requirement: At the process level, a detailed description of the fluid composition at
every point of the system is required.
Possible types of PVT description: A fully compositional model is used in the process model.
Limitations: The description of the fluid composition has to be detailed for most process
models to produce physically meaningful results.
The diagram below summarises the different possible PVT model combinations that can be
used when setting-up a reservoir / surface network / process full-field model.
In order to make sure the PVT is consistent, different techniques can be used to connect these
different PVT models.
Data Passing: This exchanges PVT data between two applications using the same type of PVT
description: for instance, passing PVT data from a reservoir model using a black oil PVT
description to a surface network model using a black oil PVT description.
Black Oil Delumping: This allows the exchange of PVT data between two applications, one
using a black oil and one using a compositional description (whether the compositional model
uses a lumped or detailed composition is irrelevant). For instance, this could be used to pass
PVT data from a surface network model using a black oil PVT description to a reservoir model
using a compositional description.
Lumping / Delumping: This allows the exchange of PVT data between two applications using
different types of compositional description: for instance, this could be used to pass PVT data
from a reservoir model using a lumped compositional description to a surface network model
using a detailed compositional description. Insights on the technical aspects of this technique
can be found in the "Lumping / Delumping Technical Overview" section.
It is important to note that all these techniques will ensure that the PVT properties of the fluid are
the same, whatever PVT description is used: there is NO loss of accuracy when the data
exchange is performed.
In a single model, these techniques are very often combined in order to maintain PVT
consistency throughout the model.
For instance, a RESOLVE model using a reservoir model with a black oil PVT description, a
surface network model with a black oil PVT description and a process model using a detailed
compositional description will use both the data passing and the black oil delumping
techniques, as illustrated below.
Another example could be a RESOLVE model using a reservoir model with a lumped
compositional description, linked to a surface network model using a lumped compositional
description and a process model using a detailed compositional description. The model will
then use both the data passing and the lumping / delumping techniques to ensure PVT
consistency across the applications.
All the possible combinations and the different techniques used to ensure PVT consistency are
described in the diagram below:
When setting up a full-field RESOLVE model, the following considerations have to be taken into
account when the PVT models to be used are being defined.
All pure component isomers, e.g. nC4 and iC4, should be kept separate to
allow identification and separation in the surface simulator.
More pseudo components are used for all fluids. This gives more flexibility when
characterising the equation of state for the fluid. The surface simulation may also
require more heavy ends to model the separation processes effectively.
This has to be considered carefully for the following reasons: a fully compositional
reservoir model will require a relatively small number of pseudo-components to be
specified in order to obtain the correct PVT variables for its calculations; the same
applies to a surface network model. However, a process model will require a more
detailed description of the components present in the fluid to perform accurate
calculations.
It is therefore possible to see that the format of the EOS model used for one specific fluid
will need to be different for different applications.
One could use a detailed description of the fluid (i.e. suitable for the process model) in
the entire system; however, the larger the number of components, the slower the run will
be.
This is the reason why the lumping / delumping process has been developed: to make it
possible, for instance, to use a "simple" EOS model for the reservoir / surface network
model and make this description more detailed when passing the fluid composition from
the surface network to the process model.
This delumping process is able to keep the fluid properties constant between the two
fluid descriptions.
Since some of the process simulators do not at this time support Volume Shift, it is
common practice within surface facility modelling to use the Costald method of oil
density estimation as an alternative. This is available within PVTp as a direct
comparison to the calculated Equation of State value.
The lack of a blending function within the surface simulator may require the build
up of a composition which contains all the pseudos within the gathering network.
The idea behind compositional lumping / de-lumping is to have a methodology that is able to
pass from an extended composition (referred to as de-lumped or "full" in the following text) to a
reduced one (referred to as lumped or grouped in the following text) and vice-versa
consistently, that is to say, preserving the quality of the characterisation.
This means that at any point in time the full and the lumped compositions will be equivalent and
representative of the real fluid. In general when creating two characterisations of the same fluid,
by definition they will not give the same answers.
However, lumping / de-lumping has to make sure that the important properties are consistent,
so that calculation speed and accuracy are both satisfactory.
In IPM this is achieved by means of the so-called "Lumping Rule", which is a piece of logic that
defines the mechanisms to pass from the full to the lumped composition.
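As an illustrative sketch, a lumping rule of this kind can be applied by summing mole fractions and mole-weighting the lump properties. The rule format and names below are assumptions for the illustration, not the PVTp .prp format; the point is that moles and mass are both preserved.

```python
def apply_lump_rule(full_comp, full_mw, rule):
    """Lump a full composition into a reduced one.

    full_comp: component -> mole fraction; full_mw: component -> molar mass.
    rule: lump name -> list of full components (e.g. {"N2C1": ["N2", "C1"]}),
    a hypothetical representation of a Lumping Rule.
    Mole fractions are summed, so total moles are preserved; the lump molar
    mass is the mole-fraction-weighted average, so mass is preserved too.
    """
    lumped_comp, lumped_mw = {}, {}
    for lump, members in rule.items():
        z = sum(full_comp[m] for m in members)
        lumped_comp[lump] = z
        lumped_mw[lump] = sum(full_comp[m] * full_mw[m] for m in members) / z
    return lumped_comp, lumped_mw
```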
The "Lumping Rule" is created at the stage of building the EOS model using Petroleum
Experts PVT package PVTp.
PVTp has all the facilities to create and quality check the paired full / lumped compositions and
to create the "Lumping Rule".
In RESOLVE it is possible to import a "Lumping Rule", which is then used to generate the
lumped (or the full) composition when desired. It is then possible to decide whether to run the
calculations with the full or with the lumped composition.
The lumping / delumping options available in RESOLVE are detailed in the "Lumping /
Delumping Options" section, and worked examples using the lumping / delumping capacities
can be found in the "Worked Examples" section.
If the lumping / delumping capabilities of RESOLVE are to be used, this section enables the
user to specify the fully compositional PVT descriptions available for each model.
This section can be accessed through Options | Lumping / Delumping | Module Fluid
Characterisations and leads to the following screen being displayed:
For each pair of connected modules included in the RESOLVE model, a specific tab will be
created. This allows multiple lumping rules to be defined, one for each connection. For
instance, it makes it possible to integrate two reservoir simulators, with two different
compositions and lumping rules, to GAP.
Setup: This section allows the import of the fully compositional PVT description of the fluid as
defined in the module considered. The fluid PVT description has to be imported in the PVTp
.prp format. Once the PVT description has been imported, the Setup screen will be as follows:
In this particular case, the .prp file includes two compositions for the
reservoir fluid: a lumped and a full composition.
Import: Imports the PVT description directly from the module considered.
Clear: Clears the current PVT description.
Component name mapping for lumping / delumping: If the lump rule for a fluid includes lumps
defined by component name, then it may be necessary to identify fluid names in the module in
question. For example, a lump rule may specify that N2 and C1 form a lump called 'N2C1'. A
Hysys module (for example) will refer to these components as 'Nitrogen' and 'Methane', and so
prior to performing the lump procedure the long (Hysys) names will have to be substituted by
the short names.
A description of the options available to define the EOS model can be found below:
General: These options define the type of EOS used in the model.
EOS Model: Peng Robinson (PR) or Soave Redlich Kwong (SRK).
Optimisation Mode: None, Low or Medium. Over the past few years, Petroleum Experts PVT
experts have been working on ways to speed up the calculation of properties from an EOS
model. Speed is one of the main issues with fully compositional models, and the options in this
field define the speed of the calculations. The objective of this option is to speed up the
calculations without penalising the accuracy of the results. The Medium optimisation mode
gives the fastest EOS calculation possible when using the IPM suite.
Optimise Repeat Calculations: When repetitive calculations are performed, this option can be
selected to reduce the number of compositional calculations performed (an increase in speed
of up to 40 times). This option is particularly useful when running with Black Oil Compositional
Lumping / Delumping: the determination of the equivalent black oil model is done only when
the composition changes. If the composition does not change, the black oil properties remain
the same, and it is therefore not necessary to use the EOS to re-calculate them.
Reference Data for Standard Conditions: This section defines the temperature and pressure at
which the results are referenced.
Volume Shift: This section allows the use of volume shift in the EOS model to be enabled or
disabled, for both the full and lumped compositions.
Lumping: This option activates the fully compositional Lumping / Delumping capabilities.
the fluids each have the same number of components, but the properties of these fluids are
different. This requires that each of the components be treated separately in the combined
fluid. In this case, "Use Number of Components as Key" must be switched to "NO".
If "Use Number of Components as Key" is set to "YES", the blended stream will have only
three components. If this option is set to "NO", then the blended stream will have nine
components.
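A minimal sketch of this distinction, under the assumption that keying by component count matches components by position while the alternative matches them by name (an illustrative reading of the option, not the exact internal mechanism):

```python
def blend_streams(a, b, moles_a, moles_b, use_count_as_key):
    """Blend two streams given as lists of (component name, mole fraction).

    If use_count_as_key is True and both fluids have the same number of
    components, components are matched by position, so two 3-component
    fluids blend into 3 components even when the component names differ.
    Otherwise components are matched by name, and distinct components
    stay separate in the combined fluid.
    """
    total = float(moles_a + moles_b)
    if use_count_as_key and len(a) == len(b):
        # Positional matching: result keeps the original component count.
        return [(name_a, (moles_a * za + moles_b * zb) / total)
                for (name_a, za), (_, zb) in zip(a, b)]
    # Name matching: union of components, zero fraction where absent.
    names = list(dict.fromkeys([n for n, _ in a] + [n for n, _ in b]))
    za, zb = dict(a), dict(b)
    return [(n, (moles_a * za.get(n, 0.0) + moles_b * zb.get(n, 0.0)) / total)
            for n in names]
```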
Phase Check: As a quick measure of phase, the average molecular weight of a fluid is
calculated and compared to a test value. If the molecular weight is less than the test value, the
fluid is assumed to be a gas. The test value can be entered on a file by file basis using the edit
box provided.
Path to Surface and Recycle: This option dictates the path taken by the oil and/or gas to
surface. It therefore sets the calculation method for all path-sensitive variables such as:
GOR
CGR
Oil FVF
Oil API
Separator Gas Gravity
Accumulated Gas Composition
Stock tank oil API
The user can select either a straight flash to stock tank or to send the fluid through a train of up
to 10 separators. The final option is to replace the separator train with a set of overall or stage
K values.
The 'More Lumping' button gives access to the Lump Option Dialogue, where Lumping Rules
and associated settings can be edited.
Lump Rules: This section allows the user to select the Master Rule used throughout the
RESOLVE model.
Lumping Rules: This contains a table with the list of Lumping Rules imported into the
RESOLVE model. By clicking 'Select', the Lumping Rule may be imported/exported, viewed or
edited.
In the Lumping Rules Summary Dialogue section the lumps are described (for example, in the
figure above the first lump is N2C1, which is given by the sum of N2 and C1), giving the
correspondence between the lumped and the full composition.
At the top right of the screen a BIC Multiplier is reported. This is a multiplier applied to the
binary interaction coefficients of the lumped composition, which is a methodology available to
make sure that the lumped composition reproduces the same saturation pressure as the full
composition.
The 'Setup' button allows the user to define the logic behind each lump, for example:
General: This section allows the user to select the options for Lumping / Delumping. Allow
Lumping and Mode are the same as reported in the main EOS Setup section: "Compositional
PVT Models Description".
DeLumping: This section allows the user to define the techniques used to delump a lumped
composition, as described below:
Hold C1 Group in DeLumping: This option makes sure that the C1 amount is preserved when
passing from the de-lumped to the lumped composition. This is useful for quality checking.
Lumping: This section allows the user to define the techniques used to lump a full composition.
Typically (although not exclusively), global optimisation problems arise from having the actual
main constraint of the system located in the process. The objective function is often also in the
process, while the field controls such as wellhead chokes or gas lift gas are located in the
surface network. This constitutes a global optimisation problem, as the objective function,
controls and constraints are located in different systems which are modelled in different
applications.
In addition to this, applications that might be used in a RESOLVE model may contain in-built
optimisation capabilities. For example, GAP has a powerful optimiser; process models
generally also have optimisation functions. RESOLVE allows the user to make use of these
capabilities and to build a multi-level optimisation solution. This is often preferable, as generic
optimisers do not necessarily know the particular responses of the variables they are
changing, while the optimisers of the underlying applications are specifically designed to cope
efficiently with the variables within their domain.
The way in which we use the available optimisation algorithms and other control functionality to
solve a global optimisation problem is what constitutes the formulation of the problem. This
formulation is the most important step towards solving a global optimisation problem efficiently
and robustly. Some of the key elements to consider when formulating a problem include:
identifying clearly the objective function, the constraints and the controls, and identifying whether
the problem can be split into sub-optimisation problems involving the capabilities of the
underlying applications.
2.10.1 Optimisation: Sequential Linear Programming
Sequential linear programming
The optimiser in RESOLVE uses a technique called sequential linear programming (SLP).
This is in contrast to the optimiser in GAP, which is a non-linear optimiser.
Generally, the GAP optimiser will be used to provide starting points to the RESOLVE optimiser,
and for this reason it is useful to have consistent constraints in GAP and RESOLVE.
When RESOLVE performs an optimisation, it starts by making a single "pass" through the
system which may include a GAP optimisation to provide the starting point. After this, linear
equations to describe the system response with respect to the objective function and constraint
equations will be derived by perturbing each of the control variables in turn. A linear optimiser
will then be run on the resulting set of linear equations to obtain a proposed new optimum. In this
optimisation, the controls are bound within appropriate limits to allow for the non-linearity of the
system.
The values for the controls so obtained will then be inserted back into the system. The process
will be repeated over several iterations until convergence between two iterations is achieved. At
each iteration of the optimiser the bounds for the control variables will be adjusted to reflect the
"linearity" of the system.
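The SLP loop described above can be sketched as follows. This is an illustrative reconstruction, not RESOLVE's actual implementation: the function names, the trust-bound adjustment rule and the tolerances are all assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def slp_maximise(f, g, x0, bound0=1.0, tol=1e-6, max_iter=50):
    """Sequential linear programming sketch: maximise f(x) subject to g(x) <= 0.
    At each iteration the objective and constraint are linearised by perturbing
    each control in turn, a linear optimiser is run within a trust bound, and
    the bound is adapted to reflect the linearity of the system."""
    x = np.asarray(x0, dtype=float)
    bound = np.full(x.size, bound0)
    eps = 1e-5
    for _ in range(max_iter):
        fx, gx = f(x), g(x)
        # Derive linear equations by perturbing each control variable in turn
        df = np.array([(f(x + eps * e) - fx) / eps for e in np.eye(x.size)])
        dg = np.array([(g(x + eps * e) - gx) / eps for e in np.eye(x.size)])
        # LP: maximise fx + df.dx subject to gx + dg.dx <= 0 and |dx| <= bound
        res = linprog(-df, A_ub=[dg], b_ub=[-gx],
                      bounds=list(zip(-bound, bound)), method="highs")
        if not res.success:
            break
        x_new = x + res.x
        # Compare predicted and actual objective ("linear error")
        predicted, actual = fx + df @ res.x, f(x_new)
        err = abs(actual - predicted) / (abs(actual) + 1e-12)
        # Shrink the trust bound when the system is non-linear, grow it otherwise
        bound = bound * (0.5 if err > 0.1 else 2.0)
        if np.max(np.abs(res.x)) < tol:
            return x_new
        x = x_new
    return x
```

For example, maximising -(x - 3)^2 subject to x <= 2 converges to the constrained optimum x = 2 in a handful of iterations.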
For more information on how to set up an optimisation using the SLP optimiser, please refer
to the Optimisation: Setup section and to Example 7.1, Example 7.2.1 (UniSim),
Example 7.2.2 (Hysys) and Example 7.2.3 (ProII).
2.10.1.1 Troubleshooting
The SLP optimiser is naturally best suited to linear or mildly non-linear systems, but its
sequential nature means that it can also handle systems that tend towards the more non-linear.
The optimiser is configurable through the parameters that can be changed on the "Optimisation
summary" screen. In general the defaults provided should be adequate and the optimiser should
adapt to different systems; nevertheless there may be cases where the performance can be
significantly improved by changing the settings.
One can observe the performance of the optimiser as the results of the successive iterations
are displayed on the optimiser results view when an optimisation is run.
The control results and the function results tabs yield the most information.
The following elements can provide some assistance when analysing the performance of the
optimiser:
Introduction to GIRO
GIRO is designed to solve optimisation problems which involve Integer Variables.
One of the most common problems of this type in Oil/Gas fields arises where Wells/Manifolds
routing opportunities exist, although the approach is not limited to them.
Most optimisation problems involving routing opportunities will also contain continuous
variables (choke opening, gas lift gas injection, etc.) as well as multiple constraints. This puts
them into the category of Mixed Integer Optimisation Problems. Considering that most
Production Systems have a non-linear response and that the imposed constraints make the
problems non-convex, we are usually dealing with Mixed Integer Non-Convex Non-Linear
Optimisation Problems, one of the most complex classes of optimisation problem to solve.
Due to their complexity, attempts to solve this type of problem (MINLP) directly (using one
single algorithm) have been unsuccessful, or at best extremely limited. This is no surprise
when considering that solving non-convex, non-linear optimisation problems (even without
the integer variables/routing) is already extremely challenging, in particular when dealing with
real physical systems not described by simple mathematical functions.
This is why MINLP problems are best approached by dividing the problem into two: a Non-Linear
optimiser deals with the continuous variables and constraints (for each case to be
evaluated), while an Integer Optimiser deals with the global search for the optimum
routing.
This is the approach followed in IPM, where GIRO can be combined with GAP’s powerful Non-
Linear Optimiser to tackle Mixed Integer non-convex non-linear Optimisation problems. GIRO
takes care of the global search (routing) while GAP solves every sub-problem generated by
each routing case requiring evaluation.
Having GIRO at the RESOLVE level also allows applications other than GAP to be used
underneath: multiple applications can make up the underlying model that GIRO evaluates.
Moreover, the RESOLVE SLP optimiser can also be used underneath GIRO, creating
three levels of optimisation.
GIRO is based on Genetic Algorithm (GA) principles, with the critical addition of an X-Over
Optimisation Algorithm (XOOA) to overcome the main limitation of traditional GAs when applied
to Oil/Gas field routing problems, as explained below.
Genetic Algorithms are a family of computational models based on evolution. These algorithms
are applied to combinatorial problems with a large search space (typically millions of possible
combinations).
Each potential solution to the problem can be encoded using a vector-like structure. For
example, if we are dealing with a problem involving 6 wells where each well can flow to any of
three different manifolds, then each potential solution could be encoded as [a,b,c,d,e,f]. Each
letter takes a number representing the manifold that well is flowing to.
The potential solution [1,1,2,2,3,1] would then represent wells 1, 2 and 6 flowing to the first
manifold, wells 3 and 4 flowing to the second manifold and well 5 flowing to the third manifold.
Various techniques exist to represent a potential solution using a chromosome-like structure but
conceptually they are all the same.
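The encoding described above can be illustrated with a short sketch. The well names are hypothetical; only the vector-like encoding itself comes from the text.

```python
# Hypothetical 6-well / 3-manifold routing problem: each potential solution
# is a vector with one manifold index per well.
wells = ["W1", "W2", "W3", "W4", "W5", "W6"]

def decode(solution):
    """Group wells by the manifold they are routed to.
    solution[i] is the manifold that wells[i] flows to."""
    routing = {}
    for well, manifold in zip(wells, solution):
        routing.setdefault(manifold, []).append(well)
    return routing

# decode([1, 1, 2, 2, 3, 1]) groups wells 1, 2 and 6 on manifold 1,
# wells 3 and 4 on manifold 2 and well 5 on manifold 3.
```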
Initial Population
Step 1 The algorithm starts by selecting an initial population (typically chosen
randomly).
Selection Based on their fitness, a selection of members is made. The fitter a member is,
the more chance it has of being selected. Selection methods like Roulette are
used to weight the selection based on fitness. The objective of this step is to
systematically eliminate the weaker members.
Step 5, Loop The algorithm starts again from step 2 using the new population until some
stopping criterion is met (e.g. only one member survives).
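The roulette selection mentioned in the steps above can be sketched as follows. This is an illustrative fitness-proportionate implementation, not GIRO's internal one.

```python
import random

def roulette_select(population, fitness, k):
    """Fitness-proportionate (roulette) selection: the fitter a member,
    the greater its chance of being picked for the next generation."""
    total = sum(fitness)
    weights = [f / total for f in fitness]
    return random.choices(population, weights=weights, k=k)
```

Members with zero fitness are never selected, which systematically eliminates the weaker members over successive generations.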
Main Limitations of Traditional GAs when applied to Oil/Gas Field Routing problems
The most important step in a Genetic Algorithm is the creation of the new population (by the
‘chromosomes’ X-over between pairs of selected members from the existing population).
During this step, a new population is created which determines essentially what potential
solutions we will examine next. If a Genetic Algorithm is to become an optimisation algorithm,
then the next population to be evaluated should, in essence, be better than the previous. Hence,
within each new population, we are expecting to find better solutions. Additionally, we want to
find a solution as close to the best as possible while evaluating as few potential solutions as
possible (efficiency).
For the above to be true (the new population to be better than the previous one), exchanging the
chromosomes of two good solutions should generate two new cases which are statistically better
than the original ones. This is true, at least in principle, in the theory of evolution
(something GAs are inspired by). If we consider the original members to be two good quality
specimens of a certain species, by crossing over their chromosomes we have a good chance
of generating two new specimens of good quality.
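The traditional X-over between two selected members can be written out as a single-point crossover sketch (illustrative only; the cut point would normally be chosen at random):

```python
def crossover(parent_a, parent_b, point):
    """Traditional single-point X-over: the chromosome tails of two selected
    members are swapped at the cut point, creating two new members."""
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])
```

Crossing [1,1,2,2,3,1] and [3,3,1,2,2,2] at position 3 yields [1,1,2,2,2,2] and [3,3,1,2,3,1]. Note that each gene is exchanged independently, which is exactly the implicit-independence assumption criticised in the following paragraphs.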
However, when this principle is applied to wells/manifolds routing problems (in particular subject
to constraints and other conditions as described before), the generation of two new good quality
members becomes as likely as if we were simply choosing two new members randomly. That is
because of the interaction between routings (represented by each chromosome).
For example, one well flowing to a certain manifold may only be a good option provided that
manifold is routed to a particular separator. If the manifold is routed to a different separator,
routing the well to that manifold could be the worst option.
This kind of interaction between routings/integer variables (especially when constraints exist) is
not taken into account in traditional X-Over methodologies, where the impact of each
chromosome on the overall fitness is implicitly considered independent. This is analogous to the
distinction between linear and non-linear systems: the interaction can be interpreted as a
non-linear routing. This makes traditional Genetic Algorithms of very limited use for this type
of problem.
To overcome the limitation of Traditional Genetic Algorithms (when applied to the kind of
problems we have described), GIRO incorporates a specially designed X-Over methodology
which takes into account the interaction between routings. This interaction is inferred by
‘learning’ from the already evaluated cases and applying that learning to drive the subsequent
X-Overs between members.
Several other modifications to the traditional approach have been made in order to facilitate this
‘learning’.
This obtained ‘knowledge’ is used to drive the X-over in a controlled manner. The objective is to
provide guidance without fully eliminating the randomness of the search for new cases. This is
analogous to any global search, where a balance has to be struck between exploring new
areas and focusing on the good local areas.
The application of this knowledge can be controlled by the two Optimisation Parameters.
Parameter 1 controls how much influence the learning from previously evaluated cases will have
on the selection of new cases. Essentially how much of this acquired knowledge we want to
apply.
Parameter 2 controls the increase of this weight as more cases are evaluated. The objective is
to apply more and more knowledge as more cases are evaluated (and hence we learn more
about how the system behaves).
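The combined effect of the two parameters can be pictured with a deliberately simplified schedule. The function below is purely hypothetical: GIRO's actual internal weighting is not published, and the parameter names and linear form are assumptions made only to illustrate the described behaviour.

```python
def knowledge_weight(cases_evaluated, param1=0.5, param2=0.01):
    """Hypothetical illustration only: param1 sets the base influence of the
    acquired knowledge on the selection of new cases, param2 the rate at
    which that influence grows as more cases are evaluated (capped at
    full weight)."""
    return min(1.0, param1 + param2 * cases_evaluated)
```

Early in the search the weight is low (more random exploration); as more cases are evaluated the weight grows, so more of the acquired knowledge is applied.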
It is recommended not to change the default settings unless a full analysis of the response of the
system is carried out. Due to the random component of this type of optimisation, the
performance of GIRO for any particular problem can only be assessed by looking at the results
statistically (i.e. results obtained when solving the same problem hundreds of times). This
means that unless the system allows such analysis, the default settings should be used.
Some Definitions:
Control Variable: A control variable is the reference element which can take different states.
For example, in a well routing problem, the control variables are the wells (or the
downstream joint associated with each well). The number of control variables defines the
length of the vector which will be used to represent every possible solution of the problem.
States of a Control Variable: Each control variable is associated with different states. The
state is used to identify the current option (e.g. routing). For example, if a well can be routed
to three different manifolds, the states of the control variable ‘Well’ are ‘to manifold 1’,
‘to manifold 2’ and ‘to manifold 3’.
State Variables: These are the variables required to capture the state of a control variable in
a model. These are the variables which RESOLVE needs to control in order to set up the
model so that each potential solution can be evaluated. In the case of a well routing
problem, the state variables would be the pipeline mask commands which need to be
controlled by RESOLVE to change the routing in a model.
Values of a State Variable: These are the values associated with a state variable. In the case
of a well routing problem, the state variables can take the values 0 and 1: 0 indicates that a
pipeline is unmasked, while 1 indicates that a pipeline is masked.
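The mapping from a state of a control variable to its state-variable values can be sketched as follows, using the 0/1 masking convention described above (function and argument names are illustrative):

```python
def state_to_mask(n_manifolds, chosen_manifold):
    """For one well (control variable), return the mask values of its
    pipelines: 0 = unmasked (the chosen routing), 1 = masked."""
    return [0 if m == chosen_manifold else 1
            for m in range(1, n_manifolds + 1)]

# Routing a well to manifold 2 out of three manifolds unmasks the pipeline
# to manifold 2 and masks the pipelines to manifolds 1 and 3.
```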
For more information on how to set up a routing optimisation problem, please refer to Example
7.4.1, Example 7.4.2 and Example 7.4.3.
This menu is only available if the system has been specified to run with optimisation from the
Options | System Options screen.
For more information on the RESOLVE optimisation function, go to the "RESOLVE optimisation
overview" section.
Controls, constraints, and objective functions can be selected from each of the client modules
in the RESOLVE system.
When the Optimisation | Setup section is invoked, the following screen is produced:
For each client module in the system a tab is displayed at the top of this screen (i.e. in this case,
there are two GAP models, one production model and one water injection model).
To set up a control variable, select the tab at the top of the screen for the client module in
question. Click on the Edit button next to the "Controls" panel.
This will call up a screen, specific to the application in question, that allows the user to select
the control variables.
The following snapshot gives an example of what this screen looks like for GAP.
The user guide section for each application driver will provide additional information related to
these screens for other applications.
Objective Function: The objective function can be set from the top section of the screen
displayed above. The procedure to do so is the following:
1. Select the piece of equipment to which the objective function is to be applied
using the drop-down box given
There is no limit to the number of controls, constraints, or objective functions that can be
set up from here (although clearly the performance of the optimiser will be affected).
There is also no limit to the number of different models that the variables can be taken
from.
The panels will be displayed in grey or black depending on whether the variable (or
equation) has been set.
RESOLVE allows a two-level hierarchy of optimisation. In other words, one optimiser can run
within the iterations of another optimiser.
There is no restriction on which optimisers are used, and the addition of the second level of
iterations is entirely optional. However, typically the user may need to meet constraints within a
process model while evaluating routing options using GIRO. In this case, the standard SLP
optimiser could be used as a second level optimiser which is fired at every evaluation of GIRO.
By default, the second level of optimisation is not enabled (as in the screenshot above), so
that old optimisation models will run unchanged. Clicking on the 'Top level' or 'Second level'
buttons will bring up the optimisation summary screen for that optimiser, which enables the
selection of the optimiser type and the associated parameters.
When the model is run, the 'second level' optimiser, if enabled, will run at each iteration of the
'top level' optimiser.
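The call pattern of the two-level hierarchy can be illustrated schematically. This is a toy trace of the firing order only, not actual optimiser logic:

```python
def nested_call_pattern(top_iterations, second_level_iterations=None):
    """Each iteration of the top-level optimiser may trigger a complete run
    of the second-level optimiser (when one is enabled); the returned trace
    shows the order in which the two levels fire."""
    trace = []
    for i in range(top_iterations):
        trace.append(("top", i))
        if second_level_iterations is not None:
            trace.extend(("second", i, j)
                         for j in range(second_level_iterations))
    return trace
```

With the second level disabled (the default), only the top-level entries appear in the trace.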
The control variables and constraints are set for all optimisers in the optimisation setup screen.
It is up to the user to select, from the individual summary screens, which controls and/or
constraints are to be controlled or respected by the individual optimiser.
For example, the problem may involve multiple routing options which are to be controlled by
GIRO, with a process constraint which is to be met by an adjustment of the separator pressure.
The top level optimiser would be GIRO, and the summary screen may have the following
appearance:
The selected optimiser is 'GIRO', and the control variables selected are the integer routing
variables. The continuous control (the separator pressure) and the constraint are not selected.
Input fields
Optimisation modes of underlying applications: At the bottom of the screen are some global
settings which relate to the behaviour of the overall optimisation.
Retain optimisation of underlying applications...: When selected, this setting forces the
optimisation of the individual applications to fire at each iteration of the RESOLVE
optimiser(s). For example, it is possible to allow the powerful GAP NLP optimiser to run
at each iteration of the RESOLVE SLP. This opens up very powerful possibilities, but
needs to be used with some care.
In the first pass through the optimiser, GAP optimises the wellhead choke settings to
maximise production. Unless the above 'retain optimisation...' setting is on, RESOLVE
then freezes the calculated choke settings and varies the separator pressure until the
constraint is met.
Once the constraint is met the original choke settings may not be valid
at the new separator pressure, in the sense that the system could
potentially produce more if re-optimised against the new separator
pressure.
The above setting and tolerance therefore allow the user to force the system to
re-optimise from scratch if the control variable(s) change by more than the given
tolerance. In the above example, the first RESOLVE optimisation will be followed by a
new GAP optimisation starting from the final separator pressure calculated by
RESOLVE.
When not in forecast, use application optimisations to provide starting point: In a standalone
(non-forecast) run, the starting point of the optimisation can be given by an optimisation of
the underlying applications (if possible - not all applications support optimisation, and some
need to be set up specifically to perform optimisation).
Starting conditions of each optimisation when underlying applications are not optimising:
This option allows the user to specify whether the system setup obtained after optimisation
is to be kept as a starting point for the next optimised run, or if all the values of the control
variables have to be reset to their values at the start of the run. This option is only enabled
for a forecast run. Whether the underlying applications are optimising or not is specified in
the main schedule.
See the RESOLVE optimisation section for more information on how this optimisation works.
Note that multiple optimisers can be set up which can be nested, and this screen displays the
settings for only one of these optimisers. The nesting of optimisers, and the invocation of this
screen, is explained in the section on multiple optimisation.
The top-left part of the screen shows the functions and controls that have been set up (for
example, from the "Optimiser setup" screen).
For each category, variables or equations can be selected or unselected by clicking the check
box next to the item to add / remove them temporarily from the optimiser.
Only a single objective function is allowed. To activate it, check the box next to the
objective function.
The control variables have optimisation parameters (such as perturbation size) that may or may
not be set up by the user.
These can be set up by clicking on the "Edit Control Variables" button.
To add a schedule entry click on the Add button. The schedules run
concurrently and the end date of each schedule is entered in the field
below the schedule list. If this is left blank then the schedule will run to the
end of the forecast.
The set of variables and constraints that are "active" for each schedule
are those which are selected in the list on the left.
It is also possible to disable the optimiser for a time during the forecast
by clicking on the Disable optimiser for this entry box. This setting
again remains active for the duration of a schedule entry
Optimisation These values should not normally be changed, with the exception of the
parameters "Optimiser" section, which enables the user to decide which optimiser to
use: RESOLVE SLP, GIRO (see the "GIRO" section for further
information) or a user-defined optimiser.
It is important to set suitable perturbations and initial bounds for each
control quantity, as the RESOLVE optimiser does not have a
physical understanding of the controls that it is changing. The RESOLVE optimiser
has to be guided, and this screen is a convenient place to do this. Certain quantities
(such as wellhead chokes in GAP) have suitable defaults supplied, but this is not the
case in general
Perturbation: This is the perturbation that should be applied to the control in order to
obtain its derivatives. RESOLVE attempts to find a suitable default for any control, but
this can be changed here.
Initial bound: For each iteration of the SLP optimiser the control variables are bounded
to control the optimiser. This gives the initial (i.e. first iteration) bound. Subsequent
iterations increase or decrease the bound depending on the "linear error".
Minimum perturbation: The perturbation applied to obtain linear gradient information is
adapted depending on the current bound of the control variable in question. If the
bounds become small enough then the perturbation could, in theory, become small
enough to be influenced by, say, the tolerance of the application solvers in the
RESOLVE system. This field allows an absolute minimum to be set on the perturbation
that can be taken.
Centre perturbation: If this is set then RESOLVE will perturb this variable between
(v-(d/2)) and (v+(d/2)), where d is the perturbation (the default is between v and
v+d). This is generally preferred when trying to get the best possible gradient data, but
it is also slower as RESOLVE has to make two "movements" to perform the calculation.
It is, however, useful in systems where the functions are very sensitive to the controls
(especially close to the optimum). Such systems require good quality gradient data,
otherwise convergence can be very slow.
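The difference between the default and centred perturbation can be written out as a small finite-difference sketch, with v and d as defined above (illustrative only):

```python
def control_gradient(f, v, d, centred=False):
    """Finite-difference gradient of the system response f at control value v.
    Default: one extra evaluation, perturbing between v and v + d.
    Centred: two extra evaluations, perturbing between v - d/2 and v + d/2,
    giving better gradient quality at the cost of an extra 'movement'."""
    if centred:
        return (f(v + d / 2) - f(v - d / 2)) / d
    return (f(v + d) - f(v)) / d
```

For f(v) = v^2 at v = 1 with d = 0.1, the forward scheme gives 2.1 while the centred scheme gives 2.0, the exact derivative: the centred estimate is accurate to second order in d.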
Users can write their own optimisers to tackle RESOLVE optimisation challenges.
This is done by way of a plug-in DLL.
User DLLs A DLL (plug-in) template and documentation can be obtained from
Petroleum Experts which explains how to code the optimiser. The
template is in C++. It is possible to use other languages, but it is up to
the user to translate the DLL entry points into the appropriate structure
Once the DLL is registered, the optimiser name will appear as an option under the "Optimiser"
section of the optimisation "Summary" screen, from which it can be selected.
During the run, it is for instance possible to enable or disable a given optimisation layer, or to
enable or disable certain controls and constraints. Note that it is not possible to define a new
control at this point: all controls that may be used during the run should be defined initially in
the Setup section.
RunOptimisers.IsOptimiserIterating
This property indicates whether the optimiser is currently iterating, or if the workflow is solved as
part of the final system solve.
RunOptimisers.MultiOptResults
The final results of a multi-layer optimisation are available using this property.
.Constraints[n]
Returns the value of the n-th constrained variable
.Controls[n]
Returns the value of the n-th control
.IntControls[n]
Returns the value of the n-th integer control
.NumConstraints
Returns the number of active constraints
.NumControls
Returns the number of active controls
.NumIntControls
Returns the number of active integer controls
.ObjFn
Returns the value of the objective function
RunOptimisers.Optimiser1
Setup and results of the first layer optimisation.
.Enabled
Enables (1) or Disables (0) the first optimisation layer. Note that it is possible to disable
the top optimisation layer while enabling the second optimisation layer.
.Results
First layer results. This has the same structure as MultiOptResults above
.Setup
Controls the active and inactive controls and constraints
RunOptimisers.Optimiser2
Identical to Optimiser1 above, for the second optimisation level.
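As a sketch of how these properties fit together, the fragment below summarises a results object exposing the fields listed above. The object and its access syntax are assumptions; the exact form depends on the scripting environment used with RESOLVE, and only the property names themselves come from the list above.

```python
def report_results(opt):
    """Summarise a results object exposing ObjFn, NumControls/Controls and
    NumConstraints/Constraints, mirroring the properties listed above."""
    lines = [f"objective = {opt.ObjFn}"]
    for n in range(opt.NumControls):
        lines.append(f"control[{n}] = {opt.Controls[n]}")
    for n in range(opt.NumConstraints):
        lines.append(f"constraint[{n}] = {opt.Constraints[n]}")
    return lines
```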
2.10.4 Optimisation: Results
The optimisation results screen is displayed every time RESOLVE performs an optimisation. It
can also be invoked from the menu by clicking on Results | View optimisation results. It
contains information that can be useful for debugging optimisation runs.
The body of the screen consists of four tabbed sections and a 'Save Results' section, as
follows.
Ctrl (control) Results: This displays, for each iteration, the values that the control
variables have been set to.
Fn (function) Results: This is displayed in the above screen capture. It contains a list of
the results for the objective function and the constraint equations.
In the above example, the first row represents the value of the objective function
obtained by RESOLVE following a pass through the system (i.e. at each iteration of the
optimiser). The second row contains the value predicted for this equation by the
optimiser. The third and fourth rows contain the error term calculated from the first two
rows. This term is used to calculate new bounds for the control variables at the next
iteration.
The value in green represents the current optimum found by the optimiser. A value in
red (for constraint equations) represents a constraint violation.
Each of the above tabbed screens allows the saving of a system state. The
control and function results screens allow the state of the system at a
particular optimiser evaluation to be saved. The overall results screen
allows the optimum system state to be saved. This state can subsequently
be re-applied to the system, either from the interface or from the event
driven scheduling.
Save results stream: In the same way that forecast results can be saved to a separate
screen, optimisation results can be too.
A screen is displayed prompting the user to select a label for the new results screen.
The next time the optimisation results menu item is invoked, a screen similar to the
following will be displayed:
The left hand side contains a list of the currently saved streams. These
streams can be removed here (except for the current results).
The right hand side contains a summary of the optimisation run that was
performed.
To display optimisation results, highlight the required results in the left hand
list and click on the display button.
When several layers of optimisation are used, such as GIRO+SLP, additional results tabs are
created:
The first 'Optimisation' tab corresponds to the top level optimiser. The best result in this tab
corresponds to the overall optimum.
The second 'Optimisation' tab corresponds to the second level optimiser. The best result in
this tab corresponds to the best result of the second level optimisation for the last iteration of
the top level optimiser. As such, this result (and the associated control values) does not
correspond to the overall optimum.
The 'Multi-optimisation' tab: this tab contains the overall optimum and all associated values of
controls and constraints for the two layers of optimisation.
In the diagram below, a job is submitted to the cluster from a submission node. The job is run on
the processor which has the lowest load. If there are no available processors, the job is held in a
queue until a processor becomes available.
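The scheduling rule just described can be sketched as follows. The node names and the capacity model are illustrative only, not PXCluster internals:

```python
def dispatch(jobs, node_load, capacity):
    """Place each job on the node with the lowest current load; when every
    node is at capacity the job is queued until a slot frees up."""
    placed, queue = [], []
    for job in jobs:
        # Nodes that still have spare capacity
        free = [n for n in node_load if node_load[n] < capacity[n]]
        if free:
            node = min(free, key=node_load.get)  # lowest-load node wins
            node_load[node] += 1
            placed.append((job, node))
        else:
            queue.append(job)
    return placed, queue
```

With two nodes of capacity two each, five jobs spread evenly over the nodes and the fifth waits in the queue.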
Two types of clusters can be considered: "Windows" clusters, where all the cluster nodes are
running Windows operating systems, and "Mixed" clusters, where some of the cluster nodes are
running a different operating system such as Linux for instance.
Clustering software
RESOLVE includes clustering software, PXCluster, which is developed and distributed by
Petroleum Experts. PXCluster is Windows clustering software. The case where the RESOLVE
models to be run on the cluster include an element (such as a reservoir model) which is to be
run on a remote Linux cluster is also handled: each RESOLVE job created by PXCluster will
submit the Linux job(s) to the Linux cluster, as per the settings defined in the RESOLVE model
itself.
When multiple scenarios (i.e. "jobs") are submitted to the cluster, the following sequence of
events occurs:
Copies of all files that are required to run each client module are created. Each driver
is responsible for creating the copies that will enable the model to run on the cluster.
For instance, the GAP driver will make copies of the *.gap file, the Eclipse driver will
make copies of the *.data files. The files need to be located on a shared drive
accessible to all cluster nodes.
The jobs are then submitted in a single batch to the PXCluster queue. The jobs are run
on the nodes as ‘child’ RESOLVE models.
As each RESOLVE job finishes, the results are read by the master RESOLVE model.
If any more jobs are pending in the queue, these are executed as the previous jobs
complete.
2.11.1 Architecture
The number of nodes in the cluster and the location of the master node can both be changed
"Dynamically". The shared directory that contains the "Configuration file" should normally be
located on the master node, but this is not a requirement.
In addition, there are two types of node. Both types of node run the same service
(pxcluster.exe).
Client nodes: These nodes submit jobs, but do not run calculations.
Computation nodes: These nodes can both submit jobs and perform calculations.
Normally, there will be a pool of nodes which are used for calculations which can be accessed
from a (potentially) large number of client desktops. Client nodes and computation nodes can
be set up by the "Installer" or by hand from the "Configuration file".
The cluster is set up and tested using the PXCluster Console. The cluster console can be
invoked from the RESOLVE main menu (Wizards | Run PXCluster console) or from the
Windows Start menu (Petroleum Experts IPM xx | Install PxCluster).
The 'Cluster nodes' section lists the nodes in the cluster, their operating systems, their status,
and how many jobs (i.e. submitted through PxCluster) are being run.
The 'Statistics' section is a list of the jobs that have been submitted from this node; jobs
submitted from other nodes are not included. The job list indicates the status of each job, where
the job is/was running, and the start and end time of the job (as applicable).
This utility can be used to test the cluster. The "Standard test job" button will submit a job
called "pxsleep" (this executable is included in the distribution). The job simply "sleeps" for
40 seconds before exiting. All jobs submitted using this button should appear in the job list and
be reflected in the node status list. If more jobs are submitted than available nodes then the
outstanding jobs will be put into a "pending" state and submitted as running jobs complete.
The "Launch OS configuration jobs" button copies the application executables path (from the
driver configurations) from the local machine to all the nodes of the cluster.
PxJobs.exe is a command line job-query tool that interfaces with PxCluster’s API and returns
the status of a job when provided with a job ID. The job status is as follows:
There are two ways to input the job ID to the PxJobs.exe tool. The first is to provide the job ID
directly to the command line, such as:
Here we have shown the output of both the default and verbose modes of PxJobs.exe for an
Eclipse job running on the PxCluster.
The second input option accounts for situations where it is not convenient for the user to
continually supply job IDs to the tool. In this case, the user can provide a fully qualified path to a
PxSub.log file, if the job has been submitted to the cluster using the PxSub.exe command line
tool (please see the section on PxSub.exe for more details on how to do this). If this is the case,
then PxJobs.exe will read the job ID from the PxSub.log file.
Again, the verbose mode can be switched on or off using the command line flag. This second
mode is convenient for automated checking of the job status in custom scripts.
This section deals with the creation of a network cluster, and also how that cluster is maintained,
and is organised as follows:
Installation pre-requisites
Setup utility
Adding and removal of cluster nodes
Manually editing the configuration file
Dealing with new builds of IPM
Before creating a cluster with PxCluster, the following tasks should be performed.
A master node should be designated: This should normally be a machine which is not
removed from the network and is always on whenever the cluster is to be used. The
designation of the master node can be changed later, as required.
Select an initial set of subordinate nodes: Again, this set can be adjusted later.
Create a shared directory with share permissions set to allow full access to everyone:
This shared directory will eventually contain the configuration file; clearly no critical
data should be kept in this directory. The share should normally be physically located
on the master node, but this is not a requirement.
The shared directory should also have full read/write access to everyone: Full rights of
read and write access should be available for the directory.
Before proceeding, the IPM software should be installed and licensed on each node of the
cluster. This could be set up in a shared directory to allow all the nodes to see the software (i.e.
without the need for multiple installations), but there may be performance issues with this. It is
preferable to have a separate installation for each node.
IMPORTANT:
Note also that the installations should be symmetrical, that is each node should "see" the same
path to the IPM executables. It is important to bear this in mind when installing the software on
64-bit machines, for example, where the default installation directory (Program Files (x86)) is
different to that for 32-bit machines.
Once the "Pre-requisites" have been carried out, the cluster can be created or edited using the
PxCluster setup utility.
The PxCluster setup utility can be run from the Start menu (Petroleum Experts IPM xxx |
Install PxCluster) or found in the distribution (pxclusterinstall.exe).
Use the Configure button (1) in the top right corner to open the configure system
popup. The system will alert the user that, if changes are made, the cluster
services may need to be re-started (or re-installed).
The user needs to set the main cluster reference directory. This needs to be in
UNC and allow all cluster machines to have full access to it.
As described above, the location of the shared cluster drive (i.e. described in
the "Pre-requisites" section) should be entered into the edit field at the top of
the screen. The "browse" button can be used to locate the drive in the network
neighbourhood. The drive given here must be visible to all the nodes of the
cluster; this means that it will invariably require a UNC path or some common
drive mapping to represent the drive.
The "Test" button can be used to check that the configuration file (i.e. once it
has been set up) can be read.
Click "Next" to proceed to the next screen. If the environment variable has
been changed, a reboot will be required at this point (i.e. which the utility will
perform automatically). After the reboot, restart the utility and proceed to the
next step. Clearly, if the environment variable has already been set up the utility
will not prompt for a reboot. Press next to configure the individual machines
within the cluster.
In this screen, nodes are added to the cluster. A node must be defined as
being either a computation node or a client node (a node that will submit jobs to
the cluster).
If the cluster is to be built or edited, then the "build pxcluster.cfg" check box
should be clicked. This will enable the rest of the screen.
Add nodes to the cluster by entering the computer name, and clicking the right
arrow buttons. The 'Check' button can be used to verify that the entered name is
valid.
Remove nodes from the cluster by highlighting elements on the right hand list
and clicking the left arrow button.
A master node should also be selected from among the computation nodes.
Highlight the required node in the right hand list and click the right arrow.
After the nodes have been set up (if this is required) click the "Next" button to
proceed to the next screen. It is then possible to manually edit the configuration
file. The specified nodes should have been added to the file.
The PxCluster service must be installed and then started. The buttons on this
screen can be used to do this.
The service is installed as an "automatic" service. This means that the service
will be started automatically when the system is started.
Note that the service will not start if the PXCLUSTER_ENV environment
variable is not set, or if the service can not see the pxcluster.cfg configuration
file (i.e. which should be located on the cluster shared directory).
Enter the account credentials and press the Install button to install the PX
Cluster service.
Run the service by clicking the 'Play' button to start the service.
If an error message appears when attempting to run the service, there may be a
logon issue. This can be fixed by ensuring that the PxCluster service has
the correct user name and password.
Right-click on the PxCluster service, open the Logon tab and ensure that the
password is set correctly.
It should now be possible to press the start service button and verify that the
PxCluster service is running correctly.
Once the configuration file has been created and held on the central machine, the
next step is to install the PxCluster service on the individual client machines.
Start the “Install PXCluster” application from the Petroleum Experts XX folder
Use the Configure button (1) in the top right corner to open the configure system
popup. The system will alert the user that, if changes are made, the cluster
services may need to be re-started (or re-installed).
This application is just being used to install and run the PxCluster service as the
shared directory has been created and the cluster configuration file has been
constructed.
Enter the appropriate user name and password into the service installation
section, then install and start the service. The user may need to ensure that the
service password is correctly set up, as mentioned in the previous section.
2.11.4.2.2 Troubleshooting
The PxCluster service fails to start: Information on why a service fails to start can be found
under the "Application" section of the Event Viewer, which is located under Control Panel |
Administrative Tools | Event Viewer.
The usual reasons why the service may fail to start are:
The PXCLUSTER_ENV environment variable has not been set up
The location pointed to by PXCLUSTER_ENV is not visible from the
node in question. Check that the location is valid, and also that the
share permissions on the directory in question are set to allow
everyone to use it. Remember that the PxCluster service runs under
the system account, and not the login account of the computer.
When IPM is uninstalled, it will automatically stop and uninstall the pxcluster service if it is
running. Clearly, this removes the node from the cluster.
When IPM is reinstalled, it creates the pxcluster.exe in the distribution but does not install the
service. This is a task that must be performed before the node can function in the cluster again.
The service can be installed and started from the "Setup utility" or "by hand".
HANDSHAKE_TIME 20
JOB_TIME 5
DEBUG 0
OLDJOB 300
Every so often, the services on the cluster nodes re-read this file. The file can therefore be
edited "in place", and the configuration changes will eventually feed into the cluster.
By default, the ports are different for each node, but this is not obligatory. These
ports should be opened through any firewall, although it is far preferable for
the firewall to be disabled altogether
CLIENT: This is the same as the "NODE" keyword, but is used to set up a client, rather
than a computation, node
HANDSHAKE_TIME: This keyword gives the time (in seconds) between handshakes between
the master and the subordinate nodes. The default time is normally fine, and
should normally not be made shorter than this
JOB_TIME: This is the time between checks made on the current job queue. In the case
above, running jobs (both local and on other nodes of the cluster) are checked
every 5 seconds. Again, this should not normally be made shorter than
the default
DEBUG: This can be set to "1" to enter debug mode. In this case, debug logs will be
written to the directory pointed to by PXCLUSTER_ENV. The log files are
written by the cluster service, and are labelled with the name of the node on
which the cluster service is running
OLDJOB: Jobs that have been completed are retained in the job list for a time given by
the value after this keyword (in seconds). Completed jobs are not processed in
any way, but are merely kept for reporting. After this time, they are removed
from the list
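Putting these keywords together, a pxcluster.cfg might look like the sketch below. The node names and the exact layout of the NODE and CLIENT lines are hypothetical illustrations (the real syntax, including any port fields, should be taken from a file generated by the setup utility); the four timing keywords carry the default values quoted above.

```
NODE    COMPUTE-01
NODE    COMPUTE-02
CLIENT  CLIENT-01
HANDSHAKE_TIME 20
JOB_TIME 5
DEBUG 0
OLDJOB 300
```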
An existing cluster can be edited by adding or removing nodes, or changing the location of the
master node.
Before starting, it is recommended that no cluster jobs are running. Indeed, this is essential if
the master node is to be changed.
The cluster configuration can be changed either from the "Setup utility", or by manually "Editing
the configuration file".
Using the setup utility: The setup utility should be started, and the "Next" button pressed until
the "nodes" screen is reached. Nodes can then be added or removed, and the master node
changed, in the usual manner
Editing the configuration file: This is described in the "Edit the Configuration File" section
It is possible to run scenarios in batch on a Windows cluster. It is also possible to run batch
scenarios when one or more of the applications running under RESOLVE are running under a
different operating system (e.g. Linux).
The only application link that does not (necessarily) run on Windows and is developed and
supported by Petroleum Experts is Eclipse when run on Red Hat Linux. This section is
therefore devoted specifically to this link, although other application drivers that are developed
by third parties could also support a similar functionality. The developers of these links should
be contacted for more information.
To use Eclipse in batch scenarios the IntelMPI method of connecting RESOLVE to Eclipse must
be used, which is documented extensively in the "linux_executables_for_resolve_eclipse"
subdirectory of the installation directory.
In the first case, the "run factory" creates an Eclipse controller and Eclipse itself. The controller
is always on the same computer as the run factory, but Eclipse can be distributed onto a
different node of the cluster.
In the second case, the "run factory" submits a job to LSF to spawn a "run factory client" on a
(potentially) remote node. This client program then spawns the controller and Eclipse on this
same node.
To use Eclipse in a batch scenario run, the second form of the connection must be
used.
Architecture
As described in the "Setting up a cluster" section, each node of the Windows cluster is running
a copy of the RESOLVE model. Each copy is pointing in turn to an instance of Eclipse on the
remote computer. If the remote computer is running the LSF run factory daemon
(mpirunfactory_lsf.exe) then, as each instance of Eclipse is required, the daemon will spawn the
new instance of Eclipse on an unknown node of the Linux cluster.
When a cluster job is submitted to a calculation node, it runs a copy of the RESOLVE model on
that node. The IPM programs (GAP, REVEAL, etc) may run as part of the model on the
calculation node, i.e:
These applications are designed to work with a user-interface. Depending on the type of cluster
used, the IPM applications will be run in interactive mode (application interface is visible) or
non-interactive mode (application interface is invisible):
If a local cluster is used (calculations distributed on the user's local PC cores), the
applications are run in interactive mode
If a network cluster is used, pxcluster is run as a service and the applications are run in non-
interactive mode
On a network cluster, in non-interactive mode, the nodes will probably need an adjustment of
operating system parameters to allocate the resources to a non-interactive desktop. The
following describes how this is done.
If no changes are made, then multiple jobs running on a single node, or single jobs with large
resource requirements, may fail. The cause of this problem is known to relate to the Windows
desktop heap setting.
On Microsoft Windows operating systems, the desktop heap is reserved for interactive and
non-interactive window stations.
An interactive window station corresponds to the graphical user interface that a user is running
on the desktop.
This is for all Windows-based applications that are running on a Windows system. The desktop
heap is used for all objects (windows, icons, menus, etc....).
It is necessary to adjust the heap allocation for the non-interactive desktop. This should be
carried out with some care: setting the value too high can cause 'Out of Memory' errors to occur
when too many applications are running on the system.
Note: This solution contains information about modifying the system registry. Before making
any modifications to the Microsoft® Registry Editor, it is strongly recommended that you
make a backup of the existing registry.
Navigate to the following registry key:
HKeyLocalMachine\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems
Edit the section of the string that may read 'SharedSection=1024,3072,512' to the
following:
SharedSection=1024,3072,2048
Note: The third value can be larger than 2048, depending on how many applications are
required to run on the server. There is no exact formula to set the correct value, a suitable value
should be sought by trial and error. However, 2048 should be a good starting point and should
be suitable for most systems.
Once the <local> and <remote> names have been identified for the network location, simply
paste this into the configuration file in the PxCluster shared location, prefixing the line with
NETWORK, as follows:
If the cluster should access more than one shared drive, simply append the configuration file
with more network locations following the above prescription.
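As an illustration (the drive letter and server/share names below are hypothetical, and the exact field layout should be checked against a working configuration file), the NETWORK lines might look like:

```
NETWORK  Z:  \\fileserver\models
NETWORK  Y:  \\fileserver\results
```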
Upon initialization, the cluster service will read the configuration file and map to these drives
using the local drive key. The connection persists for the lifetime of the cluster kernel and will
terminate when the cluster service is stopped.
Periodically, the PxCluster service on each node will re-read the configuration file and update
its mapped drives. To remove a drive mapping, simply remove the relevant line in the
configuration file and allow the cluster time to read it and remove the mapping. Using this
interface, drive mappings can be dynamically added or removed from the cluster during its
runtime.
Once a mapping has been successfully completed, the cluster can now read paths to models
placed in shared drives using the local drive key.
To find the currently mapped drives the cluster has access to, we can use the net use command
directly with PxSub, the PxCluster’s command line interface. Simply open a command shell as
administrator, change directory to the location of PxSub.exe (C:\program files\Petroleum
Experts\IPM 12) and pass the following command line to the cluster
including surrounding double quotes. (The redirection of standard output can be given a path of
your choosing, perhaps to the cluster shared location.) An example of this would be the following
command line:
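One sketch of such a command line, using the -c (command) and -d (job name) flags described in the PxSub.exe section, is shown below; the share name and output path are hypothetical and should be replaced with a location visible to the cluster.

```
PxSub.exe -c "net use > \\master\pxshare\netuse.txt 2>&1" -d "NetUse"
```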
The standalone cluster is started by clicking the following button from the console.
When this is done, the user's computer name will appear as one of the 'Cluster nodes', as
shown below. The number of available CPUs is displayed and the local cluster is now ready to
be used.
Data directory: Specify a directory that will be used to store temporary files. All nodes of the
cluster should have read/write access to this folder. For a network cluster, this should be
entered in UNC format.
Max number of simultaneous jobs: The maximum number of simultaneous jobs. Leave blank
or enter 0 for no limit.
Priority: Select the job priority for this batch. Jobs in a queue will be executed according to
their priority.
Write debug logs: Debug log files will be written to the specified data directory
Save run files: Creates a RESOLVE archive for each scenario run. These will be saved in
the specified data directory.
These logs can then be viewed using the PxCluster job logging monitor. This is accessible from
RESOLVE via Wizards | Run PxCluster job logging monitor.
This section describes how to insert log messages in the Case Manager workflow, and how to
view the logs in the monitor.
2.11.8.1 Adding a log message
Log messages can be inserted at any point in the controlling workflow of the Case Manager. In
any workflow element, the 'Add Log' button allows a pre-execution message and a
post-execution message to be specified. This is shown below for an assignment element, but is
available for all workflow elements.
It is also possible to output the value of a variable (at that point in the workflow).
Before running the cases, the 'Enable job logging' checkbox under 'Cluster options' must be
ticked.
NB: The screenshot below is from the Case Manager. The same 'Cluster options' are available
for the Sensitivity Tool, Crystal Ball, @Risk and Particle Swarm data objects in the 'Run &
Results' tab.
The monitor obtains from the pxcluster service the folder in which the logs are stored (locally if
a local cluster is used, or remotely in a network cluster).
The logs are listed and can be viewed by double clicking on them. The monitor is continuously
monitoring the log folder, and new log files will be displayed down the list as they are created.
The log files are available to view as soon as they have been created, even as the case is
running. To refresh the log file, double-click on it in the list or switch tabs.
The case name is displayed along with date/time information. When running the Sensitivity Tool,
Crystal Ball, @Risk or Particle Swarm, the case name itself includes the values of the input
variables.
The log file also contains the user defined messages in the form of 'Pre-execution log of' or
'Post-execution log of'.
Font: Controls the font of the log message in the display window
Manage: Allows the log files from previous runs to be managed/deleted. This is only active
when the monitor is not monitoring the log files folder ('Stop Monitoring')
Reset: Clears the log entries from the list. This does not delete the actual log files, which are
still available from the dropdown box.
Stop Monitoring: Stops the monitoring of the log folder (i.e. new log files will not be added to
the list). This is required in order to be able to manage the log files.
Clicking 'Manage' opens the following screen, which allows log files to be managed and
deleted, as well as the PxCluster console to be run.
PxSub.exe is a console application that submits jobs to the PxCluster service by interfacing its
native API. Its usage is displayed when it receives no arguments:
This demonstration will use the PxSleep.exe test job (which sleeps a node for a user-specified
time), located in the IPM 12 distribution, to showcase the usage of PxSub.exe on a PxCluster.
To begin with, we will enter the command line for the test job, using the –c flag, and make it
sleep for 60 seconds. We will name the job “PxSleep” using the –d flag and allow it to be
carried out on any node in the cluster.
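Such a command line might read as follows; the -c and -d flags are as described above, while the argument form PxSleep.exe takes for the sleep time is an assumption.

```
PxSub.exe -c "PxSleep.exe 60" -d "PxSleep"
```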
We can see the job has been carried out on node <EDI-ENG-NL2> and defaulted to one
processor. We can also see that the command line prints out job submission data.
We will now require that the job uses 5 processors, set with the –n flag, and runs specifically on
<EDI-ENG-NL2> node (this can be any node within the network), which we set with the –m flag.
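A submission using the -n and -m flags described here might read as follows (the PxSleep argument and the overall layout are assumptions):

```
PxSub.exe -c "PxSleep.exe 60" -d "PxSleep" -n 5 -m EDI-ENG-NL2
```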
We can see that the job has run on the specified node and consumed 5 processors.
We will now require the job to be run under a specific user account. This is a useful security
feature when a job must be submitted on behalf of a specific user, or when data should only be
accessed by an account with admin rights, for example. This feature therefore protects the
integrity of access-controlled content.
To set this mode up, place a file called “pwd.txt” in the shared directory of the PxCluster. This
file should contain the username and the password of the user to impersonate. The cluster will
then read this file and submit the job using that account, regardless of the account we are
actually operating in. For instance,
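The layout of pwd.txt is not documented here; one plausible sketch is the username on the first line and the password on the second (check this against your installation before relying on it):

```
DOMAIN\clusteruser
S3cretPassw0rd
```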
Delving into the PxCluster log files (located in the cluster shared directory, with debug mode set
to 10 in the configuration file) we can see that the job was successfully submitted under a
different account.
In a final sleep test job, we prioritize the job, allow it to be carried out on more than one node,
and make it run exclusively (meaning that the run node does not have any other jobs running at
the same time). This last feature prevents memory intensive jobs being delayed by or delaying
other jobs.
The job submission data that is printed to the command line can be redirected and stored in a
file. To do this, append >C:\path\to\PxSub.log 2>&1 to the command in an administrator
command prompt; this redirects the standard output and standard error to a PxSub.log file. For
instance
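For instance, a sleep-test submission with redirected output might read as follows (the -c and -d flags are as described in this section; the PxSleep argument and the log path are placeholders):

```
PxSub.exe -c "PxSleep.exe 60" -d "PxSleep" >C:\path\to\PxSub.log 2>&1
```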
This is a useful feature when combined with PxJobs.exe which can read these log files and
report the job status to the user.
2.12.1 File
The "File" section of the main menu performs the file management functions within RESOLVE.
New: The current system is cleared and all connected clients are closed. A new blank file is
created.
Open: A RESOLVE case can be loaded from disk. This will in turn load the client applications
that this case refers to - there is no need to have the client applications already open. Access
to enough active licences will be required for each application used in the model to be opened.
Save: Saves the current RESOLVE model with a .RSL extension. The connected application
references will be stored in the file, to be reopened when the case is reopened. This only saves
the RESOLVE file. It does NOT save the files from the applications connected through
RESOLVE. For instance, if a RESOLVE model is used to link an Eclipse deck and a GAP deck,
using this option will only save the RESOLVE file, not the GAP or the Eclipse decks.
Save As: As above, except that it enables the user to specify a new file name and/or file
location.
Broadcast Save: This broadcasts a "Save Case" message to all the connected applications.
For instance, if a RESOLVE model is used to link an Eclipse deck and a GAP deck, using this
Broadcast Save option will automatically save the GAP file and any other program with an
interface (such as UniSim or Hysys) under their respective names and locations. In general,
this does not apply to reservoir simulators, which are generally run in batch mode.
Close: Closes the current RESOLVE model. The connected application references will be
closed as well.
C:\ecl\home directory.
This file will need to be deleted before any further successful Eclipse
connections through RESOLVE can be established.
Please note that using the PVM protocol to establish a connection
between Eclipse and RESOLVE is not the procedure recommended by
Petroleum Experts, as PVM is not supported anymore. The
recommended protocol to establish a connection between RESOLVE
and Eclipse is the MPI protocol. Further information regarding these
protocols can be found in the "Eclipse Driver Configuration" section
Open (results only): This will open a case without opening the connected clients: the
connections will be displayed and the results of the run can be accessed, but a new run can
not be performed as the clients will not be active. This is useful as the opening of client
applications can be time consuming in cases where only the results are required.
There are two options:
Allow client programs to reload if model changes: If the model associated with one program
instance in RESOLVE is modified (e.g. change a GAP model from case1.gap to case2.gap)
then RESOLVE will cause GAP to load case2.gap. It will then redisplay new icons on the
screen for the new model as required
Never reload client programs: In this case changes to the client program models will not be
reloaded by RESOLVE. If the model is modified and the file is saved, then the client program
models will potentially be inconsistent with the icons displayed on the RESOLVE screen. When
the file is reloaded entirely, warnings may be generated and connections lost. Care should be
used with this option
For instance, the archive file will contain the GAP, MBAL and PROSPER files when a GAP
model is used in the RESOLVE model.
When double-clicking on the .rsa file, all the files contained in the .rsa file will be extracted to the
current directory.
This option is extremely useful to back-up files or to send files through ftp or email.
The procedures to create a new archive and extract an existing archive are detailed below:
Archive creation: When a model is loaded into RESOLVE, File | Archive | Create can be
invoked.
The main list displays all the files in the model, arranged by module, with the
main RESOLVE file at the top.
The exception to this is if a model is being run remotely such that RESOLVE
cannot "see" the files in question (i.e. for instance an Eclipse model running on
a Linux machine). In this case, the files can not be added to the archive.
Additional files can be added to the archive (e.g. Word documents, Excel
spreadsheets) by clicking on Add baggage file. This invokes a multiple
selection file dialogue - select the file or files that are to be added to the archive.
Files that could not be archived because they were stored remotely will be
displayed with a cross in the list of files contained in the archive.
Once the archive has been extracted, the option exists to immediately open the
RESOLVE file in question. By default, RESOLVE will open all the client
application files from the directory to which they were extracted
2.12.1.3 File Preferences
The "File Preferences" section in RESOLVE allows the user to set system-wide preferences:
these preferences will be respected for every file that is setup and used in RESOLVE.
These file preferences contrast with model preferences, which are only set up for one specific
model.
The following screen appears when selecting the File | Preferences section.
Data directory: This is the directory in which RESOLVE will open the file browser when performing
file operations.
Autosave options: This option allows backup files to be made during RESOLVE runs in case
of a failure in the simulation (e.g. due to a network or license failure). When the model is next
loaded, a screen will appear asking whether the backup model or the original model should be
loaded. It is also possible to change the frequency of generation of the backup files.
Write thread debug file: This option generates a debug file including a detailed description of
the actions performed by RESOLVE. This can be useful to troubleshoot a problematic case, in
which case the log file should be sent to Petroleum Experts.
2.12.2 Drivers
The "Drivers" section of the main menu is used to set up the connections between RESOLVE
and the connected applications.
Two main options are available in this section of the main menu.
Register drivers: This invokes a screen that allows the user to register, unregister and obtain a
summary of the driver properties for specific applications. See the "Driver Registration" section
for more information regarding how to register the drivers.
Auto-Register Latest Drivers: This option registers the latest set of drivers that are supplied by
Petroleum Experts. User-developed drivers will not be registered with this function. This is
equivalent to going to the "Driver Registration" screen and clicking on "Auto-register".
Drivers need to be registered every time a new major version (not build) of RESOLVE is
installed, so that the most recent drivers for each application can be loaded. The first time
RESOLVE is opened following the installation, the user will automatically be prompted to
register the drivers.
Once RESOLVE has been installed and the drivers registered, there will be no further
need to re-register the drivers unless a new version of RESOLVE is installed
In order for RESOLVE to use a driver it must be registered with the RESOLVE system.
A new installation of RESOLVE will automatically detect and load new drivers (or more recent
drivers than the currently registered set); however, there are some circumstances in which one
may wish to register the drivers manually, and the procedure to do so is described below.
App. Module: This is the name of the driver as it will appear in RESOLVE when an instance is
created. It is generally set up to be the name of the application considered.
DLL Path: This is the path to the dynamic link library that implements the link between the
application considered and RESOLVE.
Application Type: This is an identifier given to RESOLVE, supplied by the driver, that
determines the nature of the application that is being linked. It is used by RESOLVE to guess
the "upstream" and "downstream" components of a system. This guess can be overridden by
the user before the system is executed, so this identifier is not crucial to the running of
RESOLVE.
Command buttons
The following functions can be performed through this screen:
Register: This can be used to register a new driver. When this is pressed, a screen will be
presented allowing the user to browse for the required driver. Once selected, RESOLVE will
check that the file has the correct format for a RESOLVE driver, and will then add it to the
above list.
Unregister: This function unregisters a driver that has been registered previously. A driver
needs to be selected in the driver list for this function to work.
Configure: This button invokes a screen that allows application-specific settings to be set up.
For example, the configuration screens for the GAP and REVEAL drivers are very similar, and
include:
The path to the local executable of the application considered
The application startup timeout, which is the length of time RESOLVE will wait for the
application to start before raising an error. This has to be defined for both normal and
cluster startups (for instance, when several machines, organised as a cluster, are used to
run different RESOLVE scenarios).
A driver needs to be selected in the driver list for the function to work.
Properties: This option invokes the screen below. It displays a summary of the driver
properties and supported features. A driver needs to be selected in the driver list for the
function to work.
The supported features can be described as follows:
Auto-Register: This option automatically registers the standard set of drivers included within
RESOLVE.
In order for RESOLVE to use a data object it must be registered with the RESOLVE system.
A new installation of RESOLVE will automatically detect and load new data objects; however,
there are some circumstances in which one may wish to register a data object manually, and the
procedure to do so is described below.
To register a RESOLVE data object, go to Drivers | Register data object or library on the
main screen.
Object: This is the name of the data object as it will appear in RESOLVE when an instance is
created.
DLL Path: This is the path to the dynamic link library that implements the link between the
data object considered and RESOLVE.
Version: The driver version number.
Command buttons
The following functions can be performed through this screen:
Register This can be used to register a new data object. When this is pressed, a
screen will be presented allowing the user to browse for the required DLL file.
Once selected, RESOLVE will check that the file has the correct format
for a RESOLVE data object, and will then add it to the above list. User
data objects will be added to the field called 'User'.
Unregister This function unregisters a data object that has been registered
previously. A data object needs to be selected in the list for this
function to work.
Write variable list for IFM variable tracking Writes a list of all data object
input variables. This file can then be imported into Model Catalogue to
define the variables which will be tracked.
From IPM 9, it is possible to register user-created workflows such that they can be packaged
with the IPM installation and distributed. Petroleum Experts also provides some pre-packaged
workflows with the installation, and a description of their objectives can be found by clicking on
the 'Properties' button below.
A new installation of RESOLVE will automatically detect and load pre-packaged workflows.
However it is also possible to register user-developed workflows manually, and the procedure
to do so is described below.
To register workflows, go to Drivers | Register Visual Workflow on the main screen. The
following screen will be displayed:
Register This can be used to register a new workflow. When this is pressed, a screen
will be presented allowing the user to browse for the required workflow. Once
selected, RESOLVE will check that the file has the correct format for a
RESOLVE workflow (.vwk), and will then add it to the above list.
Unregister This function unregisters a workflow that has been registered
previously. A workflow needs to be selected in the list for this function to work.
Properties Displays the workflow properties that were entered when registering the
workflow.
When the 'Register' button is invoked for the first time, a prompt appears to set up a new
directory where user workflows can be stored. This should be on a local drive where the user
has read/write access.
After setting the default directory, the workflow properties are entered in the registration screen.
The workflow can be selected from any location, and the file will be copied to the directory setup
previously. All fields are required before the workflow can be registered:
A pre-registered workflow can be loaded into RESOLVE from the toolbar option, by
clicking on the workflow and then clicking on the RESOLVE canvas to place it:
2.12.3 Wizards
IT setup wizards These wizards will assist the user in setting up RESOLVE to perform
IT-type tasks such as connecting to remote applications.
Engineering wizards These wizards will assist the RESOLVE user with specific
engineering tasks, such as setting up a voidage replacement scheme.
Perform GAP / Reveal Validation: This option allows the REVEAL schedule to be run, to
verify that the rates produced for each well can be handled by the surface network model.
This follows a similar approach to the "Perform GAP / Eclipse Validation" section.
Simulation -> Decline curve: This option allows a decline curve to be generated from the
simulation results of a model that has already been run. The decline curve can then be
used as a proxy at reservoir level for further analysis.
GIRO Optimiser Performance: Allows the user to evaluate the behaviour of the GIRO
optimiser for a particular problem, as well as to sensitise on the most suitable GIRO
optimiser settings. Refer to the "GIRO Optimiser Performance" section for further
information.
Execute OpenServer command: Allows the user to perform one specific OpenServer
command and check its validity. Refer to the "Execute OpenServer Statement" section for
further information.
This invokes a utility program distributed with the IPM suite that allows the configuration of local
and remote connections to Eclipse 100 and Eclipse 300 on the Windows platform.
More information on this utility can be found under the section dealing with the configuration of
the Eclipse driver to connect to Eclipse instances.
2.12.3.2.2 PXCluster Console
The PXCluster console is used to configure PxCluster, clustering software that has been
developed by Petroleum Experts exclusively for use with the Petroleum Experts IPM
software. It allows multiple nodes to be grouped together into a cluster, to which IPM jobs can
be submitted.
For more information on PxCluster, please refer to this section of the manual.
2.12.3.2.3 PXCluster Job Logging Monitor
When running the Case Manager or any of the objects making use of the Case Manager
(Sensitivity Tool, Crystal Ball, @Risk, Particle Swarm) using PxCluster, it is possible to create
logs of each case. This may be useful for debugging purposes, in the event that some cases
fail. Log messages can be inserted and triggered at any point in the controlling workflow of the
Case Manager.
The job logging monitor enables the user to view and manage the log files. For more information,
please refer to the relevant section of the manual.
The simulator injection wells that are to inject the required voidage volumes can be left
unconnected in the RESOLVE system. They can then be controlled by the simulator
alone.
Alternatively, the voidage can be injected from an injection system modelled in GAP. To do
this, the voidage requirement must first be calculated and then applied as a constraint on the
GAP injection model(s). This is normally carried out with a Visual Basic script. The
advantage of this approach is that the water injection system is physically
modelled: the user therefore has a way of checking that the injection rates
required to achieve the voidage replacement targets can physically be
achieved by the current injection system.
The objective of the voidage replacement wizard is to assist in setting up the second option.
The utility works by writing a Visual Basic script that controls the model, calculating the
voidage volumes required and applying appropriate constraints on the injection system. For this
to work, the reservoir simulator variables must be accessible from the script: this is currently only
possible in REVEAL and the Petroleum Experts implementation of the Eclipse link (i.e. the
standard driver that comes with the IPM installation).
In order to perform voidage replacement the following elements must be included in the system:
An example system, with a reservoir simulation model connected to a production and injection
system, is shown below:
The logic followed by the voidage replacement script at each timestep for such a system is as
follows:
The reservoir calculates the inflow performances for all the wells - producers and
injectors. These inflow performances are then passed to the production and injection
systems.
Before either network is solved the well data from the last simulator timestep is
analysed. The reservoir and surface volumes from the producing wells are calculated
to obtain an effective FVF for the producers. Similarly, the volumes for the injection
wells are used to find an effective FVF for the injectors.
At this time the total reservoir volume produced and the total reservoir volume injected
are obtained.
The production system is then solved. This gives the surface volumes
produced from the production system, and the FVFs calculated earlier are used to
calculate the required injection rate to meet the voidage requirement.
The injection rate just calculated is optionally corrected with the difference in the
cumulative reservoir volumes obtained in the third step. These differences arise
because the voidage is calculated explicitly, i.e. using FVF data from the previous
timestep.
The injection rate is now applied as a constraint on the injection system model, and the
injection system is then solved. Note that, as the injection is a constraint, the network will
not necessarily inject at the full required rate (e.g. the manifold pressure may not be high
enough). The script will output, at each timestep, the percentage of the voidage
replacement target that has been achieved.
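The core of this per-timestep logic can be sketched in a few lines. The following is a minimal Python illustration of the calculation, not the Visual Basic script the wizard actually generates; all function and variable names are hypothetical:

```python
def effective_fvf(reservoir_volume, surface_volume, fallback):
    """Effective FVF from the previous timestep's volumes; the user-supplied
    fallback value is used when no volumes are available yet (first timestep)."""
    return reservoir_volume / surface_volume if surface_volume > 0 else fallback

def voidage_injection_rate(surface_prod_rate, b_prod, b_inj, voidage_fraction=1.0):
    """Surface injection rate needed to replace the produced reservoir volume."""
    # Reservoir volume being removed, using the (lagged) production FVF
    reservoir_voidage = surface_prod_rate * b_prod * voidage_fraction
    # Surface injection rate that delivers that reservoir volume
    return reservoir_voidage / b_inj

# e.g. 10,000 STB/d produced with an effective Bo of 1.2 and Bw of 1.02
# requires roughly 11,765 STB/d of water injection for full replacement
rate = voidage_injection_rate(10_000.0, 1.2, 1.02)
```

Because the FVFs are taken from the previous timestep, the result is explicit; the optional cumulative correction described below compensates for the drift this causes.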
The voidage replacement script can be set up from the Wizard | Voidage Replacement -
generate a VB script section.
In order to setup a voidage replacement scheme using this wizard, the following procedure can
be used:
Step 1 Select the reservoir considered and its main producing phase in the left-hand
panel of the screen: this is a list of all the reservoirs in the system that support
scripted voidage replacement.
Note that it is not obligatory to set up voidage replacement for all the reservoirs: it
is possible to set it up for as many or as few as needed.
The reservoir phase will be used to determine whether to capture the gas or the
liquid rates in the surface network production system.
Additionally, fallback FVFs can be chosen: these are the default FVFs that will be
used by the script as the starting point of the calculation.
Step 2 Add a well group using the Add button on the top right hand side of the screen.
This well group will include all the wells (i.e. producers and injectors) to consider
for one specific voidage replacement scheme.
To add wells to this well group, click on the grey square next to the well names.
The selected wells will be highlighted in red.
In the case considered here, production for Well1 and Well1_GL will be
compensated by injection from Water_Inj1, Water_Inj2, Water_Inj3 and
Water_Inj4.
For the script to work it is essential that the production wells of each group are
coupled to a single production network and the injection wells of each group are
coupled to a single injection network. The Production Network and Injection Network
fields display the names of the production and injection networks in question.
If the wells selected are coupled to different production or injection networks, the
corresponding field will be blank. The field will also be blank if no selection
has been made. If the field is blank when the script is generated, an error will be
raised. Note that different voidage groups can be coupled to different
production and injection networks.
Step 3 Select the equipment to constrain in the network: this will specify which element of
the injection model is to have the voidage injection rate applied as a constraint.
It is necessary to choose a piece of equipment that will constrain all the injection
wells in the voidage group: in the example above all the injection wells are linked
to the same injection manifold, which connects to no other wells, and so the
constraint can safely be applied to this manifold. Other pieces of equipment that
could be used for this are injection manifolds, groups, or even single wells.
Step 4 Define the voidage fraction to be used: voidage fractions higher than 1 can be
entered, in which case the volume of fluid injected into the reservoir, measured at
reservoir conditions, will be higher than the volume of fluid produced from the
reservoir, measured at reservoir conditions.
Select the voidage method to be used: the normal method is to calculate the
injection rate required to replace the volume that has been produced. It is also
possible to calculate a production rate that will balance an injection rate, in which
case the logic of the script is reversed.
Step 5 Define the period of time over which the voidage scheme has to be applied
Once the voidage scheme has been set up, the following additional options are available:
Include correction on voidage injected / produced based on cumulative voidage: This
option keeps track of the voidage volume required through time. If it is selected, a
correction to the voidage injection will be performed based on the total reservoir volumes
that have been injected and produced up to this point. This may be necessary because
the FVF calculations are performed explicitly, i.e. using the data from the previous
timestep, which means that small deviations can build up between the total voidages
produced and injected; this option attempts to correct them.
Replace current script: This option specifies whether the Visual Basic script created will
replace any existing script, or whether the additional voidage-related sections will be
appended to the existing script.
Include logging statements in the script code: This option specifies whether script logging
statements will be included. These logging statements provide information such as the
voidage fraction achieved at each timestep.
Log warning if injection system does not achieve voidage amount: This option specifies
whether warning statements are displayed when the requested voidage fraction is not
achieved.
Enable error handling in script: This option generates a log message if a script error
occurs during the run.
Edit script on exit: If this option is selected, the voidage script created will be displayed
when the voidage setup screen is closed.
A detailed description of the script structure can be found in the "Voidage Replacement -
Script" section.
2.12.3.3.1.3 Voidage replacement - Script
In this section the script written by the voidage replacement wizard is described and analysed.
An understanding of the details of the script is not essential although it could be useful in
debugging or to tailor the script to specific needs.
Please see the "Scripting" section for more information on how the Visual Basic script is called
from RESOLVE.
The following voidage strategy is to be implemented: the entire production through wells
PROD1 and PROD2 has to be replaced by water injection in wells W-INJ and G-INJ.
The first section written by the voidage wizard is the "Declarations" section.
These are the variables that will be used by the voidage script calculation:
1. This sets up an "error handler" for the script routine. If an error occurs, execution will not
stop, but a warning message will be logged at the end of the routine (part 11).
2. The first part of the routine (parts 3 - 6) needs to be run before the production system is
solved. This "if" statement will only be "true" if the routine is being called before the
production system is solved (i.e. if ModuleList contains the string "produce", which is the
label of the production system).
3. This sets up "fallback" values for the formation volume factors. The FVFs are calculated
using data from the previous simulator timestep: if this is the first timestep the simulator
may have to rely on these default, user input, values.
4. These lines calculate the total downhole production from the group of wells and the total
surface production, and hence an effective production FVF. In this case the
reservoir fluids are oil and water, although FVFs with respect to gas production can also be
calculated.
5. This performs the same function for the injection wells. Here the injection fluid is water,
although gas is also allowed. An effective injection FVF is calculated and, for convenience,
the ratio between the production and injection FVFs.
6. This calculates the cumulative downhole production and injection, which can be used to
adjust the injection in the next timestep (part 9).
7. The next part of the routine is called after the production system is calculated and before the
injection system is called. This line ensures that parts 8 - 11 are only called after the
production system is solved.
8. This takes the surface rate from the production wells, as calculated by the GAP
production system, to be applied to the simulator for the next timestep. It then uses the
FVFs calculated above to determine the required total surface injection rate for the injection
wells.
9. This adjusts the injection calculated in part 8 to account for the current error between the
voidage produced and the voidage injected (the error arises from the fact that the FVFs
are being calculated on data from the previous timestep). Note that the dT here is the
previous timestep length and so this scheme may not work well in adaptive timestepping
models. Note also that this is optional, and is governed by the setting in part (7) of the
"Voidage Replacement Wizard" screen.
10. This takes the required surface injection rate calculated in parts 8 and 9 and applies it as a
constraint on the GAP injection model. The piece of equipment specified in the
OpenServer tag string must represent an item that can constrain all the injection wells in
the voidage group together. In this case "J3" is a manifold that is common to all the wells: it
could equally be an injection manifold, a group, or even a single well if there is only a single
injection well in the voidage group.
11. The error handler set in part 1 will set an error number and description if an error occurs in
the script code. At this point, any error is recorded to the RESOLVE log window.
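The cumulative correction described in part 9 amounts to spreading the accumulated voidage deficit over the previous timestep length. The sketch below illustrates the idea in Python (the actual generated script is Visual Basic; all names here are hypothetical):

```python
def corrected_injection_rate(base_rate, cum_res_produced, cum_res_injected,
                             b_inj, dt):
    """Adjust the explicitly calculated surface injection rate for the
    reservoir-volume error accumulated because the FVFs lag by one timestep.

    The deficit is spread over the previous timestep length dt and converted
    back to a surface rate with the effective injection FVF b_inj.
    """
    deficit = cum_res_produced - cum_res_injected   # reservoir-volume error
    return base_rate + (deficit / dt) / b_inj       # converted to surface rate
```

As noted in part 9, using the previous timestep length for dt means the correction may behave poorly under adaptive timestepping.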
This is called after the entire system has been solved and is used to check that the required
amount of voidage has actually been injected by the model.
If the injection manifold pressure is not high enough, or there are other constraints in the
injection model, it is possible that the surface network, modelled in GAP, will not be able to
inject all that it needs to in order to maintain voidage.
The script is fairly self-explanatory: the initial "if" statement ensures that this code is only called
after the injection system is solved. Following this, both the injection rate that was applied as a
constraint and the rate actually injected are obtained and then compared.
Any discrepancy is logged to the calculation screen.
This is a simple function that is called after the system has completed all the solves and before
a new timestep is taken.
It is used simply to record the date at this timestep to allow RESOLVE to calculate the timestep
length at the end of the timestep.
For instance, one can specify that a certain well has to be opened when the total system
production falls below the plateau rate.
This wizard does not write a Visual Basic script as the voidage replacement wizard does, but will
automatically edit the event driven scheduling section of RESOLVE.
The procedure to setup this wizard will be demonstrated using the following example:
The RESOLVE archive file for this example can be found at the following location:
Well1_GL is initially closed, and we want to know when it will need to come online to keep
the plateau production going for as long as possible.
Step 1 The drilling queue script can be set up from the Wizard | Drilling Queue -
populate event driven schedule section.
The following screen will then be displayed:
This screen enables the user to define a drilling queue label, and the GAP module to which
it is related.
Step 2 Select the system node that is going to be used as the constrained node: a
variable of this node will be monitored and the value of this variable could be used
to trigger one specific action.
In the case considered, this node will be the separator: the production at the separator
has to be monitored to trigger the opening of Well1_GL when it falls below a
certain level.
Select the system nodes that are going to be used as control nodes: the status of
these nodes during the forecast will be changed based on the value of the variable
monitored.
In the case considered, these nodes are the wells Well1 and Well1_GL.
Step 3 Select the type and value of the target variable considered: in this case, it is the
liquid rate at the separator level. It needs to be maintained at 28,500 STB/d.
A tolerance of 15 STB/d has been set to avoid the well being put online if the
solver comes back with a separator production of 28,499 STB/d for instance,
which might just be due to the solver tolerances in GAP and will not be
representative of a lower system potential.
Select the nodes that are initially active in the model, here for instance the well
Well1, as well as the number of nodes to be activated each time the target rate is
violated.
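The event logic set up by these steps can be summarised as follows. This is an illustrative Python sketch, not the event driven scheduling syntax RESOLVE itself uses; all names are hypothetical:

```python
def wells_to_open(monitored_rate, target_rate, tolerance, drilling_queue,
                  wells_per_event=1):
    """Return the next wells to bring online when the monitored rate falls
    below the target by more than the solver tolerance."""
    if monitored_rate < target_rate - tolerance:
        return drilling_queue[:wells_per_event]
    return []

# With the example values: 28,499 STB/d is within the 15 STB/d tolerance,
# so no well is opened; 28,400 STB/d triggers the opening of Well1_GL
no_action = wells_to_open(28_499.0, 28_500.0, 15.0, ["Well1_GL"])
opened = wells_to_open(28_400.0, 28_500.0, 15.0, ["Well1_GL"])
```

The tolerance term is what prevents spurious openings driven purely by the GAP solver tolerance rather than by a genuine drop in system potential.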
Once these steps have been fulfilled, the drilling queue will be set up and the event
driven scheduling section will be modified to accommodate the drilling queue
setup. A message to this effect will be shown; select OK.
Step 4 The model can then be run and the following results observed:
Some time after the initial system setup (i.e. with Well1_GL closed), the system
can no longer produce the target rate of 28,500 STB/d, and the Well1_GL well is
opened.
It can be seen in the Events section of the RESOLVE output log that:
Well1_GL is masked at the start of the prediction.
Well1_GL is opened on 01/04/2006, the first date at which the
production is lower than the prediction target rate.
This is a repository for wizards that relate to the use of the IPM-OS driver, for automating
batch OpenServer tasks.
There is currently only one wizard. This is for the batch generation of lift curves from a GAP
model.
The GAP model in question should be loaded into the field at the top of the screen.
Note: if lift curves are to be generated in parallel, across the nodes of a cluster, then
it is essential that the GAP model is on a network drive that can be seen from all nodes
(with the same path structure), and that the PROSPER files referred to in the GAP model
are similarly visible.
Step 1 A new RESOLVE case is created. The GAP model is loaded into RESOLVE (as a
module in the main screen) and the underlying PROSPER models are extracted
Step 2 The PROSPER models are used to generate scenarios under the RESOLVE
scenario manager. Each scenario represents the generation of a different
PROSPER lift curve
Step 3 The scenarios are run and the resulting TPD files are imported into GAP (which is
again loaded into RESOLVE). The scenarios may be run sequentially, although
this is not doing anything that could not be done equally easily from the GAP
application itself
The main objective of this calculation will be to check that the production rates
specified in the schedule section of the reservoir model can physically be produced
through the surface network in use.
This validation can be launched by using the "Play" button at the bottom left hand corner of the
screen, as illustrated below.
First of all, the Eclipse data file path can be used to point the RESOLVE model to use a different
data deck to the one specified in the model setup. This may be necessary as Eclipse decks are
often modified to remove the group controls that are to be tested with this tool.
During the calculation, RESOLVE will run the reservoir model based on the schedule defined in
the reservoir model itself: the Eclipse model will be run based on the SCHEDULE defined in the
Eclipse data deck.
At each timestep, the production rates produced from the reservoir based on the reservoir
model schedule will be passed directly to the surface network model in GAP. In addition, any
artificial lift quantities (ALQs) will be passed; the type of ALQ should be specified for each
Eclipse module in the table at the top of the screen. GAP will perform a solve network
calculation with no optimisation. The bottom-hole flowing pressure and tubing head pressure
calculated by GAP for each well will be stored in RESOLVE at every timestep.
These pressures can then be compared with the equivalent pressures (BHP and THP, where
present) in the Eclipse model.
In many cases, a comparison of tubing head pressures is most useful. The THP calculated by
GAP will be a function only of the network downstream of the well heads, and so will be
independent of the well lift curves. This comparison thus gives an immediate indication of
whether a well rate can be produced.
If there are no lift curves in the Eclipse model, then a comparison of BHPs can be made. The
GAP-calculated BHP obviously depends on both the downstream network and the well lift curves.
If the THP/BHP calculated by the GAP surface network model is lower than the THP/BHP
calculated by the reservoir model, then the production rates specified in the reservoir model can
be successfully produced through the surface network in use. In other words it would be
possible, in principle, to choke the well back in order to match the THP/BHP recorded by
Eclipse. Nothing is then reported in the above screen.
If the THP/BHP calculated by the surface network GAP model is higher than the THP/BHP
calculated by the reservoir model, then the drawdown required to produce the rate
calculated by the reservoir model cannot be achieved through the surface network, and the
reservoir model will therefore not be representative of the behaviour of the real system.
For each well, the dates at which this situation will be encountered will be reported in the
"Flagged wells" section of the "GAP - Eclipse validation wizard". Wells can be flagged on the
basis of THP, or BHP, or both.
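The flagging criterion reduces to a simple per-well, per-date comparison. The following is a hypothetical Python sketch of the rule, not RESOLVE code:

```python
def flagged_dates(gap_pressures, eclipse_pressures):
    """Dates on which the GAP-calculated pressure (THP or BHP) exceeds the
    Eclipse value: the scheduled rate cannot then be delivered through the
    surface network, since no amount of choking would recover it.

    Both inputs map a date string to a pressure for one well.
    """
    return sorted(date for date, p in gap_pressures.items()
                  if p > eclipse_pressures[date])

# e.g. the well is flagged on the first date only
flags = flagged_dates({"01/01/2000": 1500.0, "01/02/2000": 1200.0},
                      {"01/01/2000": 1400.0, "01/02/2000": 1300.0})
```

A GAP pressure at or below the Eclipse pressure passes (the well could in principle be choked back to match), which is why nothing is reported in that case.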
For instance, in the model illustrated above, the well prod2(PROD2) has a GAP BHP higher
than the Eclipse BHP on several dates in 2000. The THP is also reported, although the figure
calculated (-14.696) is indicative of an Eclipse model without lift curves. This will indicate to the
user that at these specific dates, the production schedule specified in the reservoir model for
that particular well is INCONSISTENT with the capacity / performance of the surface network
model.
All the results for Eclipse flowing pressures, GAP flowing pressures, and the differences
between the pressures, are stored in the main RESOLVE reporting data under a separate folder
(GAP-Ecl validation):
To use this wizard, a pre-requisite is to run the entire model at least once with the "IPR logging"
feature activated. This allows the production data for the wells to be extracted and also the IPR
information passed from the numerical simulator. To activate the IPR logging feature, select
"Run | IPR logging" from the main menu. Once this is activated, run the model for a full forecast
and save the file.
To create the decline curves, Select Wizards | Simulation -> Decline curves. The interface below
will come up. Select Extract data from model to extract the already prepared simulation results.
This step should read the number of wells from the model and obtain the production data
for the wells as well as the IPR data. The information can also be inspected by selecting
"Inspect" on the interface above.
Within the data section, any of the wells can be selected to view the production data as well as the
IPR data.
The next step is to give the run a label and Save run data as shown below.
The above steps capture the information within RESOLVE. The next step is to now populate the
model with the decline curve proxies. This is done within the "Run model" interface. Select the data
set of interest and then "Populate model". If the IPR regression indicates an error, please ensure that
IPR logging was turned on before the forecast was run initially.
When this operation is done, the GAP model will have tanks for the wells which have been
specified as "Decline curves", populated with the information. The well models will also have the
production data and IPR data defined for them. In RESOLVE, the numerical simulator will be
removed, as it is no longer required.
The model can then be run using the decline curves created.
2.12.3.3.6 GIRO Optimiser Performance
If the GIRO optimiser is being used for routing or integer problems, then this wizard can be used
to analyse its performance and understand the sensitivity of the model to the various input
parameters. This is a useful, one-off first step that can be carried out prior to an
optimisation study. More information on the GIRO optimiser can be found here.
Typically, it will take some time for the wizard to run and accumulate all the required data. This is
normally something that can be run overnight, or in some idle period.
Sensitivity variables: These are the inputs that parameterise the model, and are typically
the 'levers' which can potentially be used to tune the performance of the optimisation. The
values shown in the screen above are good defaults for most studies.
When the wizard is executed, RESOLVE will make runs with all combinations of these
values. In the above case, this equates to 2x1x1 = 2 separate runs.
Results: This section of the screen displays a scatter plot of the results of the individual
optimisations. The optimiser objective function is displayed on the left-hand axis; the
number of iterations to obtain that result is displayed across the horizontal axis.
If the mouse is held over a data point, a panel is displayed which gives the input
(sensitivity) parameters for the run in question along with the corresponding value of the
objective function.
To give an idea of the clustering of the points with respect to the different sensitivity
values, the points can be coloured according to the different values of the sensitivity
variables. This is achieved by changing the setting in the drop-down list at the bottom of
the screen. All points with the same value of the selected sensitivity variable will be drawn
in the same colour.
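The number of wizard runs is simply the product of the per-variable value counts, one run per combination. This can be sketched as follows (illustrative Python, with made-up values):

```python
from itertools import product

def sensitivity_runs(values_per_variable):
    """One run per combination of sensitivity-variable values, so the run
    count is the product of the per-variable value counts (e.g. 2x1x1 = 2)."""
    return list(product(*values_per_variable))

# Two values for the first variable, one each for the other two: 2 runs
runs = sensitivity_runs([[50, 100], [0.5], ["default"]])
```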
An OpenServer command or variable can be entered in the string section. This string must
correspond to a DoGet, a DoSet, or a DoCommand: the type should be selected above.
Selecting the Evaluate button will run the OpenServer command in question.
The left hand side of the screen includes a list of the OpenServer commands available in
RESOLVE along with their arguments.
An "=" sign represents a default argument: if the argument is not set explicitly in the command,
the value shown will be used.
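The "=" convention behaves like an ordinary default argument: omitting an "="-marked argument from the command is equivalent to passing the value shown. In Python terms (purely an analogy, not OpenServer syntax; the function and values below are made up):

```python
def do_command(name, argument="DEFAULT"):
    """Omitting 'argument' mirrors leaving out an '='-marked argument in an
    OpenServer command: the default value shown in the list is used."""
    return f"{name}({argument})"

first = do_command("RESOLVE.RUN")            # default argument applied
second = do_command("RESOLVE.RUN", "FAST")   # explicit value overrides it
```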
2.12.4 Options
The RESOLVE options section allows the setup of the global options of the RESOLVE model.
The menu is divided into the following input fields:
System Options: This is the section that defines the type of model and run to perform, as
well as the visualisation options (see the System Options topic).
Lumping / Delumping: This is the section that enables the user to pass from a black oil
model to a compositional model, or from a simple compositional model (e.g. one using
only 5 pseudo-components to describe the fluid, as required by reservoir simulation
models) to a complex compositional model (i.e. one with a higher number of components,
as required by process models), without losing any accuracy in the fluid description and
properties.
Standard: This set of colours uses a single colour for all the connections.
By Fluid: This set of colours allows the colour of each connection to match the type of fluid
passed through it: for instance, oil will be green, but as the water cut increases, more blue
will be mixed into the initial green colour of the connection.
Bitmaps: This section enables the user to import a bitmap file and replace any of the icons
of the different modules with a user-selected picture.
Connection Pop ups: When a run is performed (single solve or prediction), it is possible to
display the results for any connection between the modules (i.e. sink - source) on the
system main window. The results can be displayed as "Current data", "Chart" or "Pie".
Current data is applicable only to single solves, while the chart and pie chart options can
also display forecast runs. There is also an option to customise the data displayed for the
chart and pie chart options.
When Options | System Options is selected from the main menu, the following screen is
invoked. This screen allows certain properties of the system display and runtime mode to be
changed.
The model properties that can be modified from this screen are separated into two sections:
Run Properties will affect the way the calculations are run in the RESOLVE model
System View will affect the way the system is displayed on the main screen
Detailed descriptions of the options available for both these sections can be found below.
Run Properties
The following options are available:
However, in cases where the client modules do not need to be re-initialised, the reload procedure is simply time-consuming and can be skipped, so selecting the "Do not reload client modules" option can be recommended. If this option is used, the status of the models used at the start of the prediction will be exactly the same as the status of the models currently open on the machine
Parallelisation: This option enables the user to select whether the RESOLVE calculations are run sequentially or in parallel. By default, the calculations are run sequentially
Validation: This option enables the user to choose whether non-fatal validation errors are displayed prior to launching a calculation. By default, these warnings will be ignored. If the user chooses not to ignore non-fatal validation errors, RESOLVE will present a screen at run time that displays any non-fatal validation errors in the system. Such an error would be, for instance, a node that is not connected to another node: this is not going to stop the system from running, but could be a reminder that the system is incomplete
Debug logging: This option can be activated to create a log file with the data that is passed between applications. This is essentially used for debugging purposes
Reporting: From IPM #7 onwards, a new reporting structure is used that makes it possible to store the results of the variables selected by the user in each module separately from the connection results. Should the user wish to return to the old reporting system, this can be done by changing this option to "Use old reporting system"
Optimiser / target solver failure: If the solver calculation in any of the connected applications fails for some reason, this option allows the user to either carry on running the calculations or to stop the forecast
System View
The following options are available:
Title: This controls the display of a system title on the main screen. To give the system a title, enter the title in the text box supplied and select the "Display Title" tick box. The font for the title can be changed by pressing the Title Font button. The system title information is saved with the RESOLVE file
Label fonts: Press the Label Font button to change the font that is used to label the icons on the main RESOLVE screen
Background bitmap: A bitmap can be displayed behind the graphical view. Click on the Background Bitmap button to browse for a bitmap to display. The Clear button can be used to remove a bitmap
This is an essential capability for advanced integrated models as it keeps the fluid PVT
description consistent from one application to another, while respecting the ideal PVT
description for each application.
For more information on Lumping/Delumping, please refer to this section of the manual.
2.12.4.3 Process Independence in Resolve models
2.12.4.3.1 Introduction
The coupling between reservoir simulators and surface network applications (typically GAP), as
explained above, has historically worked through the exchange of surface volumetric rates. In other
words, the surface rate as calculated by GAP is passed to the simulator to act as a control
mode for the simulator wells over the coming timestep.
The advantage of this method is that it is simple to implement for the user: the volume rates are
standard outputs of any simulator and so coupled models should run without any modifications
to the data decks.
A drawback, however, is that volumetric rates are calculated with reference to an internal model
process or separator train. This can lead to discrepancies between what the network 'sees' as
a unit volume and what the simulator 'sees'.
A solution to this is to pass masses between the applications, as has always been the case
when passing data to process models. The unit of mass is clearly independent of any process.
PROSPER lift curves, which were previously set up with volumetric sensitivity variables, need to
be regenerated with equivalent mass-based sensitivity variables. The table below lists the
volumetric variables with their corresponding mass variables:
It is necessary to tell GAP that it will receive mass, rather than volumetric, IPR data. This is
achieved through the 'composition' tab of the data entry screen.
All the compositional simulator drivers that Petroleum Experts provide can supply mass-based
inflow data. In all cases other than Eclipse, this is done automatically with no intervention
required by the user.
In the case of Eclipse, it is not possible to calculate mass rates directly from the OpenEclipse
interface. Mole rates are available, so the EOS description is required to turn these mole rates,
through the molecular weights, into mass rates.
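The conversion itself is straightforward arithmetic. The following is an illustrative sketch only (not the RESOLVE or Eclipse implementation); the component names, mole rates and molecular weights are hypothetical:

```python
# Sketch: total mass rate from component mole rates and an EOS's
# molecular weights. Illustrative only -- not the actual RESOLVE code.

def mass_rate(mole_rates, molecular_weights):
    """Total mass rate = sum of (component mole rate * molecular weight)."""
    return sum(mole_rates[c] * molecular_weights[c] for c in mole_rates)

# Hypothetical three-component fluid: mole rates in lbmol/day,
# molecular weights in lb/lbmol, so the result is in lb/day.
moles = {"C1": 1000.0, "C3": 200.0, "C7+": 50.0}
mw = {"C1": 16.04, "C3": 44.10, "C7+": 110.0}

print(mass_rate(moles, mw))  # 1000*16.04 + 200*44.10 + 50*110.0
```

Note that this is also why including water in the Eclipse EOS description is problematic: any water component in the sum would be counted in the mass rate, breaking the hydrocarbon-only assumption described below.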
The EOS description is entered, as normal, in the form of a PVT include file into the Eclipse
data entry screen. In normal (i.e. volumetric) use, this is not an obligatory input. If it is not
present, then the downstream (network) model will not receive compositional data and the
composition will be fixed by that which is already resident in the GAP model.
If it is present then the EOS data will be passed downstream. If there is a substantial number of
wells, this can slow the connection down. If the EOS data needs to be supplied to provide
molecular weight data for the calculation of mass rates, then there are two possibilities:
1. Provide a cut-down PVT file which only contains the molecular weight data, or
2. Right-click on the Eclipse icon, and select the option to not pass EOS data.
When the mass rate for a well is passed to the simulator, the simulator needs to convert the
mass rate into a rate which can then be used to control the well for the duration of the
forthcoming timestep, as usual.
Generally, it is not possible to control simulator wells by mass rate: for individual well control,
volumetric rates are needed. These are obtained by extracting the equivalent volume rate from
the original well IPR for the mass rate which was returned from GAP.
It is assumed that the mass rates computed in the IPR are hydrocarbon mass rates only. The
mass rate that will come back from GAP is always hydrocarbon only. This means that if the
simulator IPR computes a mass rate which includes water, then mass-balance discrepancies
will occur.
All reservoir simulators that Petroleum Experts support, with the exception of Eclipse, will
always generate hydrocarbon only mass rates. Eclipse is different because the mass rates are
calculated by multiplying the component mole rates by their molecular weights. If the EOS
description of the simulator includes water, then this will be implicitly included in the mass
calculation.
This means that it is important not to include water as part of the Eclipse EOS description if a
process-independent model is to be set up.
The commands available from the "Edit System" menu are used to create and manipulate
icons on the main screen.
Most of these commands are also available when Right-Clicking anywhere in the graphical view
window.
Add Client Program: This will invoke a menu with a list of the application models available to add into RESOLVE. The list is taken from the list of "Registered Drivers". To create an instance, click on the application to load, then click in the graphical view window at the location where the icon is to be displayed
Add Data: This will invoke a menu with a list of all the current data objects available for use in RESOLVE. To create an instance, click on the application to load, then click in the graphical view window at the location where the icon is to be displayed
Link: Enters "Link" mode for linking corresponding sources and sinks together. Once in link mode, point the mouse cursor at the first element to connect, press the left mouse button and drag the mouse cursor to the second element to connect. Release the left mouse button: a connection will have been established between the two elements, displayed as a dashed line
Target: Enters "Target" mode. Once in this mode, it is possible to establish a direct connection between two modules that will act as a target solver. Further information on this type of connection can be found in the "Target Connections" section
Select: Enters "Selection" mode for selecting icons for later manipulation (e.g. moving). Selection can be done per icon (i.e. by clicking on the icon) or by dragging a rectangle, which will toggle the selection state of all the icons within the rectangle. Selected icons are marked with a dark blue circle
Move: Enters "Move" mode for moving icons. Icons can be moved individually by clicking on an icon and dragging with the mouse, or collectively by clicking near a group of selected icons and dragging to the new location
Delete: Enters "Delete" mode for deleting client modules. Source and sink icons cannot be deleted individually as these are properties of the client application cases. Client modules can be deleted in this mode by clicking on the main icons. Connections between sources and sinks can also be deleted in this mode
Mask: Enters "Mask" mode for masking connections. When a connection is masked, it is greyed out on the screen and zero rates will be passed when the model is run
Unmask: Enters "Unmask" mode for unmasking connections (see above)
Disable: Enters "Disable" mode for disabling connections or modules. When applied to a connection, this has the same effect as masking, except that it cannot be changed or scheduled once a run has started. When applied to a module, all connections to and from the module are automatically disabled. In addition, the module itself will not be controlled in any way: no initialisation or timestepping in the module will be performed. This means that runs can be performed where modules are switched in and out
The Connection Wizard may be used to generate connections between nodes in the
RESOLVE system.
This can also be achieved graphically, but doing so can be quite arduous when generating
connections over a large system.
The following options are available within the connection wizard screen:
Module lists: The drop-down list boxes at the top of the screen contain all the client modules defined in the RESOLVE system. Select the two modules for which connections are to be made from the lists on the left and right of the screen. The sources / sinks that correspond to the selected module are then listed in the list boxes below
Sorting options: The lists of nodes can be manipulated in various ways. Options to modify the ordering of the nodes can be found below the list box. The following actions can be performed to modify the organisation of the list of nodes:
Nodes can be selected in the lists and removed by clicking the Remove Selected button.
The Reset button will display all nodes sorted alphabetically: this is the default sorting process.
The sorting can be reversed by clicking <Reverse Sort>
Filter - Display: The checkboxes available in this section enable the user to specify which items appear in the list of nodes. By default, data providers and data acceptors (i.e. all items) are listed. Click on the relevant checkboxes to apply a filter to the list, and then click Apply
Add Individual Connection: Connections can be made by highlighting the individual sources and sinks in the lists and clicking Add Individual Connection. If the lists have been sorted and filtered to align the nodes that are to be connected, then Add Connection by List can be used to form automatic connections between the node lists. The resulting connections are displayed in the list box at the bottom of the screen
Add connections by name: This will automatically connect like-named items from the models displayed on either side of the screen. The match criterion can be case sensitive or insensitive
This target solver makes it possible to modify the system using a control variable specified by
the user, so that the target variable specified by the user becomes equal to a certain fixed
value or expression.
For instance, a target connection can be established between a GAP and a Hysys model so
that the GAP separator pressure is adjusted to be consistent with the Hysys inlet pressure.
Once this has been done, establish a direct link between GAP and HYSYS
modules as illustrated below.
Target Section
The target section enables the user to specify a label for the target connection.
From the Main Menu select: Edit System | Set System State to access the Select State
dialogue (below) where previous optimisation results can be recalled and set in the underlying
models which were previously saved from the optimisation results screen. Note that this can
also be performed dynamically (i.e. during a run) from the Visual Basic script or event driven
scheduling.
In the following example there are two saved states labelled 'iter_4' and 'iter_6' that can be
passed to the underlying models.
A summary of the state to be recalled is given in the grid for the state selected in the drop down
list box. When the 'OK' button is pressed the variables indicated will be applied to the underlying
applications.
Note that certain variables may be optimisation variables for a child application. An example
would be a wellhead choke in a GAP model. There is no guarantee that the value of such a
variable will be retained by the child application in question: depending on other RESOLVE
settings (for example, whether it is allowing the underlying applications to optimise), GAP may
simply overwrite the choke setting with a calculated value as soon as the model is run.
2.12.6 Variables
An important part of the design of calculations within Resolve is the use of variables to control
and drive the calculation logic. This menu item contains the options for importing and creating
variables, which can then be used to create events and actions.
Optimisation / imported variables: This option allows variables to be copied between optimisation and imported variable sets. For example, a GAP model may have a control variable set up, such as separator pressure, which the user would like to use as a simple reporting (plotting) variable. More information can be found in the "variable transfer" section
User defined variables: Allows a variable to be set up which is not bound to any external variable, but which can be used to store data or as part of an initial state
User defined arrays: Arrays of variables can be defined for ease of use in a visual workflow
2.12.6.1 Import Application Variables
Several RESOLVE options such as specific connections between modules (i.e. Target
Connections, Direct Connections between Instances) or advanced scheduling (i.e. Event Driven
Scheduling, Scenario Management) require some of the variables of the applications
connected to the RESOLVE model to be published in RESOLVE.
This will enable RESOLVE to access these variables, either to monitor the value associated
with them or to change the value of the variable itself.
In order to publish connected application variables, the following procedure can be followed:
The screen has a tabbed sub-section for every application in the system that supports the
"publishing" of its variables.
In this case, the GAP production and injection system both allow the exporting of their internal
variables. So does the Excel spreadsheet and reservoir model.
Any variables that are exported from an application will appear in the grid below the tab section.
Name: This is the name that will be used to refer to the variable in subsequent operations. This is normally set up by the user when the variable from the application is published
Writable?: Variables can be read-only or writable. An example of a read-only variable might be the result of a calculation in an application. A writable variable would normally be an input to an application (e.g. a pipe diameter)
Unit: This is the unit for the variable, if it has one. Once the variable is set up, the unit cannot be changed
Add to Plot: This makes it possible to monitor the variable throughout the production forecast and add it to the results as a variable that can be plotted
Edit Variables: The variables are not actually set up in the above screen, but in the screen that is invoked from this button. The screen displayed is implemented in the driver and thus depends on the application in question
When the OK button of this screen is pressed, these variables will be copied into RESOLVE.
The variable publishing screen will then have an appearance similar to the following and will list
for each application all the variables that have been published.
This screen allows variables to be copied between optimisation sets and exported sets.
Optimisation variables are those which are defined under the Optimisation | Setup screen, and
may represent an objective function, a constraint, or a continuous control variable (integer
control variables are not covered in this functionality).
'Exported' variables are set up from the Edit System | Import application variables screen and
are used in the control of the RESOLVE model (through the event driven scheduling or scenario
management) or for reporting.
In some circumstances it may be useful to be able to copy from one set to another. This is
carried out in this screen.
The module under consideration should be selected from the top of the screen. Only those
modules which support this functionality are displayed. After this, the screen is in two parts:
Note that some variables cannot be made into optimisation variables, e.g. non-continuous variables such as equipment mask states
Optimisation to exported: Allows the copying of optimisation variables into the general exported set. For the module in question, the objective function, constraints, and control variables are displayed in the left-hand list. The required variable should be highlighted and the arrow pressed to make the copy. Multiple variables can be copied by selecting the parent item
User-defined variables are variables which are not bound to a particular variable in an external
source (e.g. external client application, data object). These variables simply hold data, and as
such can be used across different workflows to drive the calculation logic, or as the means to
set an initial system state.
The name of the variable that is to be created is typed into the first column. A unit, if required,
can be supplied in the second column. Finally, the variable can be added as a reporting
variable by checking the box under the 'add to plot' column.
well1_mask
well2_mask
well3_mask
...
welln_mask
which represent the mask states of a group of wells can be turned into an array of dimension 'n':
well_mask[0,1,2...,n-1]
This new array variable can then be used conveniently in a visual workflow with an index to
represent the well number:
The variable also exposes a property called 'Count', which returns the number of elements in
the array (n):
The variables that are available for grouping into an array are listed on the left-hand side. They
can be filtered by applying a wildcard filter (at the bottom of the list) to display only those
variable names which contain a certain combination of letters, e.g. *_mask. The list can also be
ordered by highlighting the required items and clicking the up and down arrows to the side of
the list.
To create an array, click the 'create' button and give the new variable a name ('Mask' in the
above example). The variable will appear in the right hand list. Highlight the new variable, and
those variables on the left (in the required order) that will comprise the array, and then click the
right arrow button. The array will be populated, as shown.
Entire arrays or elements of arrays can be removed by clicking the 'delete' button.
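The behaviour described above can be pictured with a small sketch. This is illustrative only (not RESOLVE's internal representation), and the variable names and values are hypothetical:

```python
# Sketch: filtering published variables with a wildcard and grouping
# them into an array variable with a 'Count'. Illustrative only.
import fnmatch

# Hypothetical flat set of published variables:
variables = {"well1_mask": 1, "well2_mask": 0, "well3_mask": 1,
             "comp1_power": 5.0}

# Wildcard filter, as in the '*_mask' example above:
mask_names = sorted(fnmatch.filter(variables, "*_mask"))

# Group the filtered variables, in order, into an array variable;
# index 0 corresponds to well 1, index 1 to well 2, and so on:
well_mask = [variables[name] for name in mask_names]

print(well_mask)       # the array contents, in the selected order
print(len(well_mask))  # equivalent of the array's 'Count' property
```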
2.12.7 Events/Actions
The set of options underneath the Events/Actions menu is used to define events and
corresponding actions in the Resolve calculation logic. Before events and actions can be set
up, it is necessary to define variables on which operations can be performed. These variables
then become the building blocks of the higher level logic used by Resolve.
There are currently two ways by which events and actions can be defined: the building of visual
workflows and event driven schedules.
The initial state of a Resolve system is defined by the settings of a selection of user defined
variables. The screen has the following appearance:
The rows in the left-hand column have drop down lists which contain the user-defined variables
set up previously. A variable can be selected and assigned a value for its initial state, as shown.
This then becomes useful when defining visual workflows or event driven schedules that are
executed at the start of the Resolve run. Instead of 'hardcoding' a value for a client variable
(such as a variable from GAP), it can be assigned to an initial state variable. Different scenarios
can then be run just by changing the initial state, e.g.
Illustration of how variables can be initialised with initial state variables (assignment visual
workflow element)
In the example above, the OGIP from GAP is set to the value of the state variable OGIP, and the
target constraint in GAP is set to the value of the TargetRate variable.
This has clear benefits when defining multiple scenarios from a single base scenario. This can
also be automated by way of the 'Scenarios | Sensitise on inputs' menu option.
The event management system in RESOLVE is used to implement relatively complex schedules
in a transparent, user-friendly manner.
These relatively complex schedules can for instance contain conditional events based on a
IF...THEN... structure.
Prior to IPM #5 it was necessary to write Visual Basic scripts to handle such situations. For
some very complex logic this is still sometimes necessary, but to handle many of these
situations in a more user-friendly manner an event management system has been implemented.
The variables that are required for the schedule have first to be published, or exported,
from the client applications.
This might include a GOR from a well in GAP, a compressor power from a plant model
(Hysys / UniSim Design), or a value from an Excel spreadsheet.
See the "Publish Application Variables" section to obtain further information on how this
can be achieved.
The variables published are then used to set up either an event driven schedule or a
visual workflow to drive the model when it is run.
Variables have to be published from client applications prior to using this event driven
scheduling section.
This procedure is described in the "Publish Application Variables" section.
In simple terms, the screen allows many conditions to be set up (which can be aggregated
together with AND or OR statements to form a single condition), such that when a condition is
"triggered", one or several actions will be performed.
A condition is a statement of the form: "IF <variable> <condition> <value>"
The condition can be checked by RESOLVE prior to solving the system (i.e. Pre-solve), after
solving the system (i.e. Post-solve), or at the start of the run (i.e. Start).
This can be specified by using the schedule section at the top of the screen.
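The logic of an aggregated condition can be sketched as follows. This is an illustrative model only, not RESOLVE's scheduling engine, and the variable names are hypothetical:

```python
# Sketch: evaluating "IF <variable> <condition> <value>" clauses and
# aggregating them with AND or OR. Illustrative only.
import operator

# The four condition operators available in the screen:
OPS = {"<": operator.lt, ">": operator.gt,
       "=": operator.eq, "<>": operator.ne}

def check(variables, clauses, combine="AND"):
    """Evaluate clauses of the form (lhs_name, op, rhs_value) and
    aggregate them into a single condition."""
    results = [OPS[op](variables[lhs], rhs) for lhs, op, rhs in clauses]
    return all(results) if combine == "AND" else any(results)

# IF Well2:GOR > 1000 AND TimestepCount <> 0
vars_now = {"Well2:GOR": 1200.0, "TimestepCount": 5}
print(check(vars_now, [("Well2:GOR", ">", 1000.0),
                       ("TimestepCount", "<>", 0)]))  # True
```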
When setting up a condition, it is best to work from left to right across the screen.
Condition 1, LHS: This is the left hand side of the first condition (i.e. the <variable> section). When one of these cells is selected, a drop-down list appears with all the variables that can be used as part of a condition. This includes those variables that were published in the previous step, as well as permanent variables which are present in any RESOLVE forecast, such as the timestep count. The permanent variables are described below. When a variable is selected, the unit of the variable will appear in the "Unit" column
Condition 1, RHS: This is the right hand side of the first condition (i.e. the <value> section). When one of these cells is selected, a drop-down list appears containing all the variables with the same unit as the variable selected for the LHS. One of these variables can be selected to form the RHS. Alternatively, simple arithmetic expressions can be defined. For example, any of the following are allowed entries in the right hand side, assuming that the LHS has a variable with units of GOR:
Well2:GOR
Well2:GOR + 200
Well2:GOR / 2
Well2:GOR + Well3:GOR
It is important to note that when defining the RHS with simple arithmetic
expressions, the RHS should NOT be enclosed in parentheses. If
parentheses are used, a validation flag will be raised when
exiting the screen.
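How such an RHS entry might be resolved can be sketched as follows. This is illustrative only (not RESOLVE's actual expression parser), and the variable names are hypothetical:

```python
# Sketch: substitute published variable values into a simple RHS
# arithmetic expression, rejecting parentheses as the screen does.
# Illustrative only -- a real parser would need more care (e.g. with
# variable names that overlap).
import re

def eval_rhs(expr, variables):
    if "(" in expr or ")" in expr:
        raise ValueError("parentheses are not allowed in the RHS")
    # Replace each known variable name with its numeric value.
    for name, value in variables.items():
        expr = expr.replace(name, repr(value))
    # Only digits, operators and whitespace may remain.
    if not re.fullmatch(r"[\d.+\-*/eE\s]+", expr):
        raise ValueError("unknown token in RHS: " + expr)
    return eval(expr)

v = {"Well2:GOR": 900.0, "Well3:GOR": 300.0}
print(eval_rhs("Well2:GOR + 200", v))        # 1100.0
print(eval_rhs("Well2:GOR + Well3:GOR", v))  # 1200.0
```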
Condition 1, <condition>: Four condition operators are implemented: less than (<), greater than (>), equal to (=), and not equal to (<>)
Additional conditions: Having set up a single condition, it is possible to AND or OR additional conditions to form an aggregated, single condition. If AND or OR is selected here, the second condition rows will be enabled
Action: The Action button opens the "Action screen", in which the action to perform when the condition is triggered is defined. The button will be highlighted in green if an action has already been defined
Times to execute: Normally a condition will be triggered once and then discarded, not to be executed again. This is the default case, as shown. If required, this can be changed so that a condition can trigger several times. This implements logic such as: every time the production falls below a threshold (up to 10 times, say), perform an action (e.g. bring on a well). An example of this is given in the "Event Driven Scheduling - Example" section
This will close well A when the GOR reaches the limit of 1000. Well B will be
closed when the time is exactly 40 days after well A was closed.
The Time variables Tn (for both Pre-solve and Post-solve) are both initialised
at a value of 0.
To determine in the logic whether a condition has taken place, use the "Cond"
variables described below. If a condition has triggered more than once, the
variable will be set to the time that it was last triggered.
Cond1 -> Cond200 (for both Pre- and Post-solve): These are integer variables that specify how many times a condition has been triggered
Examples
The following example entry forces an event to take place only after the previous condition has
been triggered a specific number of times:
This would be set up as follows:
Condition 1: if GOR > 1000 then close well A
Condition 2: if GOR > 1500 and Cond1 = 1 then close well B
This will close well B when the GOR is higher than 1500 and the first condition has been
triggered (i.e. well A has been closed).
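The chained logic above can be sketched as follows. This is an illustrative pseudo-schedule only (not RESOLVE's engine), and the GOR values are hypothetical:

```python
# Sketch: Cond1 counts triggers of the first condition, and the second
# condition also requires Cond1 = 1. Illustrative only.

def run_schedule(gor_by_step):
    cond = {1: 0, 2: 0}  # trigger counters (Cond1, Cond2)
    actions = []
    for step, gor in enumerate(gor_by_step):
        if gor > 1000 and cond[1] == 0:  # condition 1 triggers once
            cond[1] += 1
            actions.append((step, "close well A"))
        if gor > 1500 and cond[1] == 1 and cond[2] == 0:
            cond[2] += 1
            actions.append((step, "close well B"))
    return actions

print(run_schedule([800, 1100, 1600]))
# [(1, 'close well A'), (2, 'close well B')]
```

Well B is only closed once the GOR exceeds 1500 and the Cond1 counter shows that the first condition has already been triggered.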
The event action is the action that is performed when a condition has been triggered.
When the Action button for a condition is pressed, the following screen is presented:
Change variables: This section enables the user to modify the values of variables, as long as they are writable. In the above example, the first action specifies that the mask state of Well1 should be changed to "1", i.e. the well is to be closed (masked). Note that expressions can be entered here: the second action asks that the gas rate constraint for well 3 be set to the gas rate constraint for well 4, plus 10 MMscf/day
Open/shut connections: Here, individual connections can be closed or opened. The two client models are selected from the drop-down lists in the first two columns, followed by the required connection in the third column. Finally, the status of the connection (i.e. closed / open) has to be selected. In the case of a reservoir simulator - surface network coupled model, closing a connection is equivalent to shutting a well
The following additional options are available at the bottom of the screen:
Re-take pass through system...: If this option is checked, RESOLVE will iterate at the current timestep after performing the actions set up in this screen, rather than proceeding to the next timestep immediately. This has the advantage of avoiding a "hole" in the plateau rate, for instance when specifying that a well has to be opened when production falls below plateau: if this option is selected, as soon as the production falls below plateau, the well will be opened and the timestep at which the drop in production was first observed will be re-run, with the new well activated
...and force a RESOLVE optimisation: If this is checked as well as the above option, then a full RESOLVE optimisation will be performed with the new settings
Set system state: If a system state has been saved (e.g. from the optimisation results screen), then the drop-down list will be populated with the available states. If a selection is made, then when the action is performed the state will be applied to the system. This might be, for example, the results of a previously performed routing optimisation. Note that, as the screen says, the change is made before the other changes on the action screen are applied. This could mean that variables relating to the set state will be overwritten by other variable changes
Rank actions: This allows a ranking scheme to be specified for the actions set up on this screen. See "Ranking of event actions" for more information
This screen allows actions to be performed conditionally by ranking them according to an additional
variable.
The actions should already have been set up in the "Event actions" screen, and the variables
have all been published as described in the "Publish Application Variables" section.
The screen is invoked from the "Event actions" screen and has the following appearance:
The list at the top of the screen is a list of those actions that were defined in the previous screen.
From this list, select those actions that are to be performed based on a ranking, then click on
the Add button.
The actions to be ranked will appear in the lower section of the screen.
Those actions that are not selected for ranking will not then be conditional, i.e. they will always
be performed.
After the actions have been added to the lower list, for each action one must select a ranking
variable from the drop down list of the second column.
In the above example, the closure of each well is ranked according to the GOR of the well.
No check is made that the ranking variables are of a consistent type (e.g. whether quantities are
represented in different units), so care should be taken when setting this up.
By default, the action with the highest value of the ranking variable will be executed. The
order to execute actions option can be used to invert this. In the case considered, if the "Highest
First" option is selected, the well with the highest GOR of the two wells at the time the condition
is triggered will be the first of the two wells to be closed.
If the "Lowest First" option is selected, the well with the lowest GOR of the two wells at the time
the condition is triggered will be the first of the two wells to be closed.
It is possible to make more than one action execute by changing the Count setting: if the Count
setting is set to 2, then the first two ranked actions will be executed as soon as the condition is
triggered.
This example will use the event driven scheduling tool described in the previous section.
We will use the system illustrated above and perform the following actions:
When the total oil rate falls below 200,000 STB/d, close all the production wells
35 days after this, open all the wells again
At least 40 days after this, and if the overall GOR exceeds 1,500 scf/STB, close
the well with the worst (highest) GOR
At a given date, open up all the wells again
30 days later, close all the connections, starting with the highest GOR well and
finishing with the lowest.
For this example, the event driven scheduling screen will have the following appearance:
Action 3 - At least 40 days after the second action if the overall GOR > 1500 scf/STB
Note here that the wells are closed by closing the connections in RESOLVE. This is equivalent to
masking the wells in GAP as was done in the previous actions.
After the run is complete, the events tab of the log window has the following appearance, showing the sequence of events that was performed during the RESOLVE forecast:
2.12.7.2.3 VB Script
RESOLVE implements a scripting control that allows, for instance, complex event-driven schedules that cannot be set up using the built-in scheduling options to be performed during a RESOLVE run.
To use this facility, a basic knowledge of Visual Basic and programming is required.
Examples of how scripting can be used are included in the sample files.
To create or edit a script, go to Events/Actions| VB script | Edit from the main menu.
Several functions (i.e. "entry points") are called by RESOLVE at predefined parts of the simulation.
Use the drop down lists at the top of the screen to select the function to implement.
Declarations: This is not a function that is called; this section allows variables to be set up and instantiated with initial values.
PreSolve: This is called by RESOLVE just before the solve command is sent to a group of connected modules, after the data has been passed between the applications. The argument to the function (ModuleList) is a string that is a delimited list of the modules that form the group, the delimiter being a tilde "~" character. The argument can be used to check exactly which part of the RESOLVE system is currently being solved.
PostSolve: This is also called with a ModuleList argument. It is called after the Solve command and after the data has been passed back to the receiving module.
Start: Called at the initialisation of the simulation.
Finish: Called at the end of the simulation, to allow post-processing, tidying of data, etc.
StartofTimestep: Called at the beginning of every timestep.
EndofTimestep: Called at the end of every timestep.
GetNextTimestepLength(CurrentTime, LastTimestep, ProposedTimestep, MaxTimestep): Routine that enables the length of the next timestep to be obtained when running the forecast using the adaptive timestep mode.
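As an illustration only, a minimal script skeleton implementing some of these entry points might look as follows. The "Resolve_" prefix follows the Resolve_PreSolve example shown later in this section; the exact function names and signatures are assumptions here and should be checked against the sample files.

```
' Declarations section: module-level variables with initial values
Dim StepCount
StepCount = 0

' Entry point names below follow the Resolve_PreSolve naming pattern (assumed)
Sub Resolve_Start()
    LogMsg "Forecast starting"
End Sub

Sub Resolve_StartofTimestep()
    StepCount = StepCount + 1
    LogMsg "Starting timestep " & StepCount
End Sub

Function Resolve_GetNextTimestepLength(CurrentTime, LastTimestep, ProposedTimestep, MaxTimestep)
    ' Accept the proposed length, but never exceed the maximum
    If ProposedTimestep > MaxTimestep Then
        Resolve_GetNextTimestepLength = MaxTimestep
    Else
        Resolve_GetNextTimestepLength = ProposedTimestep
    End If
End Function
```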
RESOLVE properties
The following list describes the different RESOLVE properties that can be queried from the script:
RESOLVE functions
The following list describes the different RESOLVE functions that can be called from the script:
LogMsg(string) Outputs a message to the RESOLVE log window during the run
Save() Saves the current RESOLVE file
SaveAs(filename) Saves the current RESOLVE file as filename
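For example, a Finish handler could log a message and save the file under a new name. This is a hedged sketch: the "Resolve_Finish" name follows the Resolve_PreSolve naming pattern, and the file path and extension are hypothetical.

```
Sub Resolve_Finish()
    LogMsg "Forecast complete - saving results"
    SaveAs "C:\Forecasts\MyRun_final.rsl"   ' hypothetical path and extension
End Sub
```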
The next three functions may or may not be implemented for a particular link (it is up to the
The next functions are used in conjunction with the above to perform error checking.
Debugging
If a debugger is available, then breakpoints can be added and the code stepped through.
The exact functionality depends on the nature of the debugger that is installed.
2.12.7.2.3.2 "Script" Section
The script menu gives access to the scripting facilities in RESOLVE. These allow, for instance, variables to be queried dynamically and changes to be performed on the system as the run proceeds (for example, a well in a simulator can be closed when the water saturation in nearby grid blocks exceeds a limit).
Edit: This invokes the script editor. This is described in more detail under the "Scripting: an introduction" section.
Enable Script: This allows the user to toggle the scripting facility on or off so that runs with or without the script can be compared.
Execute function: This enables the script functionality to be tested without taking the time to make a full run. This menu item produces a drop down menu consisting of the four functions that can be implemented in the "Script" (PreSolve, PostSolve, Start, and Finish). Select the function that is to be executed. If a debugger is installed, it will be possible to debug into the function, or alternatively one can pop up message boxes to track the progress of the function call.
If PreSolve or PostSolve are called, then an argument specifying the modules about to be (or
just) solved must be passed. They are passed in the form of a string delimited by tilde("~")
characters.
For example, in the system below, there are three possibilities for the string that is passed:
"Model A", "Model B~Model C~Model D" or "Model E".
If RESOLVE detects that there is more than one possibility, it will display a screen with a drop
down list of the strings. If there is only one possibility, the function will be called with this string.
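Since the argument is tilde-delimited, a script can split it with VBScript's standard Split function to inspect which modules are in the group. A minimal sketch, using the documented LogMsg function:

```
Sub Resolve_PreSolve(ModuleList)
    Dim Modules, i
    Modules = Split(ModuleList, "~")   ' e.g. "Model B~Model C~Model D" gives 3 entries
    For i = 0 To UBound(Modules)
        LogMsg "About to solve: " & Modules(i)
    Next
End Sub
```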
Import / Export: These options allow a script to be exported to an external file that can later be imported into a different RESOLVE case. This is useful for swapping scripts between different files.
Porting IPM 4 > IPM 5: This section leads to a description of the potential modifications that have to be made to a script that was created with IPM 4 to use it in subsequent versions.
There are a couple of small differences between the way the scripting is implemented in IPM #5 and subsequent releases compared with IPM #4, which may become important when running files that were developed and saved in IPM #4.
It is not anticipated that the situation will change again in future releases.
The functions "PreSolve" and "PostSolve" are called with an argument list (ModuleList in the
example below):
Sub Resolve_PreSolve(ModuleList)
    If InStr(ModuleList, "Model A") > 0 Then
        ' Logic that should only run when Model A is being solved
    End If
End Sub
This module list is a tilde ("~") delimited list of modules that are currently being solved for, and
so many scripts that implement PreSolve or PostSolve have a protection at the start (the "if"
statement in the example above) to prevent the logic from being called at the wrong point in the
timeline.
Consider the following system, in which a model "Model B" is feeding a model "Model A".
Model B could be a reservoir simulator, Model A could be a surface network GAP model.
In IPM #4, PreSolve and PostSolve would be called once only with the argument "Model
A~Model B" as the solve was carried out per connection.
In IPM #5, PreSolve and PostSolve are called twice. The first time the argument will be "Model
B", the second time it will be "Model A".
The arguments reflect the calculation order as written to the calculation screen when the model is run. If, therefore, the user wants to apply some logic to Model A in the PreSolve function, the user will have to protect against the function being called for Model B using the "instr" function, as shown in the example above.
In IPM #4, PreSolve and PostSolve would be called once with the string: "Model A~Model
B~Model C".
In IPM #5, PreSolve and PostSolve are called twice. The first time the argument will be "Model B~Model C". The second time the argument will be "Model A". As before, the user will have to guard against the logic of the script being executed twice if this is not desired.
Note that the calculation order (Run | Calculation order from the main RESOLVE menu) can
affect the number of times that Pre/PostSolve are called.
2.12.8 Schedule
2.12.8.1 Schedule Setup Workflow
In cases such as these, it is very common to require scheduling of the run to be performed.
This scheduling can vary considerably in its complexity, from simply requiring a simulator well to
be opened at a certain date to the implementation of drilling queues and conditional actions.
Step 1: The basic schedule is entered from the Schedule | Forecast Data menu item. Here the start and end dates of the run are entered, as well as the timestepping mode; this could be a simple fixed timestepping mode or an adaptive mode. Refer to the "Timestep Control Setup" section for further information.
If the start date entered is before the start date of any of the client applications,
these applications will be started during the run as part of the RESOLVE
schedule.
For example, if the RESOLVE forecast starts on 01/01/2000 and a reservoir
simulator is not due to start until 01/01/2001, then RESOLVE will automatically
close all the simulator wells for the first year of the forecast
Step 2: The schedule control type has to be selected; three possibilities to control the schedule are offered:
In a RESOLVE system, RESOLVE is the master controller and all the attached client modules
are slaves of the RESOLVE process.
As such, RESOLVE coordinates the timesteps of the time-dependent applications and ensures
that the applications are appropriately synchronised.
Refer to the "How does RESOLVE work" section for more information.
The RESOLVE timestep lengths (i.e. the times between synchronisation of the modules) are set
from the following screen:
Global Options: This section enables a start date for the RESOLVE run to be entered. This start date can be selected from the client modules (i.e. if they are time dependent) by clicking the Select from client modules button. This brings up a list of client modules with their respective start dates, as illustrated below. There is also the option to offset an entire schedule list by clicking the Offset schedule list button and entering a new start date: all the schedule records in the list will then be offset appropriately.
Schedules: On the left hand side is a list of schedules that run concurrently when a prediction is performed. The buttons at the bottom of the list allow schedule records to be added, removed and inserted, or the entire list can be cleared. When a schedule in the list is highlighted, that schedule's timing and duration are displayed on the right hand side of the screen.
Timestep setup: This section enables the type of timestep used during the prediction run to be selected, with the following options being available:
Timestep mode: This can be fixed, in which case the timesteps are of fixed size and do not change with the prediction, or adaptive, in which case timesteps will vary depending on the performance of the system.
Initial timestep: For the adaptive timestep mode, this is the length of the initial timestep. This should be set to a fairly small value to allow RESOLVE to increase the timestep length if it is appropriate to do so.
Individual Applications' Optimisations: For bi-directional links (e.g. the links between simulators and GAP where GAP is returning operating data to the simulator) the option exists to make GAP optimise at every timestep, or at a fixed interval, or not at all.
Debug Information: If an adaptive timestep mode is selected, then debug messages will be written to the log window during a prediction run. These messages will display how the timesteps have been calculated (see the "Adaptive timestep" section for additional information).
The dynamic link between client applications established through RESOLVE is an explicit link: it
basically calculates the performance of the system at one specific point in time, and then
assumes that this system performance will remain constant over the length of the RESOLVE
timestep. This is achieved for instance for a reservoir simulation / surface network link by fixing
either the production rate, wellhead pressure (i.e. WHP) or bottom hole flowing pressure (i.e.
BHP) of the wells during the length of the prediction timestep.
This explicitness can lead to potential errors in the understanding and modelling of the system: if
for instance large changes in GOR occur between two RESOLVE timesteps, the influence of
these variations can only be captured at the end of the next fixed RESOLVE timestep.
The adaptive timestep option enables the change in a user-defined variable between timesteps to be monitored for each connected pair of client modules.
The RMS (i.e. Root Mean Square) variation over all the connections is then calculated and used to derive a timestep multiplier.
The multiplier is constrained within the limits set on the screen below.
The timestep length is also constrained within the limits on the screen below.
This therefore enables the length of the timestep to be increased if the system is stable and decreased if the system is rapidly evolving, reducing the potential explicitness error.
If there is more than one module connection, the smallest multiplier over all connections will be used.
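To illustrate the clamping principle only (this is not RESOLVE's internal implementation, and the raw multiplier derived from the RMS variation is taken as an input here), the constraint logic can be sketched as:

```
' Illustrative sketch: derive a clamped timestep from a raw multiplier.
' How RESOLVE maps RMS variation to the raw multiplier is not documented here.
Function ClampedTimestep(LastStep, RawMultiplier, MinMult, MaxMult, MinStep, MaxStep)
    Dim Mult, NewStep
    Mult = RawMultiplier
    ' Constrain the multiplier within the screen limits
    If Mult > MaxMult Then Mult = MaxMult
    If Mult < MinMult Then Mult = MinMult
    NewStep = LastStep * Mult
    ' Constrain the timestep length itself
    If NewStep > MaxStep Then NewStep = MaxStep
    If NewStep < MinStep Then NewStep = MinStep
    ClampedTimestep = NewStep
End Function
```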
If the adaptive timestep mode is chosen within the "Timestep control setup screen", the
adaptive timestep options button will be available, and will lead to the following screen:
Maximum and Minimum Timesteps: The adaptive timestep scheme will not allow the timestep to vary outside these limits.
Timestep Growth Multipliers: The adaptive timestep scheme calculates a multiplier on the previous timestep when it calculates a new timestep. These options force the multiplier to be set within the limits specified.
2.12.9 Optimisation
The commands available from the optimisation menu are those that are used to set up the data
for the RESOLVE optimiser:
Objective Function
Control variables
Constraints
This menu is only available if the system has been specified to run with optimisation from the
Options | System Options screen.
For more information on the RESOLVE optimisation function, go to the "RESOLVE optimisation
overview" section.
2.12.10 Scenarios
2.12.10.1 Scenario Manager Overview
The scenario manager facility of RESOLVE enables different scenarios to be set up and kept in memory.
This is a very powerful feature of RESOLVE: the scenarios stored here can be submitted to a
cluster for parallel, batch processing. This allows many scenarios to be evaluated at once and
opens the door to other tools (e.g. IFM) to run probabilistic forecasts on coupled models
through RESOLVE.
More information on batch processing with clusters can be found in the "Running Scenarios on a Windows cluster" section.
These scenarios can be set up using the event driven scheduling options or visual workflow options. One of the main advantages of using these options is the ability to rapidly pass from one scenario to another without having to re-enter all the event driven scheduling or visual workflow data. Moreover, each scenario's set of results will be saved separately, as described in the results section. It is then possible to directly compare results from different scenarios.
When selecting the Scenarios| Browse/Edit option from the main RESOLVE menu, the
following screen will be displayed.
Scenarios: This section lists all the scenarios set up by the user. When newly accessed, it displays "No scenarios".
In order to set up scenarios, just right click on the "No scenarios" item and a drop down menu will appear.
From the "Scenario management" screen, there are three ways of creating a new scenario:
Add an empty scenario: This will pop up a screen which allows a label for the new scenario to be entered (by default, the label will be "scenario n"). The scenario that is created will have no data; in other words, if it is run, the basic model will be run with no event driven scheduling or visual workflow.
Add a copied scenario: The scenario to be copied should be highlighted in the scenario list. After the "Add copied scenario" option is selected, a screen will pop up to allow the new scenario to be labelled. The resulting scenario will be an exact copy of the original scenario.
Add the current schedule: This will copy the current event driven schedule into the scenario manager, again allowing the new scenario label to be set up first.
However a scenario is generated, it will normally be necessary to set up the scenario data, e.g. using event driven scheduling or the visual workflow manager.
2.12.10.3 Editing a scenario
From the "Scenario management" screen, a particular scenario can be edited by highlighting
the entry in the scenario list and double-clicking on it.
This invokes further information for the various segments of the scenario in question. The
various options available are:
Enable/Disable: This allows the scheduling options to be defined for whichever event/action method is to be used for the scenario run, i.e. either the event driven schedule or the visual workflow option. The interface below allows the options to be selected/deselected.
Initial state: This section allows the initial state of the system to be set.
Pre-load events: This allows some global model changes to be set for the different scenarios to be run, e.g. executing the same event driven schedules but for two separate GAP models. This can be done via two separate scenarios and by using pre-load events. The use of this is explained under the scenario example dealing with "Changing global model data".
EDS: This calls up the event driven scheduling section with drop down links to the Start, Pre-solve and Post-solve sections. The EDS can then be used to define the scenario in exactly the same way it was used to generate a stand-alone forecast. The event driven scheduling screen obtained through the scenarios section is different to the screen used to enter the schedule data for a single run. This difference is the existence of the "Pre-load actions" button. The use of this is explained under the scenario example dealing with "Changing global model data".
Workflow: This calls up the visual workflows section with drop down links to the Start, Pre-solve, Post-solve and Finish sections.
From the "Scenario management" screen, individual scenarios can be deleted, or a range of scenarios or the entire list can be cleared.
To delete an individual or multiple scenarios, the entry in the scenario list should be highlighted
and the "Delete this scenario" or "Delete a range of scenarios" option should be selected
via a right-click.
To clear the entire scenario list, the "Clear all scenarios" option should be selected via a right-
click.
2.12.10.5 Performing a sensitivity
2.12.10.5.1 Sensitise on inputs
The 'Sensitise on inputs' feature is used to automatically create a set of scenarios by defining a
range of sensitivity values for a given set of input parameters. These input parameters must be
user defined variables, and the feature is not available if there are no defined user variables.
The following window is displayed. The sensitivity cases are defined for each variable by a
minimum value, a maximum value, the number of cases and a distribution of the values between
the minimum and the maximum. Note that scenarios will be created for all possible
combinations of the sensitivity variables. Hence the total number of cases generated will be the
product of the number of cases defined for each variable.
The scenarios are created as a copy of the Current schedule, or as a copy of an existing
scenario. The values of the inputs are then defined in the 'Initial state' section of each scenario
created.
It is assumed that each scenario contains the required logic to pass the sensitivity variable on
to the physical models as required, through a workflow or event driven scheduling for instance.
Target variable: The output variable to be plotted. All variables which have been published and which can be plotted in a scenario plot are available. It is possible to select up to two different variables, which will be plotted on the left and right vertical axes.
Initial state variable to plot: The sensitivity variable to be plotted on the horizontal axis.
Other state variables and date at which to plot results: Select the date at which to plot the required results, and the value of other sensitivity variables (available when the sensitivity was performed on two or more variables).
This section includes three examples to illustrate different applications of the scenario
management in RESOLVE.
Basic: This illustrates how to change a single variable in one of the client application models.
Changing global model data: This illustrates how to use different model files (i.e. "realisations") for different scenario runs.
Changing a script: This illustrates how to run different scripts with different scenarios.
2.12.10.6.1 Basic
This is a very simple example. The idea is to set up two scenarios in which GAP is run with two
different separator pressures - 100 psig and 150 psig.
First of all, the GAP separator pressure variable would need to be published into RESOLVE -
See the "Published Application Variables" section to obtain further information on how to do so.
From the scenario management screen, the new scenarios should be generated by right
clicking and selecting the "Add Empty Scenario" option twice. The scenarios can be given
logical labels, e.g. "100psig" and "150psig":
Double-click on the "Start" section of the Event driven scheduling section of the 100psig
scenario (i.e. these are directives which are executed only once at the start of the run). In this
case, the separator pressure is to be set at the beginning and retained for the entire run. There
is no condition for the separator pressure change; in this case it is always required.
To indicate this to RESOLVE enter the condition <if timestep = 0> as shown.
Click on the corresponding action button, and enter the action to change the separator pressure
as shown below:
This screen can then be OK'd. The "Action" button of the parent screen should appear
highlighted to indicate that the action has been set. This screen can also be OK'd.
The procedure can then be repeated for the other scenario for the other pressure.
An alternative way to do this would be to generate the first scenario (as above) and then copy
this scenario. The copied scenario could then be edited, and only a single edit (the value of the
separator pressure) would be required to create the new scenario.
2.12.10.6.2 Changing global model data
It is sometimes necessary to use different model files (i.e. "realisations") for different scenario runs. This can be accomplished as follows. A new scenario should be created (as in the "Basic" example) and edited.
The event driven scheduling screen contains a button labeled "Pre-load actions".
When the set of models used in the RESOLVE project is reloaded to run one specific scenario, the changes specified in this screen through the use of OpenServer variables will be performed, therefore changing the setup of the models before the scenario is run.
The variables are set just before the models are loaded into RESOLVE, and so in this case the
GAP file will be switched from the one saved with the RESOLVE file to the one shown above.
If the runs are sequential (i.e. one following the other, not distributed on a cluster) it may be
necessary to ensure that model(s) are reloaded after the changes in the table are applied. This
step is performed by highlighting the models of interest in the list at the bottom of the screen.
Note that any RESOLVE OpenServer variable can be used here. This feature should be used
with care: clearly, the new GAP model should expose the same sources and sinks as the one
that it replaced.
2.12.10.6.3 Changing a script
It is sometimes necessary to run different RESOLVE scripts, or different versions of scripts, for
different scenarios.
It is not possible to dynamically load different scripts into the model from the scenario manager.
However, the following procedure can be applied to accomplish the same thing.
Consider a case where scenario1 is to execute one script and scenario2 is to execute another.
This can be implemented in a single script as follows:
if (scenario_variable = 1) then
    ' logic for scenario1 goes here
else
    if (scenario_variable = 2) then
        ' logic for scenario2 goes here
    end if
end if
It is then only necessary to create the variable "scenario_variable", which can be set by the
scenario manager, and which can be read by the script.
This can be done with the use of dummy variables. Excel is a useful repository of these,
although there is no reason why unused GAP variables (i.e. in dummy pieces of equipment) can
not be used.
Create an Excel icon in the RESOLVE model: Double-click on the icon and OK the resulting screen to activate the Excel application. There is no need to connect the Excel sources and sinks to anything, and there is no need for the Excel model to point to a named Excel file. For clarity, the Excel model can be given a label such as "dummy".
The scenario variable must be published from Excel.
The value of this variable can be interrogated in the script: This can be achieved by using the following RESOLVE OpenServer string: Resolve.Module[{dummy}].PubVar[{scenario}].value. The script code then becomes:
scenario_variable = cint(DoGet("Resolve.Module[{dummy}].PubVar[{scenario}].value"))
if (scenario_variable = 1) then
    ' logic for scenario1 goes here
else
    if (scenario_variable = 2) then
        ' logic for scenario2 goes here
    end if
end if
2.12.11 Run
2.12.11.1 "Run Menu"
The menu items under this section are used to control a simulation run as it proceeds.
Validate: This performs the "pre-processing" of the run without doing the run itself. It can be used to perform a simple validation of the system.
Start: Commences a run (i.e. from the beginning of the forecast or following a pause). If it is starting from the beginning of the forecast, RESOLVE will validate the system beforehand and display any validation errors.
Single Step: Enables a single step of the forecast to be run: the step will be run and then the forecast will automatically be paused.
Single Iteration (optimiser): If an optimisation run is performed (i.e. either standalone or as part of a forecast) then this provides the option to perform just a single iteration of the "SLP" optimisation routine. This is useful for debugging optimisation runs, as a table of the linear functions generated by RESOLVE for the objective function can be accessed.
Stop: Terminates a run.
Pause: Pauses a run. During a simulation, the client application user interfaces are disabled - pressing pause will re-enable the applications and allow the user to view results, etc.
Run Scenarios: This enables runs to be launched for the different scenarios saved in the "Scenario manager". They can be run sequentially or distributed onto the nodes of a cluster. Go to the "Running Scenarios" section for further information.
Edit composition tables: "Composition tables" are required when compositional data is being passed between applications: they enable the compositions to be mapped from one application to the other. This is necessary as each application will have its own set of compositions which may have unique names across the RESOLVE system (e.g. methane in one package may be referred to as CH4 in another).
Edit calculation order: RESOLVE works out a "Calculation order", working upstream to downstream according to the flow of data. This screen can be used to visualise the calculation order that RESOLVE has determined, and enables the calculation order to be changed if the user decides that a particular model calculation depends on the results of another model. For instance, if the RESOLVE model contains a script that takes the results from one model and applies them to another model, both models cannot be solved simultaneously and the calculation order will have to be modified to enable the script to work successfully.
Edit loops: If RESOLVE detects that there are loops in the system, then this will present a screen that allows adjusting how the loops are solved. See the "Edit loops" section for further details.
Debug Logging: Turns the debug logging on or off. The debug logging will generate a detailed record of all the data that is passed between applications during a run, which will then be saved with the RESOLVE file. It allows cases to be debugged remotely without necessarily having access to all the drivers or applications.
Debug file view / export / clear: Utility functions to view the above debug file or to export it as an ASCII file.
IPR Logging: Turns the IPR logging on or off. The IPR logging enables all the inflow performance curves passed from the reservoir models to the well models to be stored in RESOLVE. This is extremely useful for troubleshooting purposes, especially to analyse whether, for instance, a convergence issue is coming from the quality of the IPRs passed or from the surface network calculations.
This screen is invoked from the Run | Edit Calculation Order menu item.
A numerical reservoir simulation model (i.e. modelled with Petroleum Experts REVEAL
package in this case) is connected to a production and an injection GAP model.
One of the objectives of this model is to perform a voidage replacement scheme: the production from the production model will be obtained by a RESOLVE script and this will then be applied as a constraint, by the script, on the injection model.
When this system is first modelled in RESOLVE, it will determine that the reservoir model needs
to be solved before the data is passed up to the network models, which can then be solved
simultaneously.
When the calculation order screen is first invoked, it will have the following appearance,
specifying that the first element to be calculated is the reservoir model, followed by the
production and water injection surface network models, which are solved simultaneously.
Current calculation: This displays the "groups" of modules that are solved simultaneously. In the above example, the "reservoir" model is solved first, and then the production and water injection surface network models are solved simultaneously.
This screen is invoked from the Run | Edit loops menu item.
It is also possible to access that screen by right-clicking on the icon of the connection
considered in the graphical view section.
The issue of how a system which contains loops is solved is discussed in the "Methodology"
section.
In this example, the RESOLVE system contains three models (Production, Main_Process, and Splitter) which form a loop.
This setup is extremely common for instance when considering gas lifted systems: the gas
produced is passed from the separator in the surface network model (i.e. GAP in this case) to
the process model (i.e. UniSim Design in this case). The process model calculates the amount
of gas available for gas lift and gas re-injection purposes and passes this volume to an Excel
spreadsheet. The Excel spreadsheet includes a user defined relationship that splits this volume
of gas into gas available for gas lift and gas available for gas re-injection.
The gas available for gas lift is then sent back to the surface network GAP model, creating a loop.
Several iterations are required around this loop for the model to converge: the user will have to set up the loop to specify, for instance, how tight the convergence through this loop will be.
Select loop: This contains a list of the loops in the current system. In this example, there is only one.
Loop elements: In a loop, RESOLVE needs to be told which model to solve first when solving the loop. It will have a guess at which one to solve first (a reservoir simulator will be solved before a surface network simulator) but this section enables the user to alter that order.
Method Two methods are available for solving loops within RESOLVE:
CGR
WGR
Auto-init If this is checked, then just before the first module of the system is solved, the initial inflow data will be passed from the upstream node to the downstream node as set in this table.
Start value If "Auto-init" is not set, then when the loop solve is started, the value entered here will be passed to the downstream model for initialisation purposes. This is only available for the Qwat, Qoil, and Qgas target variables, as this is the data passed by RESOLVE.
Init? button If "Auto-init" is not set, then this will cause RESOLVE to interrogate the upstream node for the inflow value that is currently entered for the target variable in this system. In this case, "Model Production, Out-1" will be queried to determine its liquid rate. This will then be set as the "Start value".
Overview
Scenarios consist of a set of "Event driven schedules" which have been stored in the "Scenario
manager".
The event driven schedules allow a RESOLVE coupled run to be controlled with a series of if ...
then ... else directives.
Here are some examples of the type of scheduling that can be achieved through the event
driven scheduling options of RESOLVE.
At <date1> open well1. At <date2> close well2.
If the water cut in well1 exceeds 0.8, change the separator pressure in the GAP model.
If the GOR at the GAP separator exceeds 5000 scf/STB, bring on a new compressor in
the Hysys plant model.
This run should use a new realisation of the Eclipse reservoir model contained in the
data file NEWMOD.DATA.
Open a set of pieces of equipment in a "drilling queue".
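Conceptually, each such directive is a condition checked against the system state at every timestep, with an action fired when the condition holds. The sketch below illustrates this only - the well names, the 0.8 water cut threshold and the rule representation are invented for illustration and are not RESOLVE's scheduling syntax:

```python
# Illustrative if/then/else scheduling: each rule is a (condition, action)
# pair, evaluated against the system state at every timestep.

def run_schedule(states, rules):
    actions = []
    for state in states:                  # one state dict per timestep
        for condition, action in rules:
            if condition(state):
                actions.append((state["date"], action))
    return actions

# Hypothetical rules mirroring the examples above
rules = [
    (lambda s: s["date"] == "2022-01-01", "open well1"),
    (lambda s: s["water_cut_well1"] > 0.8, "change separator pressure"),
]

# Hypothetical system states at two timesteps
states = [
    {"date": "2022-01-01", "water_cut_well1": 0.5},
    {"date": "2022-07-01", "water_cut_well1": 0.85},
]

fired = run_schedule(states, rules)
```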
As can be seen, the scheduling is very open ended and should be able to handle most field
events.
A set of these schedules represents different model scenarios. These can be run from the main
RESOLVE menu: Run | Run scenarios.
Running scenarios
When the "Run scenarios" menu item is invoked, the following screen is displayed:
Select from the list those scenarios that are to be run (by default, all are selected). The scenario labels are those applied to the scenarios when they were set up in the scenario manager.
"Windows" Cluster This is a cluster where all the cluster nodes run a Windows operating system. It is managed by PXCluster, clustering software developed by Petroleum Experts and distributed with the IPM tools. Please refer to the "Setting up PXCluster" section for further information.
"Mixed" Cluster This is a cluster where the cluster nodes run either Windows or another operating system, such as Linux. The setup and usage of this type of cluster is described in the "Running Scenarios on a Mixed Cluster" section.
Before submitting the scenario jobs, the nodes on which the jobs will be run, the maximum number of nodes and other options are configured in this screen.
See also the note on running RESOLVE cluster jobs in windowed or non-windowed modes.
2.12.12 Results
View Forecast Results (table) Gives access to the RESOLVE forecast results in TABULAR form. Refer to the "Tables of Results" section for further details.
View Forecast Plots Gives access to the RESOLVE forecast results in PLOT form. Refer to the "Plotting the Results" section for further information.
View Forecast Plots (new window) Opens another plotting window, in addition to any plotting window already open. Refer to the "Plotting the Results" section for further information.
Generate report Generates a report of the results of the current run. A screen will appear with options as to whether to report to the clipboard or a file, and whether to 'delimit' the entries in the report with some character (e.g. a comma), or to format the columns with a fixed width.
View Scenario Results (table) If RESOLVE has been used to run several sets of scenarios, this gives access to the results of the different scenarios in TABULAR form. As all scenario results are automatically saved once the scenarios have been run, the results from the different scenarios can then be compared, for instance in the plot sections. Refer to the "Tables of Results" section for further details regarding how to manipulate these results.
View Scenario Plots If RESOLVE has been used to run several sets of scenarios, this gives access to the results of the different scenarios in PLOT form. As all scenario results are automatically saved once the scenarios have been run, the results from the different scenarios can then be compared, for instance in the plot sections. Refer to the "Plotting the Results" section for further details regarding how to manipulate these results.
View Scenario Plots (new window) Opens another plotting window for scenario results, in addition to any plotting window already open. Refer to the "Plotting the Results" section for further details regarding how to manipulate these results.
View Optimisation Results Invokes the "Optimisation Results" screen.
View IPR log results If the IPR logging option has been selected prior to the run, this gives access to the IPR log results: a set of inflow performance curves provided for each well at each timestep. It also contains information regarding the well operating points for each timestep (i.e. the intersection between IPR and VLP).
View loop results Invokes the loop results screen.
Log Window Displays the log window. The log window can be written to by script commands (see the "Scripting" section for more information), which is potentially useful for debugging purposes. The log file is saved with the RESOLVE file; this menu item can then be used to display the currently saved log file.
This section, accessible through Results | View Forecast Results (table) or Results | View Scenario Results (table), gives access to the RESOLVE forecast results in TABULAR form.
For each forecast run, the results presented for each client module are the variables selected by the user prior to the run (by right-clicking on the client module icon and selecting the "Output Variables" section).
The left hand side drop-down box is used to select which results to display - for instance here, the results for "Well 2" in the "Production" network model for the current run.
Plots these tabular results - equivalent to using the "View Forecast Plots" section described below.
Creates a secondary plot window - equivalent to using the "View Forecast Plots (new window)" section described below.
Saves the results of the current run.
Once selected, the following screen appears: use the "New Stream" section to save the results of the current run in memory. Once this is done, these results are kept within the RESOLVE model and can be consulted / compared to other runs.
This section, accessible through Results | View Forecast Plots or Results | View Scenario Plots, gives access to the RESOLVE forecast results in PLOT form.
In order to generate a plot in this screen, the following procedure can be used:
On the left hand side of the screen, select the element to consider - for instance here
Well 2.
The variables available for Well 2 are displayed in a list in the bottom right hand corner.
Select one of these variables - here Liquid Produced - and click on the sign at the bottom right hand corner of the screen. The screen below will appear, allowing multiple streams to be selected - here for instance the Well 2 / Separator / Well 1 and Well 1_GL elements are selected: the plot will then show the liquid produced for all these nodes.
The different streams that are plotted are described on the right-hand side of the screen - the user can decide which ones to display in the plotting area by using the tick boxes next to each of these stream labels.
Gives access to the plot editing screen illustrated below: this screen allows the format of the plot to be defined in order to tailor it to the user's needs.
Returns to the initial plot format - this is particularly useful when the plot has been zoomed in or out and the user wishes to come back to the original display.
Suppresses one or all of the streams displayed on the plot.
Saves the results of the current run.
Saves the setup of the plot under a specific name: this is extremely useful as it enables the user to define one specific plot and automatically reload it as soon as a new run has been performed, without having to redo the plot setup.
Automatically reloads any plots that have been saved.
This screen, which can be invoked from the Results | Loop Results menu item, contains the
results for the loop iterations that RESOLVE performs when solving a loop.
Target Variables The list at the top contains all the target variables for the RESOLVE run. Highlight the target variable for which the results are to be viewed.
Results section The grid on the lower half of the screen contains the results for each pass of the calculation over the loop (iter #1 and iter #2 in the screenshot above).
If this window is closed after the run, it can be reopened using this option.
2.12.12.7 Log Window
When a calculation has finished running, information on the time spent in each module and other information (including reported messages) is available in the Log Window:
If this window is closed after the run, it can be reopened using this option.
2.12.13 Window
This section provides menu items to manage the main RESOLVE display windows as follows:
Cascade Cascades the windows: each window is placed on top of another, slightly shifted, so that all the windows are visible.
Tile Tiles the main visualisation screen of RESOLVE into equal sections, one for each window: all the windows will be entirely visible, but the scale of each window will be adapted to fit the size of the screen.
2.12.14 View
This section provides menu items to display / hide:
The toolbar (at the top of the main window)
The status bar (at the bottom of the main window)
The infoviewer (to the left of the main window)
2.13 Appendix
OpenServer is the mechanism by which external applications can interact with and control all the programs in the IPM suite (PROSPER, GAP, REVEAL, PVTP, MBAL, and RESOLVE).
For example, a VBA macro in Excel can be used to open, interrogate, and run IPM programs.
The OpenServer functionality in RESOLVE operates on the same principles as in the other programs.
For more information, see the OpenServer manual that is distributed with the IPM suite.
There follows a brief overview of the functionality of the OpenServer with specific reference to
RESOLVE.
External OpenServer macros can be written to control RESOLVE. These macros can be written
in any language that supports automation, for example: Visual Basic, VBA, VBScript, C++,
Java, Matlab.
Most typically they are written as VBA macros in an Excel spreadsheet; the "OpenServer
example macros" are in this form. An OpenServer macro can call three different functions /
subroutines on the program (RESOLVE) it is controlling.
These are:
retval = DoGet("tagstring") : interrogates a variable in the application (for example,
the schedule start date).
DoSet("tagstring", "value") : sets a variable to the value "value".
DoCmd("tagstring") : performs a command (for example, load a file, perform a run).
In each case, tagstring is a "." delimited string that refers to the variable or command in
question.
For example, to get the start date in RESOLVE, the following string is used:
retval = DoGet("Resolve.Schedule.StartDate.DateStr")
This will return the date in the form dd/mm/yy (depending on the international settings). Similarly, a run is started with:
DoCmd("Resolve.Run")
and so on.
In each case the tagstring starts with the application name (in this case, RESOLVE).
When getting or setting a variable, the rest of the string is a delimited list of child variables until
we get to the required variable (DateStr is a "child" variable of the "StartDate" property, which is
a child of the Schedule data, and so on).
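Resolving a tagstring therefore amounts to walking down a tree of child variables, one "."-separated part at a time. The following toy sketch illustrates the mechanism only - the nested dictionary is a made-up stand-in for RESOLVE's actual data model, and this is not how OpenServer is implemented:

```python
# Toy resolution of an OpenServer-style tagstring against a nested dict.
# The data below is an invented stand-in for RESOLVE's internal data model.

data = {
    "Resolve": {
        "Schedule": {
            "StartDate": {"DateStr": "01/01/22"},
        },
    },
}

def do_get(tagstring, root=data):
    node = root
    for part in tagstring.split("."):   # walk each "child" variable in turn
        node = node[part]
    return node

retval = do_get("Resolve.Schedule.StartDate.DateStr")
```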
RESOLVE supported OpenServer variables are documented in the "Module Variables"
section.
When executing a command, the string simply refers to the command to be executed: these
are documented in the "Commands" section.
There is a quick way to find an OpenServer tagstring if the variable is part of the user interface.
In this case, go to the required screen and press <Ctrl> and Right Click over the variable in
question.
A screen will appear with the variable tagstring which can then be copied to the clipboard.
Empty variables
Variables that are not set (or are blank) in RESOLVE return a large number (3.4e34)
when the OpenServer queries them
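A controlling macro should therefore guard against this sentinel before using a returned value. A minimal sketch, assuming that any value of this magnitude can be treated as unset (the comparison threshold is a choice made here, not a documented rule):

```python
# Treat OpenServer's "not set" sentinel (a large number, 3.4e34) as missing.
EMPTY_SENTINEL = 3.4e34

def is_empty(value):
    """Return True if a value returned by DoGet should be treated as unset."""
    return value >= 0.99 * EMPTY_SENTINEL

blank_reading = 3.4e34      # what DoGet returns for a blank variable
real_reading = 1500.0       # an ordinary rate value
```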
Certain properties can be interrogated or (depending on whether the collection is read only) set
for all collections.
These are:
DebugFlag Sets or clears the flag that determines whether debug data is saved during a run.
Example syntax: DoSet("Resolve.DebugFlag", 1)
IsRunning Returns whether or not RESOLVE is currently performing a forecast or optimisation run.
IsError Returns the error status of the last run that was performed, i.e. whether the run terminated successfully or not.
ErrorMsg If the run failed with an error, this returns the failure message.
EnableOptimisation Sets or clears the flag for performing an optimised solve or forecast. Finer control of optimisation runs (e.g. setting or removing controls and constraints from the problem) is available through the Module data items or the Optimiser Schedule data items (see below).
Module A collection of module data items.
To index a given module, the alias or label of the item can be
used: e.g. Module[{Reservoir}].
For more information on Module variables, see "Module
variables"
Driver A collection of drivers registered with the RESOLVE
application.
See "Driver variables"
Schedule, ScheduleList The schedule data, as accessed from the "Schedule" screen in
the interface.
See "Schedule variables"
ModLink A collection of individual module connections, containing data
such as the calculation order and adaptive timestepping
sensitivity.
See "ModLink variables"
Connection A collection of connection data items.
See "Connection variables"
Properties The RESOLVE preferences.
See "Properties variables"
Optimiser The optimisation parameters.
See "Optimiser variables"
OptimiserSchedule In a forecast run these variables control the way the problem
can be changed as a function of time.
See "Optimiser schedule variables"
AdvancedScheduleStart / AdvancedSchedulePreSolve / AdvancedSchedulePostSolve These refer to the various sections of the "Event driven schedule". More information can be found in the "Event Driven Schedule Variables" section.
Scenario See "Scenario manager variables"
VarLink This collection refers to the collection of "Variable links" in the
model. See "Variable link variables"
Results The results variables. This top-level variable is retained for backwards compatibility. From IPM #6, the results can be obtained from the "Module" variable. Regardless of this, the tag strings for these variables depend on the case being run and can always be determined by using the <Ctrl> <Rclick> method.
Runtime variables
The following variables are top level variables that can be accessed during a RESOLVE run.
For example:
DoGet("Resolve.CurrentTime") may return 36525.0.
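Assuming this value is an OLE automation day count (days since 30 December 1899 - the convention that VB's cdate() applies to System.Date later in this section), it can be converted to a calendar date as follows:

```python
# Convert an OLE-automation serial date (days since 1899-12-30), such as the
# value returned by Resolve.CurrentTime, into a calendar date.
# Assumption: RESOLVE uses the same day-count convention as VB's cdate().
from datetime import datetime, timedelta

OLE_EPOCH = datetime(1899, 12, 30)

def ole_to_datetime(serial):
    return OLE_EPOCH + timedelta(days=serial)

current = ole_to_datetime(36525.0)
```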
Module Collection
There are no variables specific to this collection. Index individual items by number (Module[0])
or label (Module[{Network}]).
The collection is read only. Module lists can be manipulated (created/deleted/linked) by calling
an appropriate "command".
Module Item
XPos / YPos The position of the module icon on the main screen
Alias The label / alias given to the module when it was created
Driver Data pertaining to the registration information of the driver (see
"Driver variables")
SrcSnk A collection of source/sink child data for the module (see "SrcSnk
variables")
OptCtrl A collection of optimisation control variables specific to this module
(see "Module Optimisation variables")
OptConstraint A collection of optimisation constraint equations specific to this
module (see "Module Optimisation variables")
OptObjFn The objective function (if present) for this module (see "Module
Optimisation variables")
ShowChildren Determines whether the module child icons are displayed
(expanded) on the main screen
ProducesCompositions (read only) Returns whether the module produces compositional / EOS data.
BaseComposition (read only) For a compositional module, returns the base set of compositions that will be used in the run.
CalcOrder The calculation order in the RESOLVE system: as specified in the
"Calculation Order" link
StartDate (read only) The start date of the module (which may be different to the
RESOLVE start date).
This could be the start date of a reservoir simulation run, for example
MsgBuffer When a run is performed, messages (which may be warnings or
may be purely informational) are written to the log window. This
string allows OpenServer access to these messages.
All the messages output over the entire run are output to a single
buffer.
For example:
Resolve.Module[{Production}].Results[{CurrentRun}][{W1}][1].GasProduced
will return the gas rate for item "W1" in module "Production", timestep 1, for the last run that was performed.
The label returned from this can be used as the <variable> in the OpenServer strings shown at the start of the section.
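Because these results tag strings follow a regular pattern, a controlling macro can assemble them programmatically. A hedged helper sketch - the module, run, item and variable names are placeholders taken from the example above, and the helper itself is not part of OpenServer:

```python
# Build an OpenServer results tagstring of the form shown above, e.g.
# Resolve.Module[{Production}].Results[{CurrentRun}][{W1}][1].GasProduced
# All names used here are illustrative placeholders.

def results_tag(module, run, item, timestep, variable):
    return (f"Resolve.Module[{{{module}}}].Results[{{{run}}}]"
            f"[{{{item}}}][{timestep}].{variable}")

tag = results_tag("Production", "CurrentRun", "W1", 1, "GasProduced")
```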
DisableFlag Read-only - returns whether a module is currently disabled or not
Commands
If a variable refers to a particular module but cannot be found in the standard module variable list above, RESOLVE will pass the string on to the driver which controls the application in question. These "variables" are thus considered internal to the application driver.
In the case of IPM products (REVEAL / GAP), the string will be passed on to the OpenServer
of the connected applications if it is not processed by the driver.
For example:
Resolve.Module[{GAP}].MOD[{PROD}].INFLOW[{comp1}].IPR[0].ResTemp
To obtain file names from the applications, it is preferable to use the OpenServer
"command" MakeProjectFile().
The OpenServer syntax shown above is not standardised, while the interface used by MakeProjectFile() is.
For the same reason, equipment items and variables should be obtained from the Equip
collection of the "module variable".
The following sections relate to the Internal driver variables for the following applications:
GAP
Reveal
Hysys
UniSim Design
Eclipse (all types)
Excel
SaveSnapshots This will cause GAP (when run predictively) to always save prediction
snapshots for subsequent debugging. If it is not set, then snapshots will
be saved in GAP according to the internal setting of the GAP model
DebugTables If the wells are set to perform IPR regression (when connected to a
reservoir simulator), then this flag indicates whether the IPR tables
should also be written (to enable debugging after the run)
UseGradient If the wells are set to perform IPR regression (when connected to a
reservoir simulator), then this flag indicates whether the rate gradient
from previous timesteps will be used to calculate the GOR and water
cut (or CGR and WGR for gas/condensate wells)
CompItem e.g. Resolve.Module[{GAP}].CompItem
This returns (or sets) the name of an item in GAP that will be used to
obtain composition names for the entire model, prior to a run. If it is
blank then each connected item will be interrogated in turn to get the
component names, and these sets of names can be different for each
item.
Label Returns the label of the source or sink. For a well, for example, this would be the well label.
Type The following item types may be returned:
Well
Inj Manifold (injection manifold)
Separator
Separator sub-stream
Inflow
Source (GAP source)
Sink (GAP sink)
Total lift gas (lift gas for the entire system)
Unused lift gas
Well lift gas (lift gas icon specific to a given well)
User joint (additional joint selected by the user for output)
InlInj (inline injection)
Phase Read-only variable. Values can be:
Liquid
Water
Gas
Condensate
Unknown
This variable refers to general information on the contents of the REVEAL icon in RESOLVE.
FluidPackage variable:
Resolve.Module[{Hysys}].FluidPackage[0].Label OR
Resolve.Module[{Hysys}].FluidPackage
[{label}].Composition.NumComponents
Label / Name These both return the name of the fluid package
NumComponents The number of components defined for the fluid package
CompName[n] The name of the component of index n (0 <= n < NumComponents)
Server.SolverStatus e.g. Resolve.Module[{Hysys}].Server.SolverStatus
This can be set to the following:
0 - disable solver
1 - enable solver
As mentioned, dynamic OpenServer changes to the model should now be carried out through the "Equip" or "PubVar" interfaces of the RESOLVE "Module" variable.
An example of how these are used is in the OpenServer example macro: HysysOpenServer-
legacy.xls.
FluidPackage variable:
Resolve.Module[{UniSim Design}].FluidPackage[0].Label OR
Resolve.Module[{UniSim Design}].FluidPackage
[{label}].Composition.NumComponents
Label / Name These both return the name of the fluid package
NumComponents The number of components defined for the fluid package
CompName[n] The name of the component of index n (0 <= n < NumComponents)
Server.SolverStatus e.g. Resolve.Module[{UniSim Design}].Server.SolverStatus
This can be set to the following:
0 - disable solver
1 - enable solver
As mentioned, dynamic OpenServer changes to the model should now be carried out through the "Equip" or "PubVar" interfaces of the RESOLVE "Module" variable.
An example of how these are used is in the OpenServer example macro: UniSim Design
OpenServer-legacy.xls.
The tag strings for all Eclipse internal driver variables are the same, regardless of the "flavour"
of the driver that is being used. Any differences between the various Eclipse drivers are noted
below.
FileName
The name of the data file to be opened. It does not have to be a local file, or even a
Windows file.
HostName
The name of the host on which Eclipse is to be run. This is redundant if connecting to a
Linux machine running the LSF daemon.
Linux
Set to non-zero if the data file resides on a remote host that is running Linux and the
Petroleum Experts connectivity daemon (mpirunfactory_xxx.exe).
ControllerHostName
This is the name of the Linux computer that is running the run factory daemon -
mpirunfactory_scampi.exe or mpirunfactory_mpich.exe.
PortNumber
The port number that the run factory daemon (above) is waiting on. This is the port
number that was specified on the command line of the run factory executable
when it was started.
WellManagement
Controls the flag for whether Eclipse controls the well scheduling (on/off) or the
connected application (normally GAP). Note that this refers only to the on/off state of the
wells, and not to well or system constraints (for example) which are handled at the GAP
level.
The possible values are:
0 - Eclipse
1 - Connected application (GAP)
DoWellManagementDebugFile
Flag to control whether a well management debug file is written by the Eclipse link driver.
WellManagementDebugFile
The name of the well management debug file that is to be written if the above flag is set.
AbandonShutInWells
The action to take if GAP closes a well on a timestep.
Temperature
The reservoir fluid temperature.
In E300 models, an attempt will be made to read this from the RTEMP keyword in the
PVT data file.
TemperatureUnit
The unit of the above temperature.
0 - deg F
1 - deg C
2 - deg R
3 - deg K
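A macro that reads Temperature together with TemperatureUnit must interpret the unit code itself. A sketch of the mapping - converting to Kelvin is a choice made here for illustration, not a RESOLVE convention:

```python
# Interpret the Eclipse-driver TemperatureUnit codes (0=F, 1=C, 2=R, 3=K)
# and convert the accompanying Temperature value to Kelvin for comparison.

def to_kelvin(value, unit_code):
    if unit_code == 0:      # deg F
        return (value + 459.67) * 5.0 / 9.0
    if unit_code == 1:      # deg C
        return value + 273.15
    if unit_code == 2:      # deg R
        return value * 5.0 / 9.0
    if unit_code == 3:      # deg K
        return value
    raise ValueError("unknown TemperatureUnit code")

t = to_kelvin(212.0, 0)     # boiling point of water expressed in deg F
```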
DebugLevel
Controls the severity level of the messages that are echoed to the RESOLVE logs. Only
messages higher in severity than this value will be output.
The values are:
0 - MESSAGE
1 - COMMENT
2 - WARNING (default)
3 - PROBLEM
4 - ERROR
5 - BUG
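The severity codes act as a simple threshold filter. The sketch below follows the manual's wording that only messages higher in severity than the configured value are output (the message texts are invented):

```python
# Filter messages by the Eclipse-driver DebugLevel: per the manual, only
# messages HIGHER in severity than the configured level are echoed.
SEVERITY = {"MESSAGE": 0, "COMMENT": 1, "WARNING": 2,
            "PROBLEM": 3, "ERROR": 4, "BUG": 5}

def echoed(messages, debug_level=2):
    """messages: list of (severity_name, text); keep those above the level."""
    return [(sev, txt) for sev, txt in messages if SEVERITY[sev] > debug_level]

log = [("COMMENT", "reading deck"), ("WARNING", "keyword ignored"),
       ("ERROR", "well not found")]
kept = echoed(log)          # with the default level 2 (WARNING)
```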
OutputTo
Governs where the log messages are output.
The possibilities are:
1 - to the log window
2 - to the output window
3 - to both windows
IPRModel
The IPR model to be used in the run.
0 - Eclipse PI (do not use - legacy only)
1 - Calculated PI (based on well block pressure)
2 - Calculated PI (based on drainage pressure)
If the drainage option is selected, then the following options become relevant:
DrainageMode
The method of calculating the drainage pressure.
0 - not used (legacy)
If value 1 (the diffusivity method) is used, then the following options are
applicable:
ImportGrid / GridDebugFile
Specifies that the grid is to be imported, and gives the name of the
file that is used to hold the grid data.
In practice, the grid should already have been imported into
RESOLVE before this option can be used. The process of setting
up the drainage regions can not currently be automated.
If value 2 (the relaxation method) is used, then the following options are applicable:
RelaxationTime
The time (in days) for which the build up calculation should be run. A
nominal value of 2 days is the default.
NumberOfPoints
The relaxation time can be broken down into several sub-timesteps
which are then logged separately in the log file. This can be useful
for debugging.
LogTransient
Governs whether the buildup points are written to the log window or
not.
NewtonLevel
Set to:
0 - standard explicit coupling (recommended with one of the improved IPR
options described above)
1 - Newton-level coupling.
The Newton coupling is not recommended except in exceptional cases. This is described in more detail in the section on the "Eclipse driver".
NewtonTolerance
The tolerance to be reached before convergence is signalled.
PlotSummary
Set to non-zero to plot the summary vectors from the Eclipse model in RESOLVE.
The following are so-called "advanced options" and should not normally be changed:
GLRThreshold
The GLR at which a liquid well is considered to become a gas well (and vice versa). This
affects the control mode only: if a well is being controlled with a liquid rate and it "turns
into" a gas well, then the control mode will be changed to gas rate.
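The effect of GLRThreshold on the control mode can be sketched as follows, using the producer Ctrl codes listed later in this section (4 = liquid rate, 5 = gas rate); the threshold value and the helper itself are illustrative only, not driver internals:

```python
# Sketch of the GLRThreshold control-mode switch: a well controlled on
# liquid rate (Ctrl=4) "turns into" a gas well and is switched to gas rate
# control (Ctrl=5) when its GLR crosses the threshold, and back again.
LIQUID_RATE, GAS_RATE = 4, 5    # producer Ctrl codes from the well variables list

def control_mode(current_mode, glr, glr_threshold, auto_switch=True):
    if not auto_switch:             # mirrors the AutoSwitch flag below
        return current_mode
    if current_mode == LIQUID_RATE and glr > glr_threshold:
        return GAS_RATE
    if current_mode == GAS_RATE and glr <= glr_threshold:
        return LIQUID_RATE
    return current_mode

mode = control_mode(LIQUID_RATE, glr=12000.0, glr_threshold=10000.0)
```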
GLRUnit
The unit for the above quantity.
0 - scf/STB
1 - Mscf/STB
2 - m3/m3
AutoSwitch
Turns the switching from liquid to gas wells (as described) on or off.
SmallTimestep
Eclipse is forced to take a small timestep when a block-pressure IPR is generated as
described in the "Eclipse Advanced Options" section. The size of the small timestep can
be changed here if required.
Name
The name of the group
Respect
Governs whether the control under this group in the Eclipse model should be allowed.
Normally (and by default) all group controls are removed so that control can be taken by
GAP. However, there are some circumstances where group control may be useful: this is
described in the "Group Control in Eclipse" section.
Dynamic data can be returned from Eclipse groups, e.g. Resolve.Module[{Eclipse}].Group[{Mygroup}].GOPR
The following variables can be used to return values relating to groups from Eclipse during a
run. These variables can therefore be accessed from the "RESOLVE script" to make dynamic
decisions about field management. Note that the group "FIELD" (case sensitive) can be used to
get field information.
These are the most commonly required variables. Other variables are available. A
complete list can be obtained from Petroleum Experts on request.
It is possible to dynamically set group data in an Eclipse model from an OpenServer string
(such as that shown above).
In normal use, RESOLVE will remove Eclipse group controls so that control of
the individual wells by GAP will not be interfered with. This behaviour can be
"overridden" for cases where not all the Eclipse wells are connected to
corresponding GAP wells. Eclipse group controls for the group(s) in question MUST
be respected to allow these OpenServer strings to set variables; otherwise, RESOLVE
will simply overwrite the changes.
As above, the group "FIELD" can be used to set field data. However, this can only make sense
if there are no connections to external modules.
To control the Eclipse groups against a predefined limit, the corresponding keyword must be entered in the Eclipse deck. For example, if the objective is to run the Eclipse model on a standalone basis and to limit gas production to a certain value, then GPRDLIM may be used in the Event Driven Scheduling. For this to be successful, however, the GCONPROD keyword must be defined on the basis of GRAT in the Eclipse deck.
Name
The name of the well.
Type
The type of the well.
1 - oil (liquid) producer
2 - oil injector
3 - water injector
4 - gas injector
In Eclipse 300 (older versions), gas injectors also sometimes return "0". This was a bug
in Eclipse.
Ctrl
The control mode for the well. For producers, the list is:
0 - BHP
1 - THP
2 - oil rate
3 - water rate
4 - liquid rate
5 - gas rate
For injectors, 0 and 1 are the same and:
2 - fixed rate
Dynamic data can be returned from Eclipse wells, e.g. Resolve.Module[{Eclipse}].Well[{name}].WOPR
The following variables can be used to return values from Eclipse during a run. These variables can therefore be accessed from the "RESOLVE script" to make dynamic decisions about field management.
The following return IPR A and B coefficients. It is important to note that these are not
calculated by RESOLVE but reported directly from Eclipse.
WOPIPRA, WOIPRB - well oil IPR A and B coefficients
WGIPRA, WGIPRB - well gas IPR A and B coefficients
WWIPRA, WWIPRB - well water A and B coefficients
These are the most commonly required variables. Other variables are available. A
complete list can be obtained from Petroleum Experts on request.
The following variables can be used to access completion data from an Eclipse well during a
run.
The following variables can be used to access Eclipse system data during a run.
s = DoGet("Resolve.Module[{Eclipse}].System.Date")
MsgBox(cdate(cdbl(s)))
CPU - the CPU time (in seconds) taken by the Eclipse run
Elapsed - the elapsed time since the start of the Eclipse run
NCOMP - the number of components in the Eclipse run. In an E100 run, NCOMP will equal 2, and the component names will be "BLACKOIL" and "GAS".
UNCONV - the unit system used in the Eclipse model. 1 = Metric, 2 = Field, 3 = Lab
This variable refers to general information on the contents of the Excel icon in RESOLVE.
FileName
Sets the Excel filename to open.
Macro
Sets the name of the macro to run (under the "Macro" tab of the Excel data entry screen)
PassDateToMacro
Sets the flag that indicates that the date of the current timestep should be passed to the
macro when it is "called".
2.13.1.2.2.9 IMEX/GEM internal driver variables
The following uses "IMEX" and "GEM" interchangeably, unless specifically noted.
This variable refers to general information on the contents of the IMEX icon in RESOLVE.
HostMode
0 - use a specified machine to run IMEX on
1 - run the simulator on a cluster
ClusterMode
If HostMode is set to 1, then:
0 - use PxCluster
1 - use LSF
HostName
If HostMode is set to 0, then this is the computer on which the simulator should run. This
should be an empty string to use the local computer.
SimExeName
The IMEX executable. If this is an empty string, RESOLVE will use the executable from
the "configuration settings".
FileName
The name of the IMEX dataset.
OSType
For a remote run:
0 - Windows architecture
1 - UNIX or Linux architecture
RshCmd
The remote command string (using SSH).
WorkFolder
The working folder, i.e. the folder in which intermediate and data passing files are to be
located
LogFile
Set to non-zero to write the simulation output to a log file (rather than opening up a
command window)
Multiprocessors
Set to non-zero to indicate a multi-processor, parallel run
NumProcessors
The number of processors to be used in a parallel run
IPRModel
0 - block IPR model
1 - corrected IPR model
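As a sketch, a local parallel IMEX run could be configured from the script as follows. The module label {IMEX} and the values chosen are illustrative only:

Call DoSet("Resolve.Module[{IMEX}].HostMode", "0")
Call DoSet("Resolve.Module[{IMEX}].HostName", "")          ' empty string = local computer
Call DoSet("Resolve.Module[{IMEX}].Multiprocessors", "1")  ' parallel run
Call DoSet("Resolve.Module[{IMEX}].NumProcessors", "4")
Call DoSet("Resolve.Module[{IMEX}].IPRModel", "1")         ' corrected IPR model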
nWell
The number of wells in the model
nGroup
The number of well groups in the simulation model
nSector
The number of sectors in the simulation model
Examples
DoGet("Resolve.Module[{IMEX}].nWell")
DoGet("Resolve.Module[{IMEX}].nGroup")
The well specifier in curly brackets cannot be replaced with an index, as can be done with the
other modules. (In other words, the syntax "Resolve.Module[{}].Well[0]..." is not allowed.)
However, IMEX well numbers can be used with a "#" character, e.g.
DoGet("Resolve.Module[{IMEX}].Well[{OIL1}].OilRatSC")
DoGet("Resolve.Module[{IMEX}].Well[{#1}].OilRatRC")
If no user group is defined in the IMEX data set, the group name FIELD is used to represent all
the wells defined in the IMEX data set. Otherwise, a user-defined top node name can be used.
Example:
DoGet("Resolve.Module[{IMEX}].Grup[{Gather-1}].OilRatSCPrd")
DoGet("Resolve.Module[{IMEX}].Grup[{FIELD}].OilRatRCPrd")
The sector name "Entire Field" or the sector number 0 is used to represent all the active
blocks of the reservoir model. Please refer to the IMEX user manual.
Example:
DoGet("Resolve.Module[{IMEX}].Sect[{#0}].OilInPSC")
DoGet("Resolve.Module[{IMEX}].Sect[{Entire Field}].OilInPSC")
It is possible to query individual well layer data in IMEX. In addition, executive commands can
be carried out to, for example, perform workovers.
Layer count
Examples
DoGet("Resolve.Module[{IMEX}].Well[{#3}].nLayer")
Examples
DoGet("Resolve.Module[{IMEX}].Layer[{#3}][{1}].LaySta")
DoGet("Resolve.Module[{IMEX}].Layer[{#3}][{1}].OilRatSC")
Note: the layer rates above represent average rates. A negative layer rate indicates
that the layer is backflowing with respect to its well type: for a producer this would be
an injecting layer, and for an injector a producing layer.
Examples
Example 1
In the script for voidage replacement, replace the summation of well rates by using group rates:
Qres = Abs(CDbl(DoGet("IMEX.Grup[{FIELD}].BHFRatRCInj")))
QWsurf = Abs(CDbl(DoGet("IMEX.Grup[{FIELD}].WatRatSCInj")))
End If
Example 2
Set the field oil production constraint to 1/1000 of current field mobile oil in-place and also make
the constraint be limited between 7.5 MBBL and 15 MBBL:
Next
End If
These are all options that can be set from the interface in the "data entry screen" of PSim.
FileName
Sets the PSim file name (.dek file) to be run.
DirName
Sets the directory name in which intermediate and output files are stored. This is ignored
if AutoDir (below) is set.
AutoDir
If this is set, then RESOLVE will generate a unique output directory name for every PSim
run which is performed from RESOLVE.
HostName
This is the name of the machine on which PSim will run.
RestartFile
The restart file name for the PSim run.
PSimVersion
A string that represents the PSim version to run, e.g. "2006.00.01.14". If this is not given,
PSim will run the version specified in the "driver configuration" screen.
GlobalProdWellControl / GlobalInjWellControl
These are the global control modes for production and injection wells, which can be
overridden on a well-by-well basis (see below).
IsLinux
This can be set to indicate that HostName (above) points to a Linux machine.
IPRModel
This can be set to:
0: block IPR
1: corrected IPR (recommended)
If the corrected IPR is selected, the pre-calculation well tests must have been performed.
This is currently only possible through the interface.
DoSmallTimestep
Set this flag to force PSim to perform a small timestep after every synchronisation, as
explained under the "Advanced options" screen.
Well Collection
The number of wells in the PSim model can be obtained by querying the PSim well collection:
nwell = DoGet("Resolve.Module[{PSim}].Well.Count")
Name / Label
(read-only) Returns the name of the well
Welltype (read-only)
Returns an integer which indicates the PSim well type, e.g. -3 for a water injector, as
documented in the PSim manual.
ControlModePrd / ControlModeInj
These are the control modes for a well, as distinct from the global control modes found
under the CaseDetails variable. The following values can be used:
1: BHP control
2: Rate control
3: THP control
HasLiftCurve (read-only)
This is set if the well in question has a lift curve table associated with it.
THPTable (read-only)
The THP table associated with a well (valid only if HasLiftCurve returns a non-zero
value).
IsLinked (read-only)
Flag to indicate whether a well is linked in the RESOLVE model.
IPRPreCalculated (read-only)
Flag to indicate whether the IPR pre-calculation has been performed for this well.
N2 / CO2 / H2S
The impurity content of the production (in ppm).
BHP / THP
Bottom hole and tubing head pressure from the simulator.
ResRate / CumRes
The reservoir (downhole) rate and the cumulative downhole production/injection in oilfield
units (RB/day and RB).
HasStarted
Indicates that the well has started, i.e. has had a finite production or injection since the
run started.
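As an illustration, the well collection could be traversed from the RESOLVE script to report the name and BHP of every well. Numeric indexing of the well collection is assumed to be available for PSim (as it is for modules other than IMEX):

nwell = CInt(DoGet("Resolve.Module[{PSim}].Well.Count"))
For i = 0 To nwell - 1
    name = DoGet("Resolve.Module[{PSim}].Well[" & CStr(i) & "].Name")
    bhp = DoGet("Resolve.Module[{PSim}].Well[" & CStr(i) & "].BHP")
Next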
Any module within RESOLVE may have an objective function, constraints, and/or control
variables set up.
Name
The name of the element, e.g. "Well1 WHP".
Unit
The measurement unit of the quantity for the optimisation element.
LowerBound
The lower bound of the control (if applicable through BoundsMask)
UpperBound
The upper bound of the control (if applicable through BoundsMask)
Perturbation
The perturbation to apply to the control, in the appropriate unit (note that this changes
dynamically during the run depending on the current trust region of the SLP: this quantity
is the first perturbation).
InitialBound
The initial "trust" bound to apply to the control, in the appropriate unit. As with the
perturbation, this changes dynamically during the course of the run.
CentrePerturbation
Flag to tell RESOLVE whether to perform a centre-based perturbation on the control
(rather than the usual, default, forward perturbation). See the "optimisation" pages for
more information.
MinimumPerturbation
The minimum perturbation that the control can use. The perturbation is adapted to the
current "trust region"; this prevents the perturbation becoming too small.
LimitType
The type of the constraint:
0 - less than
1 - equal to
2 - greater than
These variables correspond to the sources and sinks exposed by a parent module.
SrcSnk collection
There are no variables specific to this collection. Index individual items by number (SrcSnk[0])
or label (SrcSnk[{Well1}]).
The collection is read only. Items can be manipulated by calling an appropriate "command".
SrcSnk item
XPos / YPos
The position of the icon on the main screen
2 - injector
3 - undefined
Connection
Returns connection data if IsConnected is true.
Label
The label of the item to which this item is connected.
SrcSnk
A further collection of children of this item (as described on this page).
2.13.1.2.2.13 PubVar variables
These variables correspond to the variables that have been exported, or "published", from the
module in question.
Note that it is also possible to perform the exporting from an OpenServer "command".
The PubVar variable is a collection of data structures for each variable that has been exported,
as described below.
Variable (read-only)
The name of the variable. Once this is known, it can be placed between the curly
brackets ({}) that come after "PubVar" to refer to a variable.
Unit (read-only)
The unit of the variable in question. RESOLVE does not know anything about the unit
systems of the connected modules: indeed, the unit returned here may not exist in the
RESOLVE unit database. If a value is returned for this variable, therefore, it will always be
in this unit.
Writable (read-only)
Indicates whether the variable is writable or not in the module.
Value (read-only)
The value of the variable at the current time.
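As an illustration, the published variables of a module could be enumerated from the RESOLVE script as follows; the module label {GAP} is illustrative only:

n = CInt(DoGet("Resolve.Module[{GAP}].PubVar.Count"))
For i = 0 To n - 1
    v = DoGet("Resolve.Module[{GAP}].PubVar[" & CStr(i) & "].Variable")
    u = DoGet("Resolve.Module[{GAP}].PubVar[" & CStr(i) & "].Unit")
Next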
These variables allow the "equipment" that each module supports to be queried from RESOLVE
in a way which is independent of the module in question.
The collection of the equipment as supplied by the module to RESOLVE. This equipment
may be internal or external to the module: in other words, the equipment list is not limited to
just the sources and sinks that can be viewed on the main RESOLVE screen.
Label (read-only)
The name of the piece of equipment supplied by the module.
Type (read-only)
The type of the equipment, as supplied by the module
SubType (read-only)
The sub-type of the equipment, as supplied by the module.
A collection of variables.
The items of the variable structure are:
Label (read-only)
Unit (read-only)
The variable's unit. All queries on the variable's value will be returned in this unit.
Writable (read-only)
Determines whether or not the variable's value can be set, or whether it is only readable.
Value
Gets or, if it is writable, sets the value of the variable.
Driver Collection
There are no variables specific to this collection.
Index individual items by number (Driver[0]) or application (Driver[{GAP}]).
Driver Item
InterfaceVersion (read only)
The interface version of RESOLVE that this driver was built with.
ModLink Collection
For example, if module A is connected to module B and module C, there will be two entries in
this collection: A-B and A-C.
This collection is used to hold the calculation order data as well as some adaptive timestepping
data.
ModLink Item
CalcOrder
This is the order of calculation for this module pair. It is the number that is entered on the
"calculation order" screen.
Mod1
This is the name of the first module in the pair (the order is arbitrary).
Mod2
This is the name of the second module in the pair.
The following variables are part of the data required to set up adaptive timestepping in a
RESOLVE prediction.
TargetRMS
This is an array of RMS targets for the target variable (see below).
It is an array over all schedule records, for example the tag string:
Resolve.ModLink[2].TargetRMS[1] will obtain the second (zero indexed) schedule record
RMS target for the third module link object.
TargetVar
Similarly, this is an array of target variables over all schedule records for this module link,
e.g. water cut, THP.
2.13.1.2.5 Schedule variables
There are two variables that relate to the standard forecast scheduling: Resolve.Schedule and
Resolve.ScheduleList.
StartDate
The start date of the forecast run.
DebugTimesteps
Enables or disables the writing of timestep data to the log window.
TimestepMode
Select:
0 - fixed timesteps
1 - adaptive timestepping
InitialTimestep
The fixed timestep for fixed timestep mode, or the initial timestep if adaptive
timestepping is implemented
InitialTimestepType
"Initial timestep" is in:
0 - days
1 - weeks
2 - months
3 - years
MaxTimestep (MinTimestep)
Adaptive timestepping only. The maximum (minimum) size to which the timestep is
allowed to grow (shrink)
MaxTimestepType (MinTimestepType)
Adaptive timestepping only. See InitialTimestepType above
MaxTimestepMultiplier (MinTimestepMultiplier)
Adaptive timestepping only. The maximum (minimum) multiplier to take on the timestep
to determine the forthcoming timestep.
ClosedConnectionPenalty
Adaptive timestepping only. The penalty applied to the forthcoming timestep when
objects are not flowing.
EndDate[i]
The end date of the ith schedule record.
OptimisationMode
0 - optimise at every timestep
1 - no optimisation
2 - optimise at a given frequency
OptimisationFrequency
If OptimisationMode is set to 2, then this is the frequency at which the optimisation takes
place
OptimisationFrequencyType
See InitialTimestepType above
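As a sketch, adaptive timestepping could be configured from the RESOLVE script using the Resolve.Schedule structure introduced above; the values chosen are illustrative only:

Call DoSet("Resolve.Schedule.TimestepMode", "1")         ' adaptive timestepping
Call DoSet("Resolve.Schedule.InitialTimestep", "1")
Call DoSet("Resolve.Schedule.InitialTimestepType", "2")  ' months
Call DoSet("Resolve.Schedule.MaxTimestep", "6")
Call DoSet("Resolve.Schedule.MaxTimestepType", "2")      ' months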
Connection Collection
There are no variables specific to this collection. Index individual items by number only
(Connection[0]).
The collection is read only. Items can be connected by calling an appropriate "command".
Connection Item
Mod1
The first "module" connected.
Mod2
The second "module" connected.
Source
The first "source/sink" connected.
Sink
The second "source/sink" connected.
The source is considered to be the "data provider" and the sink is the "data acceptor", i.e. the
source/sink status is determined by the data flow direction, and not necessarily the fluid flow. A
case where the fluid flow is different to the data flow direction is that of injector coupling between
a reservoir simulator and a surface network: in this case fluid is passing from the network to the
simulator, but it is the simulator that supplies the network with IPR data.
The data providers have small dots at the top left hand corner of their icon on the RESOLVE
screen.
2.13.1.2.7 Properties variables
ForecastMode
Select:
0: Full Forecast
1: Full Forecast with Global Optimisation
2: Single Solve / Optimisation Only
EnableScript
Enables or disables the execution of the VB script.
SystemTitle
The title displayed at the top of the main window
DisplaySystemTitle
Toggle the display of the system title at the top of the main window
ReloadOnStart
Reload the applications at the start of the RESOLVE run
RunInParallel
Ensures that applications are timestepped and initialised in parallel. Turning this flag off
causes the applications to always be run sequentially.
These variables relate to the quantities listed on the "Optimisation Parameters" screen.
OptimisationMode
Affects how variables are reset at each timestep of an optimised forecast
0 - keep the controls from the last timestep as the starting point of the new
optimisation
1 - reset the controls to those from the start of the run as the starting point of the
new optimisation
MaxIter
Maximum number of "SLP" iterations.
MaxGrowth
Maximum growth multiplier of the trust region in the SLP (> 1).
MinGrowth
Minimum growth multiplier of the trust region in the SLP (< 1).
ObjFnTol
Tolerance on the convergence of the objective function.
ConstraintTol
Tolerance on how much a constraint can be violated for the solution to be considered
feasible.
LinearCritereon
The linearity test quantity for the trust region (see the "SLP" description for more
information).
The optimiser schedule consists of a list of "sub-schedules" which terminate at a given date and
which run concurrently; in each sub-schedule controls and constraints can be enabled or
disabled and the objective function can be changed. Alternatively, the optimiser can be disabled
altogether.
List
This references a list of the sub-schedules.
For example: Resolve.OptimiserSchedule.List.Count returns the number of sub-
schedules in the list, and Resolve.OptimiserSchedule.List[i]... allows the variables
for a given element of the list to be queried.
DisableAll
If set, this completely disables the optimiser for the duration of the sub-schedule.
Enable
This is an array over all control variables, constraints, and objective functions,
determining whether these individual elements are active or inactive in the sub-
schedule.
They are ordered as follows, for each module in the RESOLVE.Module array:
1. Objective functions for the module
2. Constraints for the module
3. Control variables for the module
The Event Driven Schedule variables refer to those variables used in the "Advanced
scheduling" feature of RESOLVE. An example of its use with OpenServer is given in the
drilling.rsa example.
Resolve.AdvancedScheduleStart
Resolve.AdvancedSchedulePreSolve
Resolve.AdvancedSchedulePostSolve, e.g. Resolve.AdvancedSchedulePostSolve[0]
[0].LHS.Expression
These are collections of aggregated conditions that refer to the advanced scheduling Start,
PreSolve, and PostSolve sections respectively.
A single condition would be represented by one item of one of the above collections, e.g.
Resolve.AdvancedSchedulePostSolve[i].
ConditionExecuted
The number of times the condition has been triggered during a run, e.g.
Resolve.AdvancedSchedulePostSolve[i].ConditionExecuted. At the start of the run
this value should be zero.
TimeConditionExecuted
The last time the condition was executed.
MaxNumberToExecute
The number of times the condition can be executed in a run before it is effectively
removed from the list (or ignored).
Action
A "structure" that represents the action to perform if a condition is triggered, e.g.
Resolve.AdvancedSchedulePostSolve[i].Action.RedoSolve
RHS
The expression for the right-hand side of the condition.
Condition
1 - greater than (LHS > RHS)
2 - equal to (LHS = RHS)
3 - not equal to (LHS <> RHS)
The expressions mentioned above consist of a single item: Expression. See the example at the
top of this page.
In addition, the condition contains a collection of AND or OR flags for the sub-conditions.
For example:
DoSet("Resolve.AdvancedSchedulePostSolve[0].Logical[0]", "0") - sets the first entry
in the collection to "AND", i.e. if <A> AND <B>
DoSet("Resolve.AdvancedSchedulePostSolve[0].Logical[0]", "1") - sets the first entry
in the collection to "OR", i.e. if <A> OR <B>
The action is the action to take when a schedule condition has triggered.
For example:
Resolve.AdvancedSchedulePostSolve[0].Action.VariableActions[0].Variable returns the
first entry in the list of variables to change.
Resolve.AdvancedSchedulePostSolve[0].Action.VariableActions[0].NewValue returns
the value of the first entry in the list of variables to change.
Action
The following structures are available from the Action structure.
RedoSolve
Governs whether, after a solve has been performed, the system should be resolved
(effectively, this would be used to "test" the change that has just been made). It is only
available in the PostSolve section of the event driven scheduling.
ClosureActions
Actions which cause the closure of a connection in Resolve.
VariableActions
Actions in which a variable changes value.
RankingActions
Actions, taken from the above lists, which are ranked according to an extra variable.
ClosureActions
For example:
Resolve.AdvancedSchedulePostSolve[0].Action.ClosureActions[0].Connection
Connection
A string that describes the connection to perform the action on. The string is of the form:
A~B, where A and B are the labels of the upstream and downstream objects
respectively. (An easy way to get this string is to use <Ctrl> <RClick> on an existing entry
in the list).
Action
The action to perform:
0 - open
1 - block
VariableActions
For example:
Resolve.AdvancedSchedulePostSolve[0].Action.VariableActions[0].NewValue
Variable
The variable string, e.g. Sep_GOR.
NewValue
The new value that the variable is to take
RankingActions
For example:
Resolve.AdvancedSchedulePostSolve[0].Action.RankingActions.NumberToExecute,
Resolve.AdvancedSchedulePostSolve[0].Action.RankingActions[0].ActionToPerform
RankBy
Corresponds to the "order to execute actions" box on the "Variable ranking" screen.
NumberToExecute
Corresponds to the "count" box on the "Variable ranking" screen.
ActionToPerform
This is the description of the variable that is to be ranked.
RankingVariable
An expression which determines how the variable is ranked
For example:
DoSet("Resolve.AdvancedSchedulePostSolve[1].Action.RankingActions
[0].RankingVariable.Expression", "10")
assigns the value "10" to the ranking of this variable.
The scenario manager variables give access to the "Scenario Manager" section of RESOLVE.
The variables are all to be found under the top-level structure: Resolve.Scenario.
Name
The name of the scenario
e.g. Resolve.Scenario[0].Name.
Once the name is obtained, the name can be used to index the collection, e.g.
Resolve.Scenario[{name}]...
AdvancedSchedulePreSolve / AdvancedSchedulePostSolve /
AdvancedScheduleStart
These are "event driven schedule" structures. They are covered in detail in the "Event
Driven Scheduling Variables" section.
AdvancedSchedulePreLoad
A collection of actions to perform prior to the scenario run being performed
e.g. Resolve.Scenario[0].AdvancedSchedulePreLoad.Count, Resolve.Scenario
[0].AdvancedSchedulePreLoad[0].Name
Name
The Resolve OpenServer string on which to call DoSet.
Value
The value of the OpenServer variable to set to.
LastRunDate (read-only)
The date that this scenario was last executed.
ErrorStatusMsg (read-only)
The error status of the last run (if any).
Results
Accesses the results for the scenario in question. Use <Ctrl> <RClick> from the scenario
results screen to examine the structure of these.
Example
To change the end date of the RESOLVE run for a given scenario (as part of the
scenario), the following could be used:
Call DoSet("Resolve.Scenario[0].AdvancedSchedulePreLoad.Add", "0")
' add an entry to the preload list
Call DoSet("Resolve.Scenario[0].AdvancedSchedulePreLoad[0].Name", _
"Resolve.ScheduleList[0].EndDate")
' set the OpenServer variable for the end of the run
Call DoSet("Resolve.Scenario[0].AdvancedSchedulePreLoad[0].Value", "01/01/2020")
' set the value of the OpenServer variable (change the date)
This set of instructions will change the end date of the run for Scenario[0].
The Resolve.VarLink collection gives access to the "Variable link" objects that allow any
variable to be transferred seamlessly between applications.
There are no elements to the top-level VarLink collection, except the normal operations such as
retrieving the collection count, e.g. Resolve.VarLink.Count.
Label (read-only)
The label assigned to the variable connection; it is always of the form A~B, where A is
the upstream module and B is the downstream module. Once obtained, this can be used
to index the collection.
StartModule
The name of the upstream module
EndModule
The name of the downstream module
Mask
Get or set the mask state of the variable connection
Disable
Get or set the disable state of the variable connection
The individual VarLink item described above itself is a collection of individual variable links,
e.g. Resolve.VarLink[{label}].Count, Resolve.VarLink[{label}][0].StartVarID.
StartVarID
The name of the variable in (exported from) the upstream module.
EndVarID
The name of the variable in (exported from) the downstream module
Shift / Multiplier
The linear transformation applied to the quantity as it is passed (e.g. for unit
conversions).
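As an illustration, the individual links of the first variable connection could be inspected from the RESOLVE script as follows:

label = DoGet("Resolve.VarLink[0].Label")
m = CInt(DoGet("Resolve.VarLink[{" & label & "}].Count"))
For i = 0 To m - 1
    sv = DoGet("Resolve.VarLink[{" & label & "}][" & CStr(i) & "].StartVarID")
    ev = DoGet("Resolve.VarLink[{" & label & "}][" & CStr(i) & "].EndVarID")
Next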
2.13.1.3 Commands
The following list describes the OpenServer commands with their arguments. Some of these
arguments may be optional - if this is the case they will be specified by "arg = (default value)".
All the commands are listed on the left hand side of the "OpenServer test screen".
NewFile()
Clears the current RESOLVE file and creates an empty system.
OpenFile(filename, mode = 0)
Opens the file "filename". If mode = 1 the file is opened in "Results Only" mode, i.e. the
client applications are not loaded and the only functionality enabled is the ability to view
the results.
CloseFile()
Closes down the current model. Equivalent to File | Close from the main menu.
SaveFile()
Saves the current RESOLVE file
It is also possible to save, for instance, the current GAP file using the following
OpenServer command:
DoCmd("Resolve.Module[{GAP}].SAVEFILE("""C:\test.gap""")")
SaveAsFile(filename, overwrite = 0)
Saves the current RESOLVE file as "filename". If "overwrite" = 0 (default) the command
will return an error if the file already exists. overwrite = 1 forces the file save.
BrSave()
Performs a broadcast save, i.e. broadcasts a save command to all the client modules.
Note that not all modules (e.g. Eclipse) may implement a Save command (the
Petroleum Experts products always do).
SaveMod(label)
Broadcasts a Save command to the module specified by "label". Note, as with BrSave
above, that not all modules implement the Save command.
CreateArchive(archivefile)
Creates an archive with the given name of the current RESOLVE model. All the
RESOLVE module files will be contained in this archive, and (along with BrSave()) it can
be used to store the state of the model at the end of a run.
ExtractArchive(archivefile, new-path)
Extracts the archive specified by "archivefile" to the path given by "new-path".
MakeProjectFile(file)
Creates a text file ("file") which contains all the module files that are used in the
RESOLVE model.
ShutDown()
Shuts down RESOLVE.
Run()
Runs the RESOLVE prediction. This call blocks until the run is complete.
RunOneStep()
Runs a single step of the RESOLVE prediction. This call blocks until it is complete.
RunEnd(runtoend = 0)
Terminates the current prediction if single-stepping through the run.
If runtoend = 1 the run will be completed to the end of the schedule.
Optimise()
Perform a RESOLVE optimisation.
SaveStream(streamname, maxnames=10000)
This saves the current forecast results under a new stream name called "streamname".
If "maxnames" is set then, if the total number of saved streams exceeds this value, the
oldest stream(s) is/are deleted.
SaveOptStream(streamname, maxnames=10000)
The same as SaveStream() for optimisation results.
Model building
CreateModule(driver, x, y, label)
driver - the name of the driver (or application name), e.g. GAP
x - the x coordinate of the icon on the screen (left = 0)
y - the y coordinate of the icon on the screen (top = 0)
label - the label to give to the module
DeleteModule(label)
label - the name of the module to delete
LoadModule(label)
label - the name of the module to load. This loads the module with the current module
data (e.g. for a GAP module the GAP application will start and load up the required case)
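As a sketch, a small model could be built and loaded from the RESOLVE script as follows. The argument quoting follows the EXPORTVARIABLE example in this section, and the module label "Prod" is illustrative only:

Call DoCmd("Resolve.NEWFILE()")
Call DoCmd("Resolve.CREATEMODULE(""GAP"", ""100"", ""100"", ""Prod"")")
Call DoCmd("Resolve.LOADMODULE(""Prod"")")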
ExportVariable(module,equipment,variable,newvarname)
Exports ("publishes") a variable from the client application specified by "module".
The "equipment" is the item of equipment that "owns" the variable.
The "variable" is the variable name. This can be obtained from an OpenServer string of
the form: Resolve.Module[{xxx}].Equip[{yyy}].Variable[n].Label. In Hysys and UniSim
Design, it can be obtained by right-clicking on the icon and going to the "Object
Browser".
The "newvarname" is the name to which the exported variable will be referred in
RESOLVE.
Example:
To export a solver variable from GAP (the variable names can be obtained using the
method above):
DoCmd("Resolve.EXPORTVARIABLE(""GAP"",""Sep1"",""GOR"",""myGor"")")
RemoveVariable(module,variable)
This removes an exported variable. "module" is the name of the client module which has
already exported the variable. "variable" is the name applied to the variable (equivalent
to "newvarname" in the ExportVariable entry above).
RemoveAllVariables(module="")
This removes all the variables exported from "module". If this is defaulted, then all
exported variables from all modules will be removed.
AddScenario(name)
Adds an empty scenario to the scenario manager with a given name.
NewScenario(name)
The same as AddScenario().
RemoveAllScenarios()
Clears the scenario manager of all its contents.
RemoveScenario(name)
Removes the scenario titled "name" from the scenario manager.
SetScenarioAsCurrentSchedule(scenariolabel)
The reverse of CopyScenario() where src-scenario is defaulted. This copies a named
scenario "scenariolabel" to the current event driven schedule.
RunScenario(scenario)
Runs the given scenario in the current RESOLVE session (i.e. LSF is not used even if it
is available).
RunAllScenarios(use-lsf=0)
Runs all the scenarios in a single batch job. If "use-lsf" is specified, this will pop up the
application that allows the selection of LSF nodes for the runs. This will obviously require
some user input and goes against the normal rule of "no user interface components for
OpenServer commands", so a "DoSlowCmd" should be used.
Miscellaneous commands
ChangePath(newpath)
This goes into all the client modules that support the feature and changes the model file
path from the current path to "newpath". This does not force a reload of the models: if it is
used, the file should be saved and then reloaded.
ExModOSCmd
This is obsolete.
To execute a command in GAP, the syntax DoCmd("Resolve.Module[{GAP}].MOD
[{PROD}].Well[{W1}].Mask()") is allowed and preferred.
For instance, to save a GAP file, the following command can be used:
DoCmd("Resolve.Module[{GAP}].SAVEFILE("""C:\test.gap""")")
These archives contain the required Excel spreadsheet that implements the macro, as well as
any other associated files.
More information as well as a list of the OpenServer examples provided can be found in the
"Worked Examples" section.
The plug-in DLL needs to be "pointed" at the Excel file created by the user. Once the DLL is
"Registered" this is performed from the "parameters" button of the optimisation "Summary"
screen.
The optimiser calls subroutines in the Excel macro. Data (e.g. the control operating points for
the next iteration) are returned to RESOLVE by placing the required values on specified cells on
a worksheet.
Comprehensive information on the data transfer, the workflow, and the operation of the macro
subroutines can be found in the Excel spreadsheet sample provided.
If a file browser is opened to open or save a RESOLVE model, or any file browser is opened
under RESOLVE to open a client file (e.g. GAP, REVEAL, or Hysys), then RESOLVE will hang
if the "My Documents" directory is browsed.
This is the result of a bug in the Microsoft system DLL "mydocs.dll" when used with
applications like RESOLVE which operate on COM objects in a multithreaded manner. There is
no workaround to this problem.
As a result, users running under Win2000 are forced to set up and use the data directory
under the "System preferences". This avoids the possibility that the browser will
automatically open in the "My Documents" directory, but does not stop the user from
subsequently browsing to this directory (in which case RESOLVE will hang).
3 Examples Guide
This section contains some step-by-step examples that illustrate how to set up advanced
integration models for various purposes.
The examples concern connections between a variety of different applications, not all of
which are products provided by Petroleum Experts. For further help it may be necessary to
refer to the documentation for these applications.
There is also additional help in the help files of the RESOLVE drivers. These can be accessed
from the Help Viewer window of the main RESOLVE screen.
The models used in these examples can all be found under the samples sub-directory of the
main installation.
The models are distributed as RESOLVE archives characterised by the ".rsa" extension.
A detailed index of the examples available can be found in the "Worked Examples - Index"
section.
The examples in this guide are divided into the following general sections:
The following table can be used as reference for the worked examples included in this guide:
The objective of this section is to demonstrate how to set up a complete RESOLVE model.
Two fields are being produced: one oil field and one retrograde condensate field. For the first
five years of the fields' life, there is no pipeline to export the gas to the gas market.
For environmental reasons, the gas produced cannot be flared.
Therefore, the only solution is to re-inject the produced gas into the reservoirs.
The production system has been modelled in GAP. A gas injection system has also been
designed and modelled in GAP.
The engineers have to check that the gas injection system designed is able to re-inject the gas
produced at every point in time during the first five years of production.
To do so, RESOLVE can be used to connect the GAP production system with the GAP gas
injection system, making it possible to model the re-injection capacity at every point in time
during the first five years of production.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_1-Getting_Started\Example_1_1-Getting_Started
This folder contains a file "Getting_started.rsa", a "RESOLVE archive file" that contains the
RESOLVE file, GAP file and other associated files required to go through the example. The
archive file needs to be extracted either in the current location or a location of the user's
choice.
Step 1 Objective:
Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New on the main menu bar or the icon
on the shortcut icon bar.
This opens a graphical view window that will be used to represent graphically the different
modules used in the model and the connections between them.
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name
(Getting_Started.rsl).
Go to Step 2
3.3.3 Step 2 - Create production model instance
Step 2 Objective:
Create the GAP production system instance
From the main menu, go to Edit System | Add Client program or select the icon on the
shortcut bar and from the resulting menu, select "GAP".
The cursor, when held over the main screen, will change to indicate that an instance of the
application can be made.
Click in the graphical window to specify the location of the GAP icon, and give the case a label
(for instance: "Production").
Double-click on the GAP icon and the following screen will appear:
This screen is used to set up the basic options of the GAP connection to be established.
The first step will be to specify the ".GAP" file to be used in the file name section, as illustrated
above.
The file used in that specific case will be: Getting_Started.gap
GAP files can contain up to four different systems: production, water injection, gas injection and
lift gas injection all associated within the same GAP model.
Once the model considered has been chosen through the Browse option, one needs to select
which system is to be loaded in RESOLVE. In this case, we want the production system to be
loaded, so the Main System option is chosen.
The second step is to make sure that the "Always save forecast snapshots" option has been
selected. This will be useful when analysing the results in step 8 of this step-by-step guide.
The other options available on this screen are defined in more detail in the "Loading and
Editing a GAP case" section.
Select OK and GAP will start up and load the required case (i.e. the GAP model will be opened
on the taskbar at bottom of screen).
It will then query the case for its inputs and outputs and will display these on the screen as shown
below.
The icons can be moved individually by selecting the "move" icon on the toolbar ( ) and then
dragging them to the required positions (Groups of icons can be moved by first using the select
icon , dragging over the group to select them and then dragging them as a group to the
required location).
When selecting the GAP model from the taskbar at the bottom of the screen, it is possible to
notice that the wells from this model are connected internally in GAP to tanks modelled with
MBAL, as illustrated below.
In this model, two reservoirs are being produced through two different surface network systems:
one oil reservoir and one retrograde condensate reservoir. Note that many of the wells are
masked: these wells are scheduled to come on line at different times during the forecast.
The icons displayed in the RESOLVE model will therefore include all the data acceptors (i.e.
sinks) for the model, in this case the wells, and all the data providers (i.e. sources), in this case
the separators, as illustrated above.
Two of these (OIL SEP and COND SEP) just represent the fixed pressure points at
the top of the oil and condensate systems. These are not important for the purposes of
this example.
The GAS COND and GAS OIL separators represent the gas-separation streams of
the former separators. It is these streams that we would like to reinject into the MBAL
tanks through the injection model.
This production GAP model has an associated gas injection model. It is possible to visualise
both models simultaneously by selecting Window | Tile vertically in the GAP main menu.
It is possible to see that the gas injection system associated with the production system is also
split into two sections: one re-injecting into the oil reservoir and one re-injecting into the
retrograde condensate reservoir.
The next step will illustrate how to load the injection system in the RESOLVE model.
Go to Step 1 or Step 3
3.3.4 Step 3 - Create injection model instance
Step 3 Objective:
Create the GAP injection system instance in the RESOLVE model
The next step is to load the gas injection system that shall be used to reinject the production gas
into the MBAL tanks.
The gas injection system is going to be taken from the production system GAP file: the injection
system has been associated with the production model in the GAP setup. Please refer to the
GAP manual for further information regarding this procedure.
In order to load the injection system, it will be required to create an instance of GAP on the
RESOLVE screen as per the previous step and give this instance a label (for instance:
"injection").
Once this is done, double-click on the new GAP icon and browse for the same file (GAP
production model) as done previously:
This time the system selected should be the "Associated Gas Injection" system as
shown above.
When OK is pressed, the gas injection system is opened and queried for its inputs and outputs.
In order to achieve produced gas re-injection, the two injection manifolds are to be connected
with the gas streams output from the production system.
Go to Step 2 or Step 4
3.3.5 Step 4 - Connect the modules
Step 4 Objective:
Connect the two GAP modules
In the case of a relatively simple model such as the one considered here, the various elements
can be connected directly from the graphical view.
For more complex models involving a large number of connections, these links can also be
established through the Connection Wizard.
To make the links, click and drag between the icons to be connected.
The gas separated from the oil system is to be reinjected into the oil tank, and the gas
separated from the condensate system is to be reinjected into the condensate tank.
To make this happen, connect the GAS OIL separator to the IM1 injection manifold, and
connect the GAS COND separator to the IM3 injection manifold.
When a forecast is run, data will be passed between the separators and the injection manifolds.
This data will include the pressures, temperatures, and black oil phase rates. As this is a fully
compositional model in GAP, it will also include the EOS data (molar fractions and EOS
properties) and mass flow rates.
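The quantities passed at a connection can be pictured as one record per timestep. The sketch below is purely illustrative (the field names are hypothetical, not RESOLVE's internal data layout); it simply mirrors the quantities listed above:

```python
from dataclasses import dataclass, field

@dataclass
class ConnectionData:
    """Illustrative record of the data RESOLVE passes at one connection.

    Field names are hypothetical; they mirror the quantities described
    in the text (pressure, temperature, black oil phase rates, and, for
    compositional models, molar fractions and mass flow rate).
    """
    pressure_psig: float
    temperature_degF: float
    oil_rate_STB_d: float
    water_rate_STB_d: float
    gas_rate_MMscf_d: float
    molar_fractions: dict = field(default_factory=dict)  # compositional only
    mass_rate_lb_hr: float = 0.0                         # compositional only

# Example payload for the GAS OIL separator -> IM1 manifold link
# (all values illustrative)
sep_to_manifold = ConnectionData(
    pressure_psig=450.0, temperature_degF=120.0,
    oil_rate_STB_d=0.0, water_rate_STB_d=0.0, gas_rate_MMscf_d=25.0,
    molar_fractions={"C1": 0.82, "C2": 0.10, "C3+": 0.08},
)
```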
In this case, the pressure of the separator will be applied to the injection manifold. The phase
rate will be applied as a constraint on the injection manifold. The result of this is that the
injection system will be allowed to inject as much as it can with an injection pressure
equal to the separator pressure as long as it does not exceed the amount passed from
the production system.
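The behaviour described above can be summarised in a short sketch (a simplification for illustration, not the actual GAP solver logic): the injected rate is whatever the network can achieve at the passed pressure, capped by the rate received from the production system.

```python
def injected_gas_rate(capacity_at_sep_pressure, produced_gas_rate):
    """Simplified view of the injection manifold boundary condition.

    capacity_at_sep_pressure: the rate the injection network could
        achieve with the separator pressure applied at the manifold.
    produced_gas_rate: the rate passed from the production system,
        applied as a maximum-rate constraint on the manifold.
    """
    return min(capacity_at_sep_pressure, produced_gas_rate)
```

If the network could inject 30 MMscf/d but only 25 MMscf/d is produced, the constraint is honoured by choking back to 25 MMscf/d; if the capacity is only 20 MMscf/d, part of the produced gas cannot be reinjected.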
Of course the separator pressure is unlikely to be high enough to allow injection back into the
reservoir. For this reason compressors have been included in the GAP injection system. For
simplicity, fixed pressure increases have been used in this case to raise the pressure and allow
injection, although it should be noted that in general this is not a good way to model a
compressor and it is not recommended, as it might lead to negative pressures being observed
in the GAP model.
If all the fluids passed into a GAP injection system are to be reinjected (i.e. regardless of
whether the injection system is actually able to achieve this), then a GAP source can be used
instead of an injection manifold. GAP floats the pressure of these source items so that all the
fluid passed to them is injected, and so the continuity of pressure across application boundaries
can be worked around.
Go to Step 3 or Step 5
3.3.6 Step 5 - Enter schedule
Step 5 Objective:
Enter the schedule forecast
Once the links between the different nodes have been set up, it is possible to specify the
forecast data: start date, end date and timestep length.
The simulation start date happens to coincide with the tank start date and can be selected by
clicking on the "Select from client modules" button.
The end date and the timestep length should be entered as shown.
Further information on the different options of that screen can be found in the "Schedule"
section of this manual.
Prior to the forecast being run, it will be important to define the variables that have to be
exported to the RESOLVE results.
Go to Step 4 or Step 6
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates.
In this example, we would expect to see the pressure, oil produced, water produced, and gas
produced reported for the connections between GAS OIL and IM1 and GAS COND and IM3.
In addition, we would like to see how much of the gas that is produced by the production system
ends up being reinjected.
Also, in the case where all the gas is reinjected, we would like to see how far the wells need to
be choked to achieve this.
The variables that need to be reported in addition to the standard RESOLVE variables are
therefore:
IM1 Gas Rate
IM3 Gas Rate
WINJ1con dP choke
WINJ2con dP choke
WINJ2oil dP choke
WINJ3oil dP choke
RESOLVE can automatically build a list of the GAP variables available and that list can be
reported directly through the RESOLVE interface.
To obtain that list, right-click on the GAP injection icon and select the "Output Variables"
option.
The following screen will be displayed (note that this might take some time for models where a
large number of variables have to be retrieved). Select the element considered (for instance
IM1) and the variable associated with it that needs to be reported (here, the gas rate), then
use the "Add From List" button to add it to the list of variables to be reported.
Once all the variables mentioned above have been selected, the screen will be displayed as:
Once the variables to be reported have been selected, click OK and proceed to the next step,
which illustrates how to run the prediction forecast.
Make sure that there are no screens left open in GAP as this can interfere with the remote
operation of GAP by RESOLVE.
Go to Step 5 or Step 7
3.3.8 Step 7 - Run the forecast
Step 7 Objective:
Run the forecast
The first thing that happens is that both modules perform their respective initialisations.
In the case of GAP, this means loading the tank data and initialising them (i.e. running a history
if necessary) to the forecast start date.
For compositional runs, RESOLVE will also obtain the composition names from the different
applications (in this case, the production and gas injection systems). In general the
composition names used by different applications may differ, and thus RESOLVE must be
told which compositions correspond to which across the models. In this case the situation is
simpler: the composition names in the production and injection systems are identical.
Nevertheless, the same process of mapping composition names must still be carried out.
The following screen (the "Composition table" screen) will be presented:
Select the GAS OIL to IM1 connection: the left hand side list will specify the list of components
and their names used by the GAS OIL separator. The right hand side list will specify the list of
components and their names used by IM1.
If the components are in the same order in both lists, which is the case here, click "Add All" to
automatically make the correspondences between the left and right hand lists.
Once this is done, the following screen will appear, illustrating the component mapping that has
been done for this connection.
If the components were NOT in the same order, it would have been necessary to manually
select each component in the lists and click on "Add Individual Connection" to establish the
different connections.
The same procedure has to be done for the GAS COND to IM3 connection.
Once both connections have had their compositions mapped, the icon next to each connection
will turn from a red cross to a green tick, indicating that a full composition mapping has been
performed in the model.
It is important to note that this step will NOT be necessary if linking models that use black oil
PVT descriptions.
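Conceptually, "Add All" performs a positional pairing of the two component lists, while "Add Individual Connection" builds the pairs one at a time. A minimal sketch of the idea (illustrative only, with made-up component names):

```python
def map_components(source_names, target_names):
    """Pair components positionally, as the "Add All" button does.

    Only valid when both lists have the same length and the
    components appear in the same order.
    """
    if len(source_names) != len(target_names):
        raise ValueError("component lists differ in length")
    return dict(zip(source_names, target_names))

production = ["C1", "C2", "C3", "C4+"]   # names used by the separator
injection = ["C1", "C2", "C3", "C4+"]    # names used by the manifold
mapping = map_components(production, injection)
# Out-of-order lists would instead require explicit pairs,
# analogous to "Add Individual Connection".
```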
Click "OK" to continue the forecast - The status of the run will be displayed in the RESOLVE
"Calculation Progress" screen, as illustrated below.
Once the forecast has been performed, it will be possible to visualise and analyse the results,
as described in the next step.
Go to Step 6 or Step 8
The objective of this example is to pass the produced (separated) gas stream from the oil and
gas condensate reservoirs to be injected back into the reservoirs through the injection model.
For the run, GAP performs a solve and optimisation of the production system with the initial tank
data. The production will be taken from the separator gas streams and passed to the injection
system. As mentioned earlier, the pressure will be passed to the injection manifold and the gas
phase rate will be applied as a constraint to the injection manifold. The injection system will then
be solved and optimised while honouring the gas rate constraint. To honour the constraints, the
injection model may have to choke back the injection wells. GAP will then be ready to take a
timestep.
The timestep should be 2 months (as entered on the "forecast data" screen), but note that
RESOLVE performs a timestep on the 01/12/2005. This is because RESOLVE adds an extra
timestep to synchronise with the GAP schedule bringing on wells in the production system.
Several subsequent timesteps will also be truncated in this way.
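The truncation behaviour can be reproduced with a small sketch: regular steps are generated, but any schedule event falling strictly inside a step splits it, so that the models synchronise exactly on the event date. The start/end dates below are illustrative, not the actual forecast dates.

```python
from datetime import date

def timestep_dates(start, end, step_months, events):
    """Regular timesteps, truncated at schedule events (simplified)."""
    def add_months(d, n):
        m = d.month - 1 + n
        return date(d.year + m // 12, m % 12 + 1, d.day)

    dates, current = [start], start
    while current < end:
        nxt = min(add_months(current, step_months), end)
        inside = [e for e in events if current < e < nxt]
        if inside:                       # truncate at the first event
            nxt = min(inside)
        dates.append(nxt)
        current = nxt
    return dates

# A 2-month schedule with one well coming on line on 01/12/2005:
steps = timestep_dates(date(2005, 11, 1), date(2006, 4, 1), 2,
                       events=[date(2005, 12, 1)])
# 01/12/2005 appears as an extra, truncated timestep.
```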
The forecast will take some time to reach 2015, although the run can be ended at any time by
going to Run | Stop or clicking on the stop button on the toolbar.
One can now view the results of the run in two different formats:
To see the results in tabular format, select the Results | View Forecast Results
(Table) section or the following icon:
To see the result in a plot format, select the Results | View Forecast Plots section or
the following icon:
One of the interesting elements of this model is to compare the amount of gas that is produced
to the amount of gas that is injected.
Select the "IM1" variable entry in the left hand side list. A list of variables that are
available for display appear at the bottom left hand corner of the screen, as illustrated
below.
Select the "Gas Injected" variable and click on the button. The following
screen will appear, which enables the user to select the nodes of the system for
which the evolution of the "Gas Injected" variable should be displayed. In this case,
select both the IM1 and IM3 variables, as illustrated below.
Once this is done, click "OK" and the following plot will be displayed, illustrating the
gas injected into both the IM1 and IM3 manifolds. Note that the injection stops
for both injection systems on the 01/01/2010: this is due to the GAP schedule, which
specifies that both injection systems are MASKED after this date.
The next step is to compare the amount of gas injected with the quantity of gas
produced in each system, to verify whether the injection systems in place are able to
re-inject all the produced gas.
To do so, the same procedure can be used:
Select the IM1 connection (! not the variable, the connection itself) which will
contain the gas produced by the production system and passed to the injection
system as a constraint in the list of variables on the left hand side of the screen.
Select the "Gas Produced" variable at the bottom of the screen and click on the
button.
The following screen is displayed, enabling the user to select the connections for
which the gas produced variable has to be displayed. Select both the IM1 and IM3
connections, as illustrated below.
Notice that until 01/01/2010 (i.e. when the injection system is switched off
through the GAP scheduling) the injection rate is the same as the
production rate: this confirms that the injection system is able to re-inject all
the gas that is produced.
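The same comparison can be made programmatically on results exported from the table view. The sketch below (the row layout is hypothetical, e.g. rows copied out of the RESOLVE forecast results table) flags any date where the injection system fails to keep up with production:

```python
def injection_shortfalls(rows, tol=1e-6):
    """Return the dates at which injected gas < produced gas.

    rows: iterable of (date_string, produced_rate, injected_rate)
    tuples; a shortfall means not all produced gas was reinjected.
    """
    return [d for d, produced, injected in rows if produced - injected > tol]

# Illustrative values only:
rows = [
    ("01/01/2006", 24.8, 24.8),   # all produced gas reinjected
    ("01/03/2006", 25.1, 25.1),
    ("01/05/2006", 26.0, 24.5),   # injection limited by system capacity
]
flagged = injection_shortfalls(rows)
```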
It is also possible to see an increase in production following 2010 due to more wells
(i.e. condensate wells) being brought on.
As the injection and production rates are the same before 2010, chokes are being set
on the injection wells to meet the maximum gas injection constraint.
To display these, select the injection wells in the left hand side list, select the dP
choke variable, and add these variables to the plot, as illustrated below.
This plot illustrates the pressure losses required across the wellhead chokes
to meet the gas injection constraints.
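The reported dP choke values are simply the pressure drop each wellhead choke must take so that the injection rate constraint is honoured; schematically (an illustration with made-up numbers, not the GAP optimiser itself):

```python
def choke_dp(upstream_pressure, downstream_pressure_for_target_rate):
    """Pressure loss a wellhead choke must take (simplified).

    The optimiser lowers the pressure downstream of the choke until
    the injection rate constraint is met; the choke absorbs the
    difference.
    """
    return max(0.0, upstream_pressure - downstream_pressure_for_target_rate)

# e.g. 3000 psig upstream, 2600 psig needed to hit the target rate:
dp = choke_dp(3000.0, 2600.0)   # 400 psi across the choke
```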
A very good way to investigate what happens in the model at each timestep is to go to the
GAP model and reload a prediction snapshot at a specific date.
This is possible because the option to "Always save forecast snapshots" was selected
when the GAP model instances in RESOLVE were created. This automatically tells GAP that
prediction snapshots should be saved at each timestep. (This can also be manually input in
GAP by using the prediction calculations "settings" option).
To reload a snapshot, go to the GAP production model (for example) and select Prediction |
Reload prediction snapshots.
This will ask for the file to be saved, and will then call up the various timesteps at which
snapshots were saved, as shown below.
If we select the date shown, we will reload a snapshot of the model at that point in time.
By double-clicking on the Gas oil separator in the production model and going to the results tab,
we can see how much gas was separated from the stream. This volume was placed as a
constraint on the IM1 injection manifold for that timestep, as shown below.
This volume acts as a constraint, enabling the GAP optimiser to calculate the
pressure losses needed across the wellhead chokes to respect it.
The objective of this section is to demonstrate how to set up a connection between GAP
production and injection models and a reservoir simulation model built in REVEAL.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\REVEAL
\Example_2_1_1-GAP_REVEAL
This folder contains a file "GAP_REVEAL.rsa", a "RESOLVE archive file" that contains the
RESOLVE file, REVEAL file, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
Go to Step 1
Step 1 Objective:
Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
REVEAL.rsl).
Go to Step 2
Step 2 Objective:
Create a REVEAL instance in the RESOLVE model
The next step is to create instances of the various applications that have to be connected
through RESOLVE.
From the main menu, go to Edit System | Add Client program or select the icon.
From the resulting menu, select "REVEAL".
The cursor, when held over the main screen, will change to indicate that an instance of the
application can be made.
Click on the main screen where to position the REVEAL icon, and give the case a label (say,
"Reservoir").
For the file name, browse to the file "Reservoir.rvl" as shown above.
Note that from this screen it is possible to select a remote host on which REVEAL can be run.
This is especially useful when several reservoir models are included in the same RESOLVE
model, in which case it will probably be worthwhile to run these simulations in parallel.
When OK is pressed, REVEAL will start and load the required case.
It will then query the case for its sources and sinks (wells) and will display these on the screen as
shown below.
The icons can be moved by selecting the "move" icon on the toolbar ( ) and then dragging
them to the required positions.
Note that we have four producers and four gas injectors linked to the reservoir model.
Go to Step 3
3.4.1.1.4 Step 3 - Create GAP instances
Step 3 Objective:
Create the GAP production and gas injection instances in the RESOLVE model
Repeat the previous step to create an instance of GAP on the RESOLVE main screen.
Label the GAP case created "Production". Browse for the model "Production System.GAP".
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
Also select the option to "Always save forecast snapshots" under snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP file contains a production and gas injection model. In the production model, four production
wells, Well1, Well2, Well3 and Well4 are seen. These are the same wells as were identified
from the REVEAL case. The GAP model will be open on the taskbar, so the GAP interface can
be inspected to confirm the contents of the GAP file.
Once this has been done, the RESOLVE model will be as follows:
Repeat the previous step to create an instance of GAP on the RESOLVE main screen for the
gas injection system.
Label the GAP case created "Gas_Injection". Browse for the case "Production System.GAP"
and select the Associated Gas Injection option. Effectively, both the production and gas
injection models were associated in the GAP model itself: the Production System.gap file will
therefore include both the production and the gas injection model.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. Four
injection wells, GInj1, GInj2, GInj3 and GInj4 will be found. These are the same wells as were
identified from the REVEAL case. It is possible to look at the GAP interface to confirm the
contents of the GAP file.
Go to Step 4
3.4.1.1.5 Step 4 - Establish connections
Step 4 Objective:
Establish the connections between the GAP and the REVEAL models
Connect the Well1 icon in REVEAL to Well1 in GAP by clicking into the first icon and dragging
the connection to the second.
Repeat this for Well2, Well3, Well4 and GInj1, GInj2, GInj3 and GInj4.
Note that it is also possible to make the connections using the "Connection wizard". This is
obtained by invoking Edit System | Connection Wizard under the main menu and is
especially useful when dealing with a large number of connections.
Go to Step 5
Step 5 Objective:
Finalise the RESOLVE model setup
Before the simulation is run, further changes have to be made to the system configuration.
There are various settings that can affect the way the system behaves.
A detailed description of the different techniques used to determine these IPRs, along with their
respective advantages and disadvantages, can be found in the "IPR Generation Options"
section.
This allows the user to choose whether the IPR calculation in REVEAL is based on block or on
drainage region pressures.
When using this method, a calculation has to be performed prior to launching the forecast. This
has to be done only once when the RESOLVE model is set up, UNLESS the well models
specified in REVEAL are modified. Click on the setup tab and the interface below appears. The
calculate button has to be selected to perform the pre-run calculation required for this method.
Once the calculation is complete, the setup button will become green.
Click "OK" to go back to the main screen.
It is possible to monitor the inflow performance relationships (i.e. IPR curves) that are passed
from the reservoir model to the surface network model at every timestep - to do so, make sure
that the Run | IPR Logging option is selected in the main RESOLVE menu.
REVEAL will then control that well with the fixed boundary condition for the duration of the next
RESOLVE timestep. The settings indicated below show that the wells in this REVEAL model are
controlled between solves according to the system response.
This algorithm selects, at each timestep, the fixed boundary condition that will have the
minimum effect on the accuracy of the prediction.
To set up this option, double-click on the REVEAL icon to view the REVEAL data entry screen
and go to the "Associated Data" tab.
This allows the well control mode to be chosen, as illustrated below.
On this screen it is also possible to select a restart option, i.e. whether REVEAL is to be started
from the initial state of the reservoir model or from a specified restart stage.
Stop RESOLVE from reloading the models at the start of the run
It is sometimes useful to have RESOLVE reload the files at the start of the run: some forecasts
may leave wells masked or separator pressures changed, in which case the original file needs
to be reloaded when a new run is started.
This is the default option in RESOLVE; however, it can be time consuming, especially when
dealing with large models.
If there is no reason for the models to be reloaded before the start of the run, then it is possible
to modify this option by going to Options | System options and change the settings to "Do
not reload client modules".
If this is the option selected, then RESOLVE will not reload the models at the beginning of the
run, but will use the models that are currently open on the machine.
Go to Step 6
3.4.1.1.7 Step 6 - Setup RESOLVE schedule
Step 6 Objective:
Setup the RESOLVE Schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2.1.3" section.
To set up the basic RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates.
The timestep and schedule duration are also entered here as shown.
Here GAP and REVEAL will be synchronised every 2 months until the schedule completes on
1/1/2013.
Go to Step 7
3.4.1.1.8 Step 7 - Publish variables
Step 7 Objective:
Publish the variables to report in the RESOLVE results section
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates and ratios. In this example, we wish to report all
the variables for the wells and separator in the GAP production model as well as the injection
wells and manifold in the GAP injection model.
RESOLVE can automatically build a list of the available GAP variables, and these can be
reported directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows variables to be imported from the various applications present in the model.
Select the 'Production' tab, and click Edit variables. A list of variables is available to import.
These consist of output variables such as solver results or cumulatives, or input variables such
as constraints or items' masking variables. If a variable is required and is not included in this
list, it is always possible to Copy and Paste the corresponding OpenServer string in the
'Variable string' field.
Select 'Sep1' and click the red arrow: this will import all the variables corresponding to Sep1.
Repeat this for the four production wells.
Click OK. In the screen below, click 'Plot invert selection' in order for a tick to appear in the 'Add
to plot' column for all variables. This ensures that all variables are accessible from the plotting
window.
Go to the 'Gas_Injection' tab and do the same for the gas injection model. Output all variables
for the injection wells and the manifold (IM1), and proceed to the next step.
Make sure that there are no screens left open in GAP as this can interfere with the remote
operation of GAP by RESOLVE.
Go to Step 8.
3.4.1.1.9 Step 8 - Run the forecast
Step 8 Objective:
Run the prediction forecast
To run the forecast from beginning to end without stopping, press the icon.
Note that the run can be paused or stopped with other toolbar icons.
Once the forecast has been started, REVEAL will perform an equilibration calculation. If it was set up to load a restart file, it will do this instead.
The equilibrated reservoir data will be passed to GAP in the form of well IPR curves (i.e. for both
producers and injectors). GAP will use this data to solve and optimise the system. The solution
points will then be returned to REVEAL, ready to take the first week's timestep. Before this, the RESOLVE forecast enters "pause" mode.
The results of the run for the first timestep can be checked briefly by holding the mouse over a
connection icon:
To run the rest of the forecast without stopping, press the icon. Note that the run can be
paused or stopped with other toolbar icons.
Go to Step 9
3.4.1.1.10 Step 9 - Analyse results
Step 9 Objective:
Analysing the Results
RESOLVE stores its own set of results: the number of results reported in the RESOLVE model is a function of the variables that have been published by the user prior to the run itself. Refer to Step 7 for further details.
It is also possible to view all the results of a simulation for any of the client applications
in the application itself.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon.
These results can also be displayed as the run is proceeding.
For this specific case, we first want to analyse the oil that has been produced for the entire production system as well as for each individual well.
All the nodes of the RESOLVE model are listed in the left hand side of the screen. Select the
"Sep1" node.
Once this is done, a list of the variables associated with this node will appear at the bottom of
the screen, as illustrated below.
Select the variable to be viewed, here the oil produced, and click on the button.
The following screen will be displayed, where one can define which nodes have to be included in the oil produced plot. Here we select the separator node as well as all the well nodes.
Click "OK" and the following plot is displayed, illustrating the oil production at separator level
and for each individual well.
Once this is plotted, it is possible to observe very high production with a very sudden decline at the start of the forecast. From December 2008 onwards, the production decline is much slower.
In order to understand this behaviour, it is possible for instance to plot the evolution of reservoir
pressure in the wells and compare it to the amount of gas injected in the reservoir.
In order to create a second plot, use the Results | View Forecast Plots (new window), or
click on the icon.
Then, use the same procedure as described above to plot the reservoir pressure in the producing wells and the gas injected at the injection manifold level.
It can be noticed that no injection is possible with the current gas injection system at the beginning of the field life: the reservoir pressure is too high. This leads to a very high well potential, hence a very large production.
This very large production will lead to a very rapid reservoir pressure decline without pressure
support.
As soon as the pressure drops below 4,200 psig, it becomes possible to re-inject gas into the system, which considerably slows down the reservoir pressure decline, as illustrated above.
Once the results have been analysed, the results of this run can be saved by selecting the icon. This keeps the results of this run in memory, so that they can, for instance, be compared to future runs.
When doing so, the following screen will appear, where a name can be given to the result stream to be saved. Call it "Example_2-1" for instance and click "OK".
It is also possible to save the plot templates if one wishes to return to the same plot to visualise the results of the next forecast run, for instance. To do so, use the icon in every open plot and give a name to each plot. Here, we will save two plots called respectively "Oil_Production" and "Reservoir_Pressure".
In addition to the forecast results, because the "IPR Logging" option was selected before the
run, it is possible to visualise the different IPR curves that have been passed from the reservoir
model to the surface network model at each timestep.
This can be done by going to the Results | View IPR log results section or by clicking on the
icon.
This allows selecting the well to consider and the dates at which the IPR curves have to be displayed.
In this case, select "Well 1" and "All" dates. Once this is done, click "View".
This will display the following screen, which provides for each date (the date selection can be done in the top left hand corner) the IPR data passed from the reservoir model to the surface network model.
This IPR data can be plotted by using the "Plot" option: after selecting the dates and the
parameters to be displayed in the "Variables" section, the IPR curves are displayed.
3.4.1.2 Example 2.1.2: GAP - REVEAL Connection with Event Driven Scheduling
3.4.1.2.1 Overview
1. Example Introduction
The objective of this section is to demonstrate how to control events in a surface network -
reservoir simulation coupled model by using the RESOLVE event driven scheduling capability.
The idea used by the event driven scheduling facility is that the user is able to set up conditions,
such that when a condition is "triggered" an action or several actions will be performed.
A condition is a statement of the form: "IF <variable> <condition> <value>"
The condition can be checked by RESOLVE prior to solving the system (pre-solve), after solving
the system (post-solve), or at the start of the run.
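In pseudo-code terms, such a condition is simply a comparison between a published variable's current value and a threshold. The sketch below is illustrative Python only, not RESOLVE code, and every name in it is an assumption made for the illustration:

```python
import operator

# Illustrative sketch only - RESOLVE evaluates these conditions internally.
# Maps the <condition> token of "IF <variable> <condition> <value>" to a test.
OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">=": operator.ge, ">": operator.gt}

def condition_met(variable_value, cond, value):
    """True when the published variable satisfies the condition."""
    return OPS[cond](variable_value, value)

# e.g. "IF separator liquid rate < 118,000"
print(condition_met(115_000, "<", 118_000))  # True
```

Whether the check runs pre-solve, post-solve, or at the start of the run only changes when the variable's value is sampled, not how the comparison itself is made.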
A new discovery has been made - A full field model is required to analyse the behaviour of this
field under different production and injection conditions.
In order to do so, a numerical simulation reservoir model has been setup as well as surface
network models for the production and the gas injection system. These models have been dynamically linked through a RESOLVE model.
A capacity constraint of 120,000 STB of liquid per day needs to be respected at the separator
level.
Initially, the status of the system is such that all the production and injection wells are open and
that the capacity constraint has not been setup at the separator level.
The objective of the study is to start the forecast with only one production well open and to understand at what point in time the additional production wells will have to be put on stream.
In order to do so, the event driven scheduling capability of RESOLVE can be used to monitor the
liquid production at the separator and open the production wells when required.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\REVEAL
\Example_2_1_2-GAP_REVEAL_Event_Driven_Scheduling
Go to Step 1
Step 1 Objective:
Open the RESOLVE file and modify the main model options to be able to handle the
event driven scheduling
Once the model is loaded, ensure that the models are set to be reloaded when the forecast
starts by going to the Options | System Options section.
This is important in this case as the event driven scheduling is going to be used to
modify the client application models, so we need to ensure that we are starting from
the same initial state every time we run the forecast.
Go to Step 2
Step 2 Objective:
Publish the variables to be used for the event driven scheduling
To begin, variables have to be published from the different client modules prior to using these
variables in the event driven scheduling section.
To start using the tool, invoke Variables | Import application variables. The interface below comes up. This is where the variables to be used from the client modules are published.
The variables to be published are:
Mask flag for well2, well3 and well4 - This will provide the status of the wells (i.e. Open or Closed).
Separator liquid rate.
Maximum liquid rate constraint at the separator.
Click on the equipment masking tab. This is where the mask flags for the production wells will
be published. Select well2 and click on the red arrow to publish the variable. Do the same for
well3 and well4. The mask flags will return "1" if the item is masked or "0" if unmasked.
Revert back to the OpenServer variables tab. Publish the liquid rate at separator (Sep1).
Click on the Constraints (input) variables tab and also publish the maximum liquid rate constraint at the separator. To list the variables in this section, click on the "Rescan" button, as shown below.
Click OK and the interface below shows the variables published for the production system.
Once the variables have been published, it will be possible to go ahead with the event driven
scheduling setup.
Go to Step 1 or Step 3
3.4.1.2.4 Step 3 - Setup Event Driven Scheduling
Step 3 Objective:
Setup the Event Driven Scheduling
Once the variables have been published, the event driven scheduling section can be setup.
To access the event driven scheduling setup, click on Events/Actions | Event driven
scheduling.
In simple terms, the screen allows setting up many conditions (which can be aggregated together with AND or OR statements to form a single condition), such that when a condition is "triggered", an action or several actions will be performed.
A condition is a statement of the form: "IF <variable> <condition> <value>"
The condition can be checked by RESOLVE prior to solving the system (pre-solve), after solving
the system (post-solve), or at the start of the run. This can be specified by using the Schedule
section at the top of the screen.
When setting up a condition, it is best to work from left to right across the screen.
The conditions that have to be set for this example are outlined below:
At the end of each timestep, i.e. Postsolve, if separator liquid < 118,000 stb/day then,
First unmask well2
If the condition becomes true again at a later time, unmask well3
If the condition becomes true again at a later time, unmask well4
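The logic above can be sketched as follows. This is illustrative Python only, not RESOLVE code; the function names and the way the threshold is held are assumptions, and the sketch simply opens the next well in sequence each time the post-solve condition is true:

```python
# Illustrative sketch of the post-solve logic described above.
def make_post_solve(wells_to_open, threshold=118_000):
    queue = list(wells_to_open)              # wells still masked, in order
    opened = []                              # wells unmasked so far
    def post_solve(separator_liquid_rate):
        # Condition: separator liquid < 118,000 stb/day -> unmask next well
        if separator_liquid_rate < threshold and queue:
            opened.append(queue.pop(0))
        return list(opened)
    return post_solve

post_solve = make_post_solve(["well2", "well3", "well4"])
post_solve(117_000)   # first trigger   -> well2 unmasked
post_solve(119_000)   # condition false -> no action
post_solve(110_000)   # second trigger  -> well3 unmasked
```

Because the queue only holds three wells, the condition effectively fires at most three times, which is what the "Times to execute" setting described below enforces in RESOLVE.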
The first condition is entered by populating the event driven schedule interface as shown below.
With the condition input, the next step is to specify/define the actions to be taken. This is done by clicking on the <no action> tab on the left.
Input the actions as shown above. Mask flag of "1" means the wells will be masked.
This will mask the wells and the maximum liquid rate constraint at the separator at the start of the run.
Once this is done, it will be required to set a condition to be checked at the end of every timestep, to determine the time at which well2, well3 and well4 are to be opened.
This event is to be performed during the "Post solve" as shown below. It can be seen that a separator liquid rate of 118,000 stb/day instead of 120,000 stb/day is used. A little tolerance is given for the rate to ensure the event occurs only when the condition is fulfilled, as the solvers may not
always obtain exact solutions at each timestep, e.g. a solution of 119,900 stb/day is still realistically close to 120,000 stb/day and is no reason to trigger the event.
In addition, we want three groups of actions to be performed, i.e. one well (well2) to be unmasked first, then later another well (well3), and later still the last well (well4). This means the condition will be executed three times, and thus the "Times to execute" option should be set to 3.
The actions to be taken are input as shown below. The 3 wells are to be unmasked when the condition is satisfied. However, we wish to set a sequence for the unmasking, i.e. well2, then well3 and finally well4. This is done by clicking on the "Rank icons" button.
We shall select each of the wells in the order of which they shall be unmasked.
To do so, select each well in the top list and click the Add button to add them to the ranking list.
Then, in the "Ranking Variable" section, enter the order in which the wells have to be opened:
the value assigned to well2 is 1, to well3 is 2 and to well4 is 3.
Then, select the order in which the actions have to be executed: well2 has the lowest ranking value assigned to it and should be opened first, therefore the "Order to execute actions" has to be set to "Lowest first".
The count value is left at 1: it is only necessary to open one well at a time.
In summary, this means at the first execution of the post solve condition specified previously,
unmask well2, at the second execution unmask well3 and at the third execution of the condition,
unmask well4.
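The ranking mechanism can be pictured as a simple sort. The sketch below is illustrative Python, not RESOLVE code; the dictionary mirrors the ranking values entered in the "Ranking Variable" section:

```python
# Ranking values as entered in the "Ranking Variable" section.
ranking = {"well2": 1, "well3": 2, "well4": 3}

# "Order to execute actions" = "Lowest first": sort by ranking value.
unmask_order = sorted(ranking, key=ranking.get)

# With count = 1, the n-th trigger of the condition unmasks unmask_order[n-1].
print(unmask_order)  # ['well2', 'well3', 'well4']
```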
Select OK, and ensure "Re-take pass through system (redo solve) after action has been
performed" is checked as shown on the screenshot below. This ensures that as soon as a
condition is triggered, RESOLVE will perform the required action and will rerun the timestep with
the new event/action to ensure continuity in the solutions.
Click OK twice to go back to the main RESOLVE screen: the event driven scheduling section is
now setup and the forecast is ready to be run.
Go to Step 2 or Step 4
Step 4 Objective:
Run the forecast
To run the forecast, press the icon. Note that the run can be paused or stopped with other toolbar icons.
RESOLVE will complete the run to 2013, taking 2-month timesteps, and alter the well start times based on the event driven schedule which has been defined.
Go to Step 3 or Step 5
Step 5 Objective:
Analysing the Results
When the forecast is run the first thing that will happen is that the GAP and REVEAL models will
be reloaded. This might take a few seconds.
When the forecast is running, it is possible to go to the "Event" section of the RESOLVE output log, as illustrated below. This shows all the messages published by the event driven scheduling, specifying the dates at which the specified conditions are triggered and the corresponding actions taken.
For instance, it is possible to see from this output log when the wells are opened.
For instance, it is possible to go to the Results | View Forecast Plots section and plot the oil produced at the separator and by each individual well. As this plot setup was saved when Example_2_1 was performed, it is possible to simply reload the plot directly by using the icon.
The following plot will be displayed, illustrating the sequence of the well openings programmed
by the script.
It can be noticed that as soon as a new well is opened, the GAP optimiser makes it the main production well. This is because the wells that have already been producing for a while will have a higher WC, and potentially a higher GOR, than the well that has just been opened: the GAP optimiser will try to maximise the quantity of oil produced at the separator, and will therefore minimise the water and gas production if possible.
Go to Step 4
3.4.1.3 Example 2.1.3: GAP - REVEAL Connection with Visual Workflow Manager
3.4.1.3.1 Overview
1. Example Introduction
The objective of this section is to demonstrate how to control events in a surface network -
reservoir simulation coupled model by using the RESOLVE visual workflow manager.
As previously seen in Example 2.1.2, the event driven scheduling facility enables setting up
conditions, such that when a condition is "triggered" an action or several actions will be
performed. The condition can be checked by RESOLVE prior to solving the system (pre-solve),
after solving the system (post-solve), or even at the start of the run.
However, the beauty of the workflow manager is that the same workflows which are executed using the event driven schedule can also be set up directly "visually/graphically", as would be done if the flow diagrams were being drawn by hand. This gives a better picture of the workflow objectives, which become easily customisable and easily understood by colleagues involved with the project. In essence, field management logic/objectives can be executed either with the event driven schedule or with the visual workflow manager.
To illustrate how the visual workflow manager works, we shall use a new example and achieve
some field management objectives using both the event driven scheduling capability as well as
the visual workflow manager.
An offshore gas condensate field consists of one reservoir being depleted using 3 sub-sea completed wells. Production from the wells flows to a common platform through a flowline and one riser - R1. The wells can be controlled by subsea chokes and/or using topside chokes at the top of the riser as shown below.
The reservoir model is contained in a separate REVEAL file. Total gas production is limited to 6MMsm3/d from Riser 1. Additionally, each well is limited to 2.5MMsm3/d and has an abandonment constraint of 200kSm3/d.
To honour the production constraints, the field can be controlled at the wellheads or at the riser.
Because of the two-level control, trying to solve this mathematically within GAP will lead to calculation instabilities as well as long run times, since either the wellhead chokes or the riser choke can be used to achieve the same objectives. This is where it is useful to implement some logic to control the model based on how the field will be managed in reality.
Apart from the production constraints, there is also the objective to avoid the risk of hydrates. To prevent these flow assurance problems, the pressure drop across the subsea chokes is to be minimised; hence, the subsea chokes are to be left as fully open as possible, choking them only to meet the individual well constraints. This means the overall control is mainly done with the topside choke.
As the reservoir pressure declines and the wells cannot meet the production targets even when
fully open, then the topside choke can be gradually opened in steps of 0.2" up to a maximum of
5". Additionally when the top side choke needs to be opened more than 4", there is the
possibility to bring a new well online (well A-4) to increase the field potential. At this point, the
top side choke can be reduced back to the minimum value to meet the production target but not
less than 3".
The objective of the study is to generate a forecast to maintain production targets as much as
possible while implementing the field management logics/workflow.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to run this example are located in the samples installation folder.
~\IPM x.x\Samples\Resolve\Section_2-Connection_to_Reservoir_Simulation_Tools
\REVEAL\Example_2_1_3-GAP_REVEAL_Visual_Workflow_Manager
Go to Step 1
Start RESOLVE and go to File | Archive | Extract. Select the GAP_Reveal_Visual Workflow
Manager_Start.rsa file and extract its contents into a selected location. When the "Open
Master File?" question is prompted, select "Yes".
Once the model is loaded, ensure that the models are set to be reloaded when the forecast
starts by going to the Options | System Options section.
This is important in this case as the workflow manager is going to be used to modify
the client application models, so we need to ensure that we are starting from the same
initial state every time we run the forecast.
Go to Step 2
3.4.1.3.3 Step 2 - Publish variables
Steps 2 to 5 will cover the field management logic executed using event driven scheduling, while Steps 6 to 9 will feature the same logic executed using the workflow manager.
Based on the modelling objectives for the project, the logic to be implemented is provided schematically in the flow diagram below.
Step 2 Objective:
Publish the variables to be used both for the event driven scheduling and for workflow
manager.
To begin, variables have to be published from the different client modules prior to using these
variables in RESOLVE. This procedure is described in the "Publish Application Variables"
section.
To start using the tool, select Variables | Import application variables. This will bring up the interface below, where the variables to be used from the client modules are published.
On the Solver (output) variables tab, select R1_TSChoke and publish the "Gas rate" variable
using the red arrow as shown below.
On the Constraints tab, publish the maximum gas rate for the riser. The constraint is placed on
the R1_Downchoke joint in the GAP model.
Edit the 'Variable name' column for the Maximum gas rate such that there are no brackets '()' in
the name: brackets are not supported in variable names in Visual Workflows as these normally
represent arguments.
To publish the choke size for Topside choke, this variable will have to be obtained directly from
GAP. The procedure to do this is as follows:
Double click on the Riser choke element within the GAP model and go to the "Control" tab.
Obtain the OpenServer access string for the choke diameter variable by using Ctrl+Right click.
Copy the access string displayed.
Revert back to the variable import section and paste the access string. The variable can be given a name of choice, e.g. R1_Chokesize.
Next, click on the equipment masking tab. This is where the mask flags for the production wells
will be published. Select well A-4 and click on the red arrow to publish the variable. The mask
flags will return "1" if the item is masked or "0" if unmasked.
Click OK and the interface below shows the variables published for the production system.
Select "Plot invert selection" to make all the variables available for plotting.
Go to Step 1 or Step 3
3.4.1.3.4 Step 3 - Setup Event Driven Scheduling
Step 3 Objective:
Setup the Event Driven Scheduling
Once the variables have been published, the event driven scheduling section can be setup.
To access the event driven scheduling setup, click on Events/Actions | Event driven
scheduling to bring up the interface below.
For the objectives at hand, the following conditions will be executed for the logic:
At the end of each timestep, i.e. Postsolve, there shall be two levels of conditions to
implement.
1. If Riser 1 gas rate < 6MMsm3/d then,
Increase top side choke by 0.2" as many times as required
2. If topside choke gets to 4"
Set topside choke diameter back to 3"
Unmask well A-4
The Start condition is entered by populating the event driven schedule interface as shown below.
With the condition input, the next step is to specify/define the actions to be taken. This is done by clicking on the <no action> tab on the left.
Input the actions as shown above. Mask flag of "1" means the wells will be masked.
The next step is to define the actions taken at the end of the timestep during the Post solve.
This event is to be performed during the "Post solve" as shown below. A little tolerance is
given (i.e. Maximumgasrate*0.99) as exact solutions may not always be obtained by the solver.
This prevents the actions being triggered when not required. Also notice that the "Times to execute" option is set to 16. This is because the choke will be changed from 2.8" to 3.0" to 3.2" etc. up to 5", and the entire sequence, including when a new well is brought online, will be done 16 times. An alternative to this is to set the action to execute as many times as possible (e.g. 1000 times) but set an "AND" condition to capture when the choke size gets to 5 inches.
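The figure of 16 can be verified with a quick back-of-the-envelope count (illustrative Python, not RESOLVE code): the choke steps from 2.8" up to 4.0" (6 increments), is reset to 3.0" when well A-4 is brought online, and then steps from 3.0" up to 5.0" (10 more increments):

```python
# Count the 0.2" choke increments implied by the field management logic:
# step up from 2.8"; at 4.0" bring well A-4 online once and reset to 3.0".
choke, increments, well_a4_open = 2.8, 0, False
while choke < 5.0:
    choke = round(choke + 0.2, 1)        # condition 1 action: open by 0.2"
    increments += 1
    if choke >= 4.0 and not well_a4_open:
        well_a4_open = True              # condition 2 action: unmask A-4
        choke = 3.0                      # and reduce the choke back to 3"
print(increments)  # 16
```

The `round(..., 1)` call simply keeps the floating-point choke values on clean 0.2" steps.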
The actions to be taken are input as shown below. The option to re-do the timestep with the action already taken is also selected.
The next step is to define the second condition for the Post solve process. This involves monitoring when the choke size gets to 4 inches and then bringing a new well online. The condition is only executed once and is defined as shown below:
Click OK twice to go back to the main RESOLVE screen: the event driven scheduling section is now setup. However, before the run is done, the appropriate schedule should be set. This is done by selecting Events/Actions | Scheduling options. Once within this section, the workflows schedule will be disabled to leave only the event driven schedule active. Please note that this is not strictly necessary at the moment, as the visual workflow has not yet been defined; however, it is good practice.
Go to Step 2 or Step 4
To run the forecast, press the icon. Note that the run can be paused or stopped with other toolbar icons.
Go to Step 3 or Step 5
When the model is running, it is possible to view the results by selecting Results | View forecast plots or by clicking on the results icon. The choke size as well as the gas rate can then be selected for the riser for review. In particular, the choke size step changes, which help maintain the rate plateau, can be clearly seen. Please note that the choke size changes do not reflect the exact values of 4 inches or 3 inches defined as part of the event driven schedule, because the model is re-solved at the particular times when the actions are taken. For example, when well A4 comes online sometime in 2013, the choke is supposed to go to 3 inches. However, when that action is to be taken, the model is also being re-solved (i.e. the re-take pass option is selected), hence it goes up to 3.2 inches instead.
Go to Step 4 or Step 6
3.4.1.3.7 Step 6 - Verify available variables
Step 6 Objective:
To execute the same field management objectives using the Visual workflow manager.
The first step will be to verify the available variables to be used for the workflow
manager.
The workflow manager shall be used to execute the same workflow for the project as was previously done using the event driven schedule.
The variables to be used by the workflow manager are thus the same variables which were published earlier for the event driven schedule in Step 2. The list of variables is shown below. They have been published beforehand and there is no need to repeat the process.
Variables:
Go to Step 5 or Step 7
3.4.1.3.8 Step 7 - Setup the workflow
Step 7 Objective:
Setup the Workflow manager
To define the logic using the workflow manager, select Event/Actions | Workflows | Start.
This brings up the start section for the workflow where all conditions and actions to be executed
at the start of the run will be defined.
The RESOLVE interface will now include the Start workflow sheet/page.
The logic required at the start is as defined for the previous event driven schedule i.e.
To implement the logic, select the "Assignment" element/icon from the palette ( ) and click anywhere within the interface to place it.
Double click on the assignment element to define the actions to be executed as shown below.
The workflow item name can be changed as shown below.
With these steps completed, the various workflow items can now be linked together using the
link button . The linking shall be done from the Start element to the Assignment element and
finally to "Continue run".
This completes the setup of the Start section of the workflow manager. The next step is to define the Post solve section. The logic to be executed at the end of every timestep consists of two conditional statements, as follows:
The Decision and Assignment elements will be used to implement the logic and will be added
from the palette ( ). The first condition will be defined for the decision element as shown
below. Note that another condition to ensure the choke size is increased up to a maximum of 5
inches is also defined within the same element.
The next step is to add an Assignment element from the palette ( ) to execute the action for the condition defined above. This is shown below.
This completes the setup of the first conditional statement. The next step is to define the second condition for the post-solve, i.e. when the new well should be brought online. This condition should only be executed once, so we will put a further criterion/condition which ensures that the overall condition is only true if the well has not been previously opened.
The action to be implemented for the second condition is defined using the Assignment
element as shown below.
Once both conditions with their corresponding actions have been defined for the post solve, the workflow can then be linked. Since this is a post-solve, which is executed after a timestep, it is possible to re-take the timestep with the action already in place, to take the changes into account and hence ensure continuity in the results.
However, before linking and running the workflow, we need to think about how the workflow will be executed.
Condition 1 and its corresponding action will be executed first, and then condition 2 (if true) and its own action. If any of the conditions is valid and the corresponding action is taken, the timestep should be re-solved and the post-solve will be re-executed. Reviewing the conditional statements: for condition 1, we want this to be executed as many times as possible, and we have put a stop criterion in it (i.e. to ensure the choke size does not exceed 5 inches). For condition 2, we only need this condition to be executed once, i.e. when the choke size gets to 4 inches, well A4 should be opened, and we have set the conditional statement to only be true if well A4 was previously closed.
The next step is to ensure the post solve section is re-solved if any action is taken (either for condition 1 or condition 2). This is the function of the Redo solve terminator.
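Put together, the post-solve logic can be sketched as follows. This is illustrative Python only, not RESOLVE code; the state keys are assumed names, and the returned flag plays the role of the Redo solve terminator:

```python
# Sketch of the post-solve workflow. Returns True when an action was taken
# and the timestep should therefore be re-solved.
def post_solve(state):
    redo = False
    # Condition 1: below target (with 1% tolerance) and choke not yet at 5"
    if (state["riser_gas_rate"] < 0.99 * state["max_gas_rate"]
            and state["choke"] < 5.0):
        state["choke"] = round(state["choke"] + 0.2, 1)  # open by 0.2"
        redo = True
    # Condition 2: choke has reached 4" and well A-4 has not yet been opened
    if state["choke"] >= 4.0 and not state["well_a4_open"]:
        state["well_a4_open"] = True                     # unmask well A-4
        state["choke"] = 3.0                             # reset choke to 3"
        redo = True
    return redo
```

Linking both Decision/Assignment pairs to the Redo solve terminator corresponds to the single `return redo` here: either action taken makes RESOLVE re-take the timestep.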
It is then possible to link up the workflow using the Link icon. The final workflow is shown below.
It is sometimes possible that the progression of the logic from the decision element to the actions is not correctly represented when the linking is done, e.g. an action being linked as a "No" instead of a "Yes" or vice-versa. This can easily be rectified by double-clicking on the decision element and setting the status of the action elements correctly as shown below.
Finally, the Event driven schedule will be disabled under "Events/Actions | Scheduling options" so that the visual workflow alone is executed.
Go to Step 6 or Step 8
To run the forecast, press the icon. Note that the run can be paused or stopped with other toolbar icons.
Go to Step 7 or Step 9
As would be expected, the Visual workflow manager provides the same results as the Event driven schedule for the modelling objectives. Overall, using visual workflows provides more flexibility in the implementation of logic, since a workflow can be called at any point during the solve, i.e. between individual application solves, by using the workflow instance from the 'Edit System' menu. Additionally, visual workflows allow the implementation of much more complex field logic and operations (e.g. mathematical operations, data object functions, sub-flowsheets etc.), which is not possible using Event Driven Scheduling.
1. Example Introduction
The objective of this section is to demonstrate how to setup a list of scenarios to be run with an
existing RESOLVE model, how to run them together and how to compare the results.
This is possible by using the scenario management capability of RESOLVE.
A new discovery has been made - A full field model is required to analyse the behaviour of this
field under different production and injection conditions.
In order to do so, a numerical simulation reservoir model has been setup as well as surface
network models for the production and the gas injection system. These models have been
dynamically linked through a RESOLVE model.
The development team wants to run the three following scenarios and compare the results:
Base Case: All four production wells are open for the duration of the forecast - No
capacity limit is set at the separator level.
Scenario 1: Only one well is open at the start of the forecast, well1 - A capacity limit of
120,000 STB/d liquid is set at the separator level - The event driven scheduling
capability is used so that wells 2, 3 and 4 are opened as soon as the production falls
below this plateau production rate.
Scenario 2: Only one well is open at the start of the forecast, well1 - A capacity limit of
140,000 STB/d liquid is set at the separator level - The event driven scheduling
capability is used so that wells 2, 3 and 4 are opened as soon as the production falls
below this plateau production rate.
In order to do so, the scenario management capabilities of RESOLVE are being used.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\REVEAL
\Example_2_1_4-GAP_REVEAL_Scenario_Management
Go to Step 1
Step 1 Objective:
Open the RESOLVE file and modify the main model options to be able to handle the
event driven scheduling
Once the model is loaded, ensure that the associated models are set to be reloaded when the
forecast starts by going to the Options | System Options section.
This is important in this case as the event driven scheduling is going to be used in
certain scenarios to modify the client application models, so we need to ensure that we
are starting from the same initial state every time we run the forecast.
Go to Step 2
Step 2 Objective:
Setup the different scenarios using the scenario manager
To start using this feature, invoke Scenarios | Browse/Edit. The interface below comes up.
Base Case: All four production wells are open for the duration of the forecast - No capacity
limit is set at the separator level.
Scenario 1: Only one well is open at the start of the forecast, well1 - A capacity limit of
120,000 STB/d liquid is set at the separator level - The event driven scheduling
capability is used so that wells 2, 3 and 4 are opened as soon as the production falls
below this plateau production rate.
Scenario 2: Only one well is open at the start of the forecast, well1 - A capacity limit of
140,000 STB/d liquid is set at the separator level - The event driven scheduling
capability is used so that wells 2, 3 and 4 are opened as soon as the production falls
below this plateau production rate.
Click on "Add empty scenario" at the bottom of the scenario list section to add the first case.
Label this "Base case".
It can be seen that the Scenario manager can work with the Event driven schedule as well as the
Visual workflow manager. For the purposes of the example, only the Event driven scheduling
section will be used.
It is now necessary to specify scenario 1: it first needs to be set up in the event driven scheduling section and then imported into the scenario manager, as described in the procedure below.
Please note that this scenario will be the same as the one defined for the event driven
scheduling example: Example 2.3.
Click OK and go back to the main RESOLVE screen and follow the procedure below.
To begin, variables have to be published from the different client modules prior to using these
variables in the event driven scheduling section.
To start using the tool, invoke Variables | Import application variables. The interface below comes up. This is where the variables from the client modules are published.
Mask flag for well2, well3 and well4 - This will provide the status of the wells (i.e. Open
or Closed).
Separator liquid rate.
Max liquid rate constraint at separator.
Click on the equipment masking tab. This is where the mask flags for the production wells will
be published. Select well2 and click on the red arrow to publish the variable. Do the same for
well3 and well4. The mask flags will return "1" if the item is masked or "0" if unmasked.
Return to the OpenServer variables tab. Publish the liquid rate at separator (Sep1).
Click on the Constraints (input) variables tab and also publish the maximum liquid rate constraint at the separator. To list the variables in this section, click on the "Rescan" button, as shown below.
Click OK and the interface below shows the variables published for the production system.
Once the variables have been published, the event driven scheduling section can be set up.
To access the event driven scheduling setup, click on Events/Actions | Event driven
scheduling.
In simple terms, the screen allows setting up many conditions (which can be aggregated together with AND or OR statements to form a single condition), such that when a condition is "triggered" one or several actions will be performed.
A condition is a statement of the form: "IF <variable> <condition> <value>"
The condition can be checked by RESOLVE prior to solving the system (pre-solve), after solving
the system (post-solve), or at the start of the run. This can be specified by using the Schedule
section at the top of the screen.
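The semantics of such a statement can be illustrated with a short Python sketch. The variable names used below (e.g. `Sep1.LiquidRate`) are invented for illustration and are not actual RESOLVE variable strings:

```python
import operator

# Supported comparison operators for "IF <variable> <condition> <value>".
OPS = {"<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge, "=": operator.eq}

def eval_clause(variables, clause):
    name, op, value = clause          # e.g. ("Sep1.LiquidRate", "<", 118000)
    return OPS[op](variables[name], value)

def eval_condition(variables, clauses, combine="AND"):
    """Aggregate several clauses with AND or OR into a single condition."""
    results = [eval_clause(variables, c) for c in clauses]
    return all(results) if combine == "AND" else any(results)
```

The same evaluation can be scheduled pre-solve, post-solve or at the start of the run; only the point at which it is invoked changes.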
When setting up a condition, it is best to work from left to right across the screen.
The conditions that have to be set for this example are outlined below:
At the end of each timestep, i.e. Postsolve, if separator liquid < 118,000 stb/day then,
First unmask well2
If the condition becomes true again at a later time, unmask well3
If the condition becomes true again at a later time, unmask well4
The first condition is entered by populating the event driven scheduling interface as shown below.
With the condition input, the next step is to define the actions to be taken. This is done by clicking on the <no action> tab on the left.
Input the actions as shown above. A mask flag of "1" means the wells will be masked. This masks the wells and sets the maximum liquid rate constraint at the separator at the start of the run.
Once this is done, then it will be required to set a condition to be checked at the end of every
timestep, to determine the time at which well2, well3 and well4 are to be opened.
This event is to be performed during the "Post solve" as shown below. It can be seen that a separator liquid rate of 118,000 stb/day instead of 120,000 stb/day is used. A little tolerance is given to the rate to ensure the event occurs only when the condition is fulfilled, as the solvers may not always obtain exact solutions at each timestep, e.g. a solution of 119,900 stb/day is still realistically close to 120,000 stb/day and is no reason to trigger the event.
In addition, we want three groups of actions to be performed, i.e. one well (well2) to be unmasked first, then later another well (well3), and later still another well (well4). This means the condition will be executed three times, and thus the "Times to execute" option should be set to 3.
The actions to be taken are input as shown below. The 3 wells are to be unmasked when the condition is satisfied. However, we wish to set a sequence for the unmasking, i.e. well2, then well3 and finally well4. This is done by clicking on the "Rank icons" button.
Select each of the wells in the order in which they are to be unmasked.
To do so, select each well in the top list and click the Add button to add them to the ranking list.
Then, in the "Ranking Variable" section, enter the order in which the wells have to be opened:
the value assigned to well2 is 1, to well3 is 2 and to well4 is 3.
Then, select the order in which the actions have to be executed: the well2 has the lowest ranking
value assigned to it and should be open first, therefore the "Order to execute actions" has to
be set to "Lowest first".
The count value is left at 1: it is only necessary to open one well at a time.
In summary, this means that at the first execution of the post-solve condition specified previously, well2 is unmasked; at the second execution, well3; and at the third execution, well4.
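The ranking behaviour summarised above can be sketched as follows. This is an illustrative Python sketch, not RESOLVE code; the dictionary of ranking values mirrors the values entered in the Ranking Variable section.

```python
# Illustrative sketch of ranked action execution ("Lowest first").
ranking = {"well2": 1, "well3": 2, "well4": 3}   # Ranking Variable values
TIMES_TO_EXECUTE = 3
COUNT_PER_TRIGGER = 1          # count value: open one well at a time

def on_trigger(masked, executions):
    """Handle one firing of the post-solve condition."""
    if executions >= TIMES_TO_EXECUTE:
        return masked, executions              # condition exhausted
    # "Lowest first": unmask the still-masked well with the lowest rank.
    candidates = sorted(masked, key=ranking.get)
    for well in candidates[:COUNT_PER_TRIGGER]:
        masked.remove(well)                    # unmask the well
    return masked, executions + 1
```

Each call to `on_trigger` corresponds to one firing of the condition, so three firings unmask well2, well3 and well4 in that order.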
Select OK, and ensure "Re-take pass through system (redo solve) after action has been
performed" is checked as shown on the screenshot below. This ensures that as soon as a
condition is triggered, RESOLVE will perform the required action and will rerun the timestep with
the new event/action to ensure continuity in the solutions.
Click OK twice to go back to the main RESOLVE screen: the event driven scheduling for scenario 1 is now set up and can be imported into the scenario manager.
Select Scenarios | Add current schedule and this schedule will be automatically imported into the scenario manager - name it 'scenario1'.
Note that this scenario is similar to scenario 1, except for the fact that the capacity limit at the separator is higher.
Therefore, the easiest way to set up this scenario is to import the current event driven scheduling, initially set up for scenario 1, and then modify the maximum liquid constraint.
As previously done, go to Scenarios on the main menu and select "Add current schedule"
and name the new scenario "Scenario 2".
To modify scenario 2, double-click on any of the EDS sub-sections, e.g. Start, Presolve or Postsolve. In the screenshot above, the Start section is selected. This will automatically open the event driven scheduling section associated with scenario 2, as illustrated below.
It is required to modify the capacity limit associated with the separator for this scenario. This capacity limit is set in the "Start" section, so select this section in the "Schedule" drop down box at the top left hand corner of the screen. The following screen will be displayed.
Go to the Action section and change the maximum liquid capacity at the separator to 140,000 STB/day, as illustrated below.
As the maximum liquid capacity limit at the separator has been modified, it is also necessary to modify the condition which triggers the well opening: the wells should now be opened when the liquid rate at the separator falls below 138,000 STB/d rather than 118,000 STB/d.
This has to be modified in the "PostSolve" section of the event driven scheduling, as illustrated
below.
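As a side note, the relationship between the capacity limit and the trigger rate is the same in both scenarios: the trigger is the limit minus the tolerance discussed earlier. The small helper below is illustrative only:

```python
# The event trigger is set slightly below the capacity limit so that
# numerical noise in the solver does not fire the event prematurely.
TOLERANCE_STB_D = 2000.0     # tolerance used in this example

def trigger_rate(capacity_limit):
    return capacity_limit - TOLERANCE_STB_D
```

With the 120,000 STB/d limit this gives the 118,000 STB/d trigger of scenario 1, and with the 140,000 STB/d limit the 138,000 STB/d trigger of scenario 2.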
Once this is done, click "OK" to go back to the scenario manager screen, as illustrated below.
The three scenarios are now set up and the model is ready to be run. Click "OK" to go back to the main RESOLVE screen to do so.
Go to Step 1 or Step 3
Step 3 Objective:
Running the scenarios
The scenarios are now set up and the simulation is ready to be run. This is achieved by clicking on the icon or alternatively selecting Run | Run Scenario(s).
The interface below appears where the user can selectively choose which scenarios to run.
Highlight the three scenarios and select OK to launch the run.
Note that "Use clustering" can be selected to run different models on clusters of machines.
See the "Running scenarios on a cluster" section for further information.
Once the run begins, the interface below appears showing the run progress of each scenario.
At the end of the runs, the scenario progress interface will be as shown below.
The three scenarios have been run: it is now possible to analyse and compare the results.
Go to Step 2 or Step 4
Step 4 Objective:
Analysing and comparing the scenarios results
For each scenario that is run, RESOLVE will automatically save the results. Once all the scenarios have been run, these results can be accessed by going to Results | View Scenario Results (Table) to see the scenario results in TABULAR form, or to Results | View Scenario Plots to see the scenario results in PLOT form.
In this case, it is for instance possible to compare the liquid produced at the separator for each
of the scenarios.
Select the "Sep1" item from the list. A list of variables associated with the separator will be displayed at the bottom left hand corner of the screen, as displayed below.
Select the "Sep1" item for the three scenarios by using the tab on the right hand side of the
screen.
The following plot will be displayed, illustrating the liquid produced at the separator for the three
scenarios.
Note that the liquid production for the base case scenario is much higher than the liquid production for both capacity-limited scenarios at the beginning of the run, but its production decline is then faster.
The plateau rate for the 140,000 STB/d liquid production can only be maintained for about a
year, whereas it can be maintained for nearly 3 years in the 120,000 STB/d scenario.
Using the same method, it is possible to plot the gas and water produced for the three
scenarios:
Based on these results, the user can decide the most suitable case.
Go to Step 3
In this example, GAP and REVEAL have the same number of components and both consider a
full composition. If the objective is to achieve integration between a GAP model and a REVEAL
model having a different number of components (typically a reduced or lumped composition in
REVEAL and a full composition in GAP), please refer to the Lumping/Delumping example.
A condensate field is to be modelled. This field is being produced for its condensate production,
and there are no export facilities or market for the gas. All the produced gas must therefore be
re-injected in the reservoir and the production is currently limited by the ability of the surface
facilities to compress and re-inject the gas.
A REVEAL model of the reservoir is available, along with a GAP model of the surface network
(including the compressors) and a fully characterised equation of state of the reservoir fluid. The
field has 5 producers and 3 injectors.
The fluid is a condensate with a single-stage flash CGR of 97.23 STB/MMscf, an API gravity of 29 and the following phase envelope.
The GAP injection and production models are as follows. The condensate is separated from the gas at the defined separator pressure (the temperature of separation is calculated by the network). The condensate is sent to the separator 'Oil' and the gas to be re-injected is sent to the separator 'Reinjection Gas'. The gas injection network includes a compressor which models the gas handling facility.
When performing integration between compositional models, the following should be noted:
At the beginning of every time step, the IPR is passed from the reservoir simulator to GAP in
the form of a table of phase rates vs BHP. This is identical to the Black Oil case.
The IPR table contains phase rates at standard conditions, therefore it is important to ensure
that the separator train is consistent between the reservoir simulator and GAP. If this is not the
case, this will lead to mass inconsistencies between the models.
The composition of the produced fluid is passed from the reservoir simulator to GAP at every
time step. Similarly the composition of the re-injected gas is passed from GAP to the
reservoir simulator.
Only mole percentages are passed between applications, therefore it is important to ensure that the component properties are consistent between applications.
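The first point can be illustrated with a small sketch of how a rate-versus-BHP table might be interpolated by the receiving application. The table values below are invented for illustration; the real IPR table is generated by the reservoir simulator.

```python
# Hypothetical IPR table: phase rates at standard conditions vs BHP.
ipr_table = [
    # (BHP psig, liquid rate STB/d) - illustrative values only
    (4000.0, 0.0),
    (3000.0, 5000.0),
    (2000.0, 9000.0),
    (1000.0, 12000.0),
]

def rate_at_bhp(table, bhp):
    """Linear interpolation of rate between the tabulated BHP points."""
    pts = sorted(table)                      # ascending BHP
    for (p0, q0), (p1, q1) in zip(pts, pts[1:]):
        if p0 <= bhp <= p1:
            frac = (bhp - p0) / (p1 - p0)
            return q0 + frac * (q1 - q0)
    raise ValueError("BHP outside tabulated range")
```

Because the rates in such a table are at standard conditions, the separator train used to flash the fluid to standard conditions must match between the two applications, as noted above.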
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\REVEAL
\Example_2_1_5-GAP_REVEAL_Compositional
This folder contains a file "REVEAL Full Composition.rsa" which is a RESOLVE archive file that contains the RESOLVE file, REVEAL file, GAP file and other associated files required to go through the example. The archive file needs to be extracted either in the current location or a location of the user's choice.
Go to Step 1
3.4.1.5.1 Step 1: Create new file
Start RESOLVE, and open a new project using File | New or the icon.
Select Options | Units and set the input and output units to Oilfield, then select OK.
Access the Controls / EOS section and change the range of validity for the Volume Shift to include negative values:
Now is a good time to save the file using File | Save As..., and enter a file name (e.g.
GAP_REVEAL_compositional.rsl).
Go to Step 2
3.4.1.5.2 Step 2: Add an instance of REVEAL
The next step is to create a REVEAL instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to Edit System | Add Client Program or select the icon. From the displayed menu list, select "REVEAL". Click on the main screen where the REVEAL icon is to be located, and give the case a label (say, "REVEAL").
After that, clicking on OK will return to the main screen and open up the REVEAL model:
Note: one can use the Move tool to move the wells in the screen.
The REVEAL reservoir model has 11 wells overall:
- Wells PR1 to PR8 are producers
- Wells INJ1 to INJ3 are gas injectors
Once the REVEAL deck is loaded, access the 'Case Details' tab and select 'Drainage region (advanced)' as the IPR model (ref. the IPR Model topic).
Then select 'Set up' and 'Calculate'. The program will calculate parameters to correct the block
IPR to determine a more representative drainage region IPR.
After the calculation is finished, select OK to go back to the main program panel.
Go to Step 3
3.4.1.5.3 Step 3: Add instances of GAP
The next step is to create the GAP instance for the production network.
From the main menu, go to Edit System | Add Client Program or select the icon. From the resulting menu, select "GAP". Click on the main screen where the GAP icon is to be positioned, and give the case a label (say, "Production").
For the file name, browse to the file "Production.gap" as shown above.
Also select the option to "Always save forecast snapshots" under the Snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is pressed, GAP will start and load the required case. It will then query the case for its
sources and sinks (wells) and will display these on the screen as shown below.
The next step is to create the GAP instance for the injection network. From the main menu, go to Edit System | Add Client Program or select the icon. From the resulting menu, select "GAP". Click on the main screen where the GAP icon is to be positioned, and give the case a label (say, "Injection").
For the file name, browse to the file "Production.gap" as shown above. This is the main production model. As this network model is an associated gas injection network model, select "Associated Gas Injection" as the System.
Also select the option to "Always save forecast snapshots" under the Snapshot mode section. This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of each snapshot and analyse the performance at that date - this can be particularly useful for troubleshooting purposes. When OK is pressed, GAP will start and load the required case.
Go to Step 4
3.4.1.5.4 Step 4: Make the connections
The next step is to connect the sources and sinks of the different applications. To connect the
systems, go to "link" mode by pressing the icon and link the different items by drag and drop.
It is required to connect:
The production wells of REVEAL and the production wells of GAP
The injection wells of REVEAL and the injection wells of GAP
The 'Reinjection Gas' separator of the production system to the 'IM1' manifold of the injection
system.
When a well of GAP is connected to a well of REVEAL, IPR data and compositional data are passed at every time step. When the 'Reinjection Gas' separator is connected to the 'IM1' injection manifold, the following data is passed:
pressure and temperature
composition
the gas rate at the 'Reinjection Gas' separator is passed as a maximum gas rate constraint
on the 'IM1' manifold. This ensures that the injection system does not inject more gas than the
amount produced.
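The data passed across this connection can be summarised with the following illustrative sketch. The dictionary structure and values are hypothetical; RESOLVE handles this transfer internally.

```python
# Hypothetical summary of the separator-to-manifold hand-off listed above.
def build_connection_payload(sep):
    return {
        "pressure": sep["pressure"],
        "temperature": sep["temperature"],
        "composition": sep["composition"],          # mole fractions
        # The produced gas rate becomes a maximum injection constraint,
        # so the system never injects more gas than is produced.
        "max_gas_rate_constraint": sep["gas_rate"],
    }
```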
Go to Step 5.
3.4.1.5.5 Step 5: Import application variables
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to publish additional
variables for reporting and to be able to set up a controlling logic.
It is required to publish:
The oil rate from separator 'Oil'
The gas rate from separator 'Reinjection Gas'
The maximum gas rate constraint from separator 'Reinjection Gas'
The gas rate from injection manifold 'IM1'
The maximum gas rate constraint from injection manifold 'IM1'
From the menu, enter Variables | Import application variables. Import the variables listed
above for the production and the injection systems by selecting the corresponding tab and
clicking Edit variables.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Go to Step 6.
This can be implemented by taking the following steps. Create a direct link from the injection system to the production system using the icon.
Double click on the link, and pass the injection manifold gas rate to the 'Reinjection Gas'
separator maximum gas rate.
Create the following Pre-Solve workflow, using an Assignment element. The objective of this
workflow is to reset the constraint on the production system at the beginning of every time step.
The final step to implement the feedback loop is to configure the loop by entering the Run | Edit Loop menu. Define the following fluid connection convergence item with a convergence of 1, and enter the 'Maximum number of iterations' as 2. RESOLVE will consider the loop converged if the produced gas and the re-injected gas are within 1 MMscf/d of each other, or if the maximum number of iterations is reached.
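The convergence test just described can be sketched as follows. `solve_once` is a hypothetical stand-in for one pass through the coupled production and injection systems; RESOLVE performs the real iteration itself.

```python
TOL_MMSCF_D = 1.0        # fluid connection convergence tolerance
MAX_ITERATIONS = 2       # 'Maximum number of iterations'

def run_loop(solve_once):
    """Iterate until produced and re-injected gas agree within tolerance,
    or until the iteration budget is exhausted."""
    for iteration in range(1, MAX_ITERATIONS + 1):
        produced, reinjected = solve_once()
        if abs(produced - reinjected) <= TOL_MMSCF_D:
            return iteration, True     # converged
    return MAX_ITERATIONS, False       # not converged; the run moves on
```

A tight tolerance with a small iteration cap is a pragmatic trade-off: the loop tightens the mass balance without letting a slowly converging timestep stall the forecast.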
Go to Step 7.
3.4.1.5.7 Step 7: Enter the schedule
To setup the RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
For the purposes of this example, we will be making use of the basic scheduling only.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates. In this case the start
date is 01/09/2009.
The timestep and schedule duration are also entered here as shown (1 month). All the linked
application models will be synchronised every month until the schedule completes on
01/01/2014.
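The resulting synchronisation dates can be sketched as below. The dates are taken from this example; the helper itself is only illustrative.

```python
from datetime import date

def monthly_schedule(start, end):
    """Dates at which the linked applications are synchronised
    (monthly timesteps from start to end, inclusive)."""
    dates, y, m = [], start.year, start.month
    while date(y, m, start.day) <= end:
        dates.append(date(y, m, start.day))
        m += 1
        if m > 12:
            m, y = 1, y + 1
    return dates
```

For this example the schedule runs from 01/09/2009 to 01/01/2014, giving one synchronisation per month.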
To run the forecast from beginning to end without stopping, press the icon. Note that the run can be paused or stopped with other toolbar icons.
The first action done is the initialisation of both modules. In the case of REVEAL, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications. Map the components by
clicking the 'Add All' button, and do this for all the wells.
At the end of the run, the following production profile and cumulative oil production are obtained.
It is possible to verify that the injection system has been able to re-inject all the produced gas.
At the reservoir level, the injected gas can be clearly seen by looking at the fluid CGR: the
injected gas corresponds to the low CGR regions. In this example, the produced gas rate
increases slightly during the run, and the oil rate decreases. The decrease of the oil rate is due
to a decrease of the producing CGR, which is due to the reservoir depletion and the
breakthrough of the low CGR injected gas. The image below shows the gas phase CGR, which clearly illustrates the injected gas, and the condensate dropping out in the zone of wells PR7 and PR8 (which has no gas re-injection).
This concludes the compositional integration example. The next example looks at lumping the
reservoir composition into a smaller composition to speed up the reservoir calculations. It
introduces the Lumping/Delumping technique in RESOLVE to perform integration between
applications which have different requirements as to the number of components used.
This example builds on Example 2.1.5, in which integration was performed between a
compositional REVEAL model and a compositional GAP model having the same number of
components (15 components), and it is recommended that the user completes this example
first. This was done in the context of a condensate field with gas recycling. However, as detailed
in the Lumping/Delumping section, different applications have different requirements regarding
the number of components used. Generally a reservoir simulator requires a reduced
composition to avoid excessive run times, while the surface network requires a detailed
composition if the objective is to perform temperature prediction and flow assurance
calculations.
Each module uses the PVT modelling approach which is best suited to each tool, that is to say:
The REVEAL reservoir model uses a grouped (7 pseudo components) fully compositional
PVT description
The GAP surface network model uses an extended fully compositional (15 components) PVT
description.
Before completing this example, it may be preferable to complete Example 2.1.5 as this
example builds on it. The field considered is a condensate field which is being produced for its
condensate production. All the produced gas needs to be re-injected, and the production is
constrained by the capacity of surface facilities to re-inject the gas.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers for both REVEAL and GAP are registered. This procedure is automatically performed by selecting Drivers | Auto-register latest drivers from the main menu.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\REVEAL
\Example_2_1_6-GAP_REVEAL_Compositional_Lumping_Delumping
This folder contains a file "REVEAL Lumping Delumping Start.rsa" which is a RESOLVE archive file that contains the RESOLVE file, REVEAL file, GAP file and other associated files required to go through the example. The archive file needs to be extracted either in the current location or a location of the user's choice.
Go to Step 1
3.4.1.6.1 Step 1: Open the RESOLVE model
Open the RESOLVE model provided in the archive, named 'LumpingDelumping.rsl'. This
contains the model built in Example 2.1.5. It includes:
The GAP production and injection network
The REVEAL model
The feedback loop and Pre-Solve workflow required to ensure that the produced gas can be
re-injected.
Currently the model is set up using a full composition in the reservoir model and in the surface network. In the next steps, an equivalent lumped composition is built, and the RESOLVE model is set up to perform lumping/delumping.
3.4.1.6.2 Step 2: Create the lumped composition in PVTp
The objective of this step is to create a lumped composition which will be equivalent to the initial
full composition, along with the rule which makes it possible to convert from one to the other. The starting
point is an EOS which has been characterised and matched to a PVT lab report: this is
provided in the file 'FullComposition.pvi' contained in the archive. The steps involved are to:
Create the lumped composition from the full composition
Quality check that the two compositions are consistent by running PVT experiments such as
CCE, CVD etc.
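The arithmetic behind a lumping rule can be sketched as follows: the mole fraction of each lump is the sum of its members' fractions, and the pseudo-component molecular weight is their mole-fraction weighted average (Kay's rule). The component names, values and lump groupings below are illustrative only, not the EOS used in this example:

```python
# Sketch of how a lumping rule combines components (illustrative values,
# not the EOS from this example). Mole fractions of a lump are the sum of
# its members; the pseudo-component molecular weight is the mole-fraction
# weighted average (Kay's rule).

full_z = {"C1": 0.60, "C2": 0.10, "C3": 0.08, "C4": 0.07,
          "C5": 0.06, "C6": 0.05, "C7+": 0.04}          # mole fractions
full_mw = {"C1": 16.0, "C2": 30.1, "C3": 44.1, "C4": 58.1,
           "C5": 72.2, "C6": 86.2, "C7+": 115.0}        # g/mol

# The lumping rule: which full components go into which lump
rule = {"C1": ["C1"], "C2-C4": ["C2", "C3", "C4"], "C5+": ["C5", "C6", "C7+"]}

def lump(z, mw, rule):
    lumped_z, lumped_mw = {}, {}
    for name, members in rule.items():
        zsum = sum(z[c] for c in members)
        lumped_z[name] = zsum
        # Kay's rule: mole-fraction weighted average molecular weight
        lumped_mw[name] = sum(z[c] * mw[c] for c in members) / zsum
    return lumped_z, lumped_mw

lz, lmw = lump(full_z, full_mw, rule)
```

Because mole fractions are conserved exactly, the lumped composition still sums to unity; the quality of the lumped EOS then depends on how well the averaged pseudo-component properties reproduce the phase behaviour, which is why the QC step above is needed.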
Open 'FullComposition.pvi' in PVTp. This contains the full EOS as used in Example 2.1.5.
The screen for creating the lumped composition is accessed via Data | Lumping/Delumping
for IPM.
The 'Lumping Method' is by default set to 'Manual Lumping': this allows the user to manually
create the lumping rule and to choose how to lump the components together. Click on Lump
Stream.
Select the components that will be part of each lump on the bottom-right hand side of the table, then click Add Lump. As a rule of thumb, components with similar molecular weights can be lumped
together. In any case, finding the best way of lumping is a trial and error process, based on
having a final lumped EOS as close to the original EOS as possible. Create the lumps shown
below, and click on Lump.
The program will ask whether or not to hold single components during lumping. If selected, this keeps the molar fraction of single components constant through lumping. Select the pseudo C17::C20 and click OK.
Click OK to create a lumping rule: this will be required by RESOLVE to perform Lumping/
Delumping.
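One simple way to picture the delumping direction (a sketch of the general idea, not necessarily RESOLVE's internal algorithm) is to split each lumped mole fraction back into its member components using split fractions recorded when the rule was created:

```python
# Illustrative delumping: split each lumped mole fraction back into its
# member components using internal split fractions stored with the
# lumping rule. All names and numbers are made up for illustration.

rule = {
    "C2-C4": {"C2": 0.40, "C3": 0.32, "C4": 0.28},   # splits sum to 1.0
    "C5+":   {"C5": 0.40, "C6": 0.33, "C7+": 0.27},
}

def delump(lumped_z, rule):
    full_z = {}
    for lump_name, z in lumped_z.items():
        if lump_name in rule:                 # a pseudo-component: split it
            for comp, frac in rule[lump_name].items():
                full_z[comp] = z * frac
        else:                                 # a held single component
            full_z[lump_name] = z
    return full_z

z = delump({"C1": 0.60, "C2-C4": 0.25, "C5+": 0.15}, rule)
```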
The quality of the lumped composition created can be verified by calculating the phase
envelopes or simulating experiments such as a CVD, as shown below.
In the menu Data | Lumping/Delumping for IPM, select 'Export .prp', to export the full and the
lumped composition together in a single file. When prompted, click OK to export the lumping
rule as well. Save this as 'Full and Lumped.prp'.
Create a stream containing only the lumped composition by clicking on 'To Stream' then on
'Clear Lumping'.
This results in a new stream, 'full_LUMP', which contains only the lumped composition.
Select 'Exit and Save', select this stream from the stream list and export it as 'Lumped.prp'
from File | Export.
3.4.1.6.3 Step 3: Import the lumped composition in REVEAL
Open the REVEAL model provided 'ReservoirB_lumped.rvl'. Enter the input wizard from Input |
Control | General Data.
From the wizard, select 'Edit Composition'. The following screen pops up: select 'Reset
Comp' then 'Import .PRP' and import 'Lumped.prp' which was created in Step 2. Click OK.
In the Initialisation | PVT section, copy the composition mole percents as shown below.
In this window, a pair of EOS (full and lumped) is defined for each pair of connected
applications which requires lumping or delumping. In this example, we need to perform
delumping from the reservoir to the production network, and lumping from the gas injection
network to the reservoir.
In the 'Reservoir-Production' tab, click on the red 'Setup' button and import 'Full and
Lumped.prp' created in Step 2. Perform the same operation in the 'Reservoir -
Gas_Injection' tab.
Run the forecast from beginning to end. Do this by pressing the icon.
The first action done is the initialisation of both modules. In the case of REVEAL, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications.
Between the production network and the gas injection network, no lumping and delumping is
required and the components can be mapped directly, by selecting them from the lists and
clicking 'Add individual connection' for each component.
From the gas injection network to the reservoir, lumping is required. Select the connection 'Inj1-
>Inj1', and under 'Resolve lumping/delumping' select 'External Lumping'. The lumped
components can then be mapped. Repeat this for the other two injection wells.
From the reservoir to the production network, delumping is required. Select the connection
'PR1->PR1', and under 'Resolve lumping/delumping' select 'External delumping'. The
components can then be mapped. Repeat this for the other seven production wells.
Once the run is finished, the results can be analysed. The RESOLVE file contains saved results
from Example 2.1.5 (obtained with a full composition throughout), which can be compared with
the lumping/delumping approach followed here. The following plot compares the oil production
profile for the two cases. The results are very close, in particular considering the complexity of
the problem, with delumping from the reservoir to the production network, lumping from the gas
injection network to the reservoir and condensate dropout within the reservoir.
The following plot compares the producing CGR for well PR7, whose zone is produced by depletion only. The decrease in CGR for this well is due only to condensate dropout within the
reservoir (no gas re-injection), and this shows that the lumped EOS is able to accurately capture
this effect.
Analysis of the run time also shows that using a lumped composition results in a decreased
calculation time for the reservoir simulator.
Therefore this example demonstrates that the objectives of the lumping/delumping methodology
are achieved:
Perform integration between applications having a different PVT description
Ensure that the results are consistent compared to each application using the full EOS
composition.
3.4.2 Eclipse
3.4.2.1 Example 2.2.1: GAP - Eclipse Connection
3.4.2.1.1 Overview
1. Example Introduction
The process of this exercise demonstrates how to set up a connection between a GAP production network, a water injection network and a reservoir simulation model built using Eclipse.
The field being modelled consists of 3 producer wells and 4 water injector wells, with the
intention being to determine the production over the course of a 5 year prediction.
The first objective is to couple the GAP and Eclipse models, and the second to run the model
and determine this production and injection behaviour.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for Eclipse and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
It is also important that Eclipse is configured to run through RESOLVE using MPI. The example assumes that Eclipse will run on the same machine as RESOLVE.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\ECLIPSE
\Example_2_2_1-GAP_Eclipse
This folder contains a file "GAP_Eclipse.rsa" which is a "RESOLVE archive file" that contains
the RESOLVE file, Eclipse file, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
Go to Step 1
3.4.2.1.2 Step 1 - Initialise Model
Step 1 Objective:
Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
Eclipse.rsl).
Go to Step 2
3.4.2.1.3 Step 2 - Create Eclipse instance
Step 2 Objective:
Create an Eclipse instance in the RESOLVE model
The next step is to create an Eclipse instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to System | Create instance or select the icon.
From the resulting menu, select "Eclipse" (this is the E100 black-oil Eclipse driver).
The cursor, when held over the main screen, will change to indicate that an instance of the
application can be made.
Click on the main screen where the Eclipse icon is to be located, and give the case a label (say,
"Reservoir").
Note that from this screen it is possible to select a remote host on which Eclipse can be run.
This is especially useful in cases where several reservoir models have to be run: in this case it
will probably be more efficient to run these simulations in parallel.
Next, click on "Start". Eclipse will start and load the required case. It will then query the case for
its sources and sinks (wells) and will display these on the screen as shown below. The icons
can be moved by selecting the "Move" icon on the toolbar ( ) and then dragging them to the
required positions.
The type of the well (which is obtained from the query of Eclipse) can be found by double-
clicking on the separate icons.
Go to Step 3
3.4.2.1.4 Step 3 - Create GAP production instance
Step 3 Objective:
Create the GAP production instance in the RESOLVE model
Repeat the previous step to create an instance of GAP on the RESOLVE main screen.
Label the created GAP case "PROD" and browse for the model "Oil Field.gap".
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
Under the snapshot mode section, select "Always save forecast snapshots". This saves a
snapshot of each prediction timestep in GAP, allowing the user to reload a copy of each
snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Also note that the “Rule based solver” box has been ticked for GAP. This uses simple
engineering rules to meet constraints in the GAP network, and is significantly faster than a full
optimisation (although it may produce slightly less oil/gas). More information on this can be
found in section 2.10.5.2 of the GAP User Manual.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP file contains production and water injection models. In the production model, three
production wells, PROD1, PROD2 and PROD3 will be found. These are the same wells
identified from the Eclipse case. One can look at the GAP interface (i.e. the GAP model will be
open on the windows taskbar) to confirm the contents of the GAP file by clicking on Window |
Tile vertically from the main GAP menu.
Go to Step 4
3.4.2.1.5 Step 4 - Connect the production wells
Step 4 Objective:
Connect the production wells from the Eclipse model to the production wells from the
GAP production network model.
Connect the PROD1 icon in Eclipse to PROD1 in GAP by clicking into the first icon and
dragging the connection to the second. Repeat this for the other producer wells.
Note that it is possible to make the connections using the "Connection wizard".
This is obtained by invoking Edit System | Connection Wizard under the main menu.
Go to Step 5
3.4.2.1.6 Step 5 - Create GAP water injection instance
Step 5 Objective:
Load the GAP Water Injection model and connect the wells to their counterparts in
Eclipse
The original GAP file that was loaded earlier contains an associated water injection system;
thus it is possible to create an instance of GAP for the water injection system in the RESOLVE
model.
For the filename, enter the same GAP production system model. This is because the injection system is associated (i.e. linked) with the production system in one GAP file.
Consequently, the "associated water injection" option has to be selected from the drop-down
menu for the system.
An advantage of this is that it allows the user to model the production and injection systems of GAP simultaneously using only a single GAP license.
Press OK and the injection system wells and injection manifold will be displayed.
After this, connect the GAP and Eclipse icons together appropriately.
Go to Step 6
3.4.2.1.7 Step 6 - Setup Eclipse Model Options
Step 6 Objective:
Setup the Eclipse model options
Before the simulation is run, some further changes can be made to the configuration of the
Eclipse link.
A detailed description of the different techniques used to determine these IPRs, along with their
respective advantages and disadvantages, can be found in the "IPR Generation Options"
section.
To select these options, double-click on the Eclipse icon to view the Eclipse data entry screen.
The Inflow performance type should be set to "Calculated PI (based on drainage)" with the
"scaling" option.
Click on "Options" and then "calculate" to perform the pre-run calculations required for this
method.
The calculation will be performed and the results will be as shown below:
Click on "OK" and "OK" again to return to the "Load and edit simulation model" interface.
When GAP solves/optimises its system, RESOLVE will return the result as an operating point for
the well on the inflow relation that Eclipse passed for that well, i.e. a BHP, phase rates, and a
THP. Eclipse will then have to control that well with a fixed boundary condition for the duration of
the next timestep. The user can select which boundary condition should be used: either fixed
rate, BHP or THP. The default settings illustrated below show that the wells in this Eclipse model
are controlled between GAP solves with a fixed rate.
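As an illustration of the operating-point hand-off, the sketch below uses a straight-line PI IPR with made-up numbers (real IPRs are passed as tables of phase rates vs BHP): once GAP solves for a BHP on the curve, the corresponding rate can be imposed as the fixed-rate control for the next Eclipse timestep.

```python
# Sketch of the operating-point idea with a straight-line PI IPR
# (illustrative numbers only; real IPRs are exchanged as tables of
# rates vs BHP). q = J * (p_res - p_bhp): once GAP solves for a BHP on
# this curve, the corresponding rate can be imposed as a fixed-rate
# control for the next reservoir timestep.

def ipr_rate(j, p_res, p_bhp):
    """Liquid rate (STB/d) from a linear PI IPR."""
    return j * (p_res - p_bhp)

def ipr_bhp(j, p_res, q):
    """Invert the IPR: BHP needed to deliver a target rate."""
    return p_res - q / j

J, P_RES = 2.5, 4000.0            # STB/d/psi, psi (hypothetical)
bhp_from_gap = 2800.0             # psi, the solved operating point
q_fixed = ipr_rate(J, P_RES, bhp_from_gap)   # rate control for Eclipse
```

Because the operating point lies on the IPR that Eclipse itself supplied, controlling the well by rate, BHP or THP should be equivalent at the instant of the solve; they only diverge as reservoir conditions drift during the timestep.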
In general, group controls should be removed from Eclipse data files as they could interfere with the controlling data that is coming from GAP. However, there are certain cases where these group controls may need to be kept.
By default, the group controls established in this model are not respected. Should the user decide to respect one of the group controls, this group control would have to be selected by clicking on the tick box next to its name.
Further information can be found in the "Loading and Editing an Eclipse case" section.
It is useful to publish the results from the summary section of Eclipse into RESOLVE. This is
done by double-clicking on the icon. Go to the "Miscellaneous" tab and select "Plot summary
vectors in RESOLVE" as shown below.
Go to Step 7
3.4.2.1.8 Step 7 - Setup Forecast Schedule
Step 7 Objective:
Setup the forecast schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2.1.3" section.
Invoke the schedule screen from the main menu using Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button.
This will display a screen allowing the user to select the required start date from a list of the various model start dates.
The timestep and schedule duration are also entered here as shown. Enter the data in the
screen shot above.
Here we will synchronise GAP and ECLIPSE every 1 month until the schedule completes on
1/1/2020.
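The sequence of synchronisation points implied by a monthly timestep can be sketched with the standard library (the start date below is illustrative only; it is not read from the example files):

```python
# Sketch: the monthly synchronisation points of a RESOLVE schedule,
# generated with the standard library. Dates are illustrative of the
# mechanism, not taken from the example. Assumes the schedule starts on
# the first of a month, so the day-of-month is valid in every month.
from datetime import date

def monthly_steps(start, end):
    steps, y, m = [], start.year, start.month
    while date(y, m, start.day) <= end:
        steps.append(date(y, m, start.day))
        m += 1
        if m > 12:
            m, y = 1, y + 1
    return steps

steps = monthly_steps(date(2015, 1, 1), date(2020, 1, 1))
```

At each of these dates the reservoir simulator pauses, fresh IPRs are passed to GAP, GAP solves/optimises, and the operating points are returned before the next timestep is taken.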
Go to Step 8
3.4.2.1.9 Step 8 - Publish Variables
Step 8 Objective:
Publish the GAP variables to report in the RESOLVE results section
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to report all the
variables for the wells and separator in the GAP production model as well as the injection wells
and manifold in the GAP injection model.
RESOLVE can automatically build a list of the GAP variables that are available and can be reported directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be displayed, which allows the user to import variables from the various applications present in the model.
Select the 'Production' tab, and click Edit variables. A list of variables is available to import.
These consist of output variables such as solver results or cumulatives, or input variables such
as constraints or items' masking variables. If a variable is required and is not included in this
list, it is always possible to Copy and Paste the corresponding OpenServer string in the
'Variable string' field.
Select 'Sep1' and click the red arrow: this will import all the variables corresponding to Sep1.
Repeat this for the three production wells.
Click OK. In the screen below, click 'Plot invert selection' in order for a tick to appear in the 'Add
to plot' column for all variables. This ensures that all variables are accessible from the plotting
window.
Go to the 'Water_Injection' tab and do the same for the water injection model. Output all variables for the injection wells and manifold (IM1) and proceed to the next step.
Make sure that there are no screens left open in GAP as this can interfere with the remote
operation of GAP by RESOLVE.
Go to Step 9
3.4.2.1.10 Step 9 - Run the Forecast
Step 9 Objective:
Run the prediction forecast
To run the forecast from beginning to end without stopping, press the icon.
Note that the run can be paused or stopped with other toolbar icons.
Once the forecast has been started, Eclipse will perform an equilibration calculation. If it was set up to load a restart file, it will do this instead.
The equilibrated reservoir data will be passed to GAP in the form of well IPR curves (i.e. for both
producers and injectors). GAP will use this data to solve and optimise the system. The solution
points will then be returned to Eclipse ready to take the first month's timestep. Before this, the
RESOLVE forecast enters "pause" mode.
To run the rest of the forecast without stopping, press the icon. Note that the run can be
paused or stopped with other toolbar icons.
Note that some extra timesteps may be inserted. These are put in to coordinate with the
reporting dates specified in the Eclipse deck.
Go to Step 10
3.4.2.1.11 Step 10 - Analyse the Results
Step 10 Objective:
Analysing the Results
RESOLVE has a set of results - the amount of results reported in the RESOLVE model is a function of the variables that have been published by the user prior to the run itself.
Refer to step 8 for further details.
It is also possible to view all the results of a simulation for any of the client applications
in the application itself.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon.
These results can also be displayed as the run is proceeding.
For this specific case, we want to first analyse the oil that has been produced for the entire
production system as well as for each individual well.
The following screen will be displayed. All the nodes of the RESOLVE model are listed in the left
hand side of the screen. The variables that have been imported can be plotted along with the
'Connections' results which are automatically stored by RESOLVE.
3.4.2.2 Example 2.2.2: GAP - Eclipse Compositional
1. Example Introduction
In this example, GAP and Eclipse have the same number of components and both consider a
full composition. If the objective is to achieve integration between a GAP model and an Eclipse
model having a different number of components (typically a reduced or lumped composition in
Eclipse and a full composition in GAP), please refer to the Lumping/Delumping example.
A condensate field is to be modeled. This field is being produced for its condensate production,
and there are no export facilities or market for the gas. All the produced gas must therefore be
re-injected in the reservoir and the production is currently limited by the ability of the surface
facilities to compress and re-inject the gas.
An Eclipse model of the reservoir is available, along with a GAP model of the surface network
(including the compressors) and a fully characterised equation of state of the reservoir fluid. The
field has 5 producers and 3 injectors.
The fluid is a condensate with a single-stage flash CGR of 97.23 STB/MMscf, an API gravity of 29 and the following phase envelope.
The GAP injection and production models are as follows. The condensate is separated from the
gas at the defined separator pressure (the temperature of separation is calculated by the
network). The condensate is sent to the separator 'Oil' and the gas to be re-injected sent to the
separator 'Reinjection gas'. The gas injection network includes a compressor which models the
gas handling facility.
When performing integration between compositional models, the following should be noted:
At the beginning of every time step, the IPR is passed from the reservoir simulator to GAP in
the form of a table of phase rates vs BHP. This is identical to the Black Oil case.
The IPR table contains phase rates at standard conditions, therefore it is important to ensure
that the separator train is consistent between the reservoir simulator and GAP. If this is not the
case, this will lead to mass inconsistencies between the models.
The composition of the produced fluid is passed from the reservoir simulator to GAP at every
time step. Similarly the composition of the re-injected gas is passed from GAP to the
reservoir simulator.
Only mole percentages are passed between applications, therefore it is important to ensure
that the components properties are consistent between applications.
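Since only mole percentages are passed, the component mapping amounts to matching names between the two applications and renormalising so the percentages still sum to 100. A minimal sketch, with hypothetical component names and values:

```python
# Sketch of the component-mapping idea: mole percentages from one
# application are matched to the other application's component names and
# renormalised so they still sum to 100. Names and values are
# illustrative, not taken from the example models.

def map_composition(src_pct, name_map):
    """src_pct: {source component: mole %}; name_map: {source: destination}."""
    dest = {name_map[c]: pct for c, pct in src_pct.items() if c in name_map}
    total = sum(dest.values())
    return {c: 100.0 * pct / total for c, pct in dest.items()}

gap_comp = map_composition(
    {"CO2": 2.0, "C1": 70.0, "C2": 18.0, "C3": 10.0},
    {"CO2": "CO2", "C1": "METHANE", "C2": "ETHANE", "C3": "PROPANE"},
)
```

This also makes the warning above concrete: if the two applications assign different properties (molecular weight, critical properties) to components that are mapped to each other, matched mole percentages will still produce mass inconsistencies.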
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both Eclipse and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\ECLIPSE
\Example_2_2_2-GAP_Eclipse_Compositional
This folder contains a file "ECLIPSE Full Composition.rsa" which is a "RESOLVE archive file"
that contains the RESOLVE file, the Eclipse model, GAP file and other associated files required
to go through the example. The archive file needs to be extracted either in the current location or
a location of the user's choice.
Go to Step 1
3.4.2.2.1 Step 1: Create new file
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Access the Controls/EOS and change the range of validity for the Volume Shift, including
negative values:
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP_Eclipse_Compositional.rsl).
Go to Step 2
3.4.2.2.2 Step 2: Add an instance of Eclipse
The next step is to create an E300 instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to Edit System| Add Client Program or select the icon.
From the displayed menu list, select "Eclipse300". Click on the main screen where the Eclipse icon is to be located, and give the case a label (say, "Eclipse300").
In the File name field browse the Eclipse data deck 'FullComposition.DATA':
After that, clicking on Ok will return to the main screen and open up the Eclipse model:
Note: one can use the Move tool to move the wells in the screen.
The Eclipse reservoir model has 11 wells overall:
- Wells PR1 to PR8 are producers
- Wells INJ1 to INJ3 are gas injectors
Once the Eclipse deck is loaded, access the 'Control Data' tab and select as IPR model the
'Calculated PI (based on drainage)' and 'Scaling' (ref. IPR Model topic).
Then select 'Options' and 'Calculate'. The program will calculate parameters to correct the block
IPR to determine a more representative drainage region IPR.
After the calculation is finished, select OK to go back to the main program panel.
Go to Step 3
3.4.2.2.3 Step 3: Add instances of GAP
The next step is to create the GAP instance for the production network.
From the main menu, go to Edit System | Add Client program or select the icon. From
the resulting menu, select "GAP". Click on the main screen where to position the GAP icon, and
give the case a label (say, "Production").
For the file name, browse to the file "Production.gap" as shown above.
Also select the option to "Always save forecast snapshots" under snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is pressed, GAP will start and load the required case. It will then query the case for its
sources and sinks (wells) and will display these on the screen as shown below.
The next step is to create the GAP instance for the injection network. From the main menu, go
to Edit System | Add Client program or select the icon. From the resulting menu, select
"GAP". Click on the main screen where to position the GAP icon, and give the case a label (say,
"Injection").
For the file name, browse to the file "Production.gap" as shown above. This is the main
production model. As this network model is an associated Gas Injection network
model, then select as System "Associated Gas Injection".
Also select the option to "Always save forecast snapshots" under snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes. When OK is pressed, GAP will start and load the required case.
Go to Step 4
3.4.2.2.4 Step 4: Make the connections
The next step is to connect the sources and sinks of the different applications. To connect the
systems, go to "link" mode by pressing the icon and link the different items by drag and drop.
It is required to connect:
The production wells of Eclipse and the production wells of GAP
The injection wells of Eclipse and the injection wells of GAP
The 'Reinjection Gas' separator of the production system to the 'IM1' manifold of the injection
system.
When a well of GAP is connected to a well of Eclipse, IPR data and compositional data are passed at every time step. When the 'Reinjection Gas' separator is connected to the 'IM1' injection manifold, the following data is passed:
pressure and temperature
composition
the gas rate at the 'Reinjection Gas' separator is passed as a maximum gas rate constraint
on the 'IM1' manifold. This ensures that the injection system does not inject more gas than the
amount produced.
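The role of the last item can be sketched as follows (field names and values are illustrative, not RESOLVE's internal identifiers): the produced gas rate travels with the rest of the connection data and acts as a ceiling on the injection solve.

```python
# Sketch of the data passed across the 'Reinjection Gas' -> 'IM1'
# connection each timestep. Field names and numbers are illustrative,
# not RESOLVE's internal identifiers. The produced gas rate becomes a
# maximum gas rate constraint on the injection manifold, so the
# injection system can never inject more gas than is produced.

def build_payload(sep_pressure, sep_temperature, sep_composition, sep_qgas):
    return {
        "pressure": sep_pressure,            # psig
        "temperature": sep_temperature,      # degF
        "composition": sep_composition,      # mole %
        "max_gas_rate": sep_qgas,            # MMscf/d constraint on IM1
    }

def injected_rate(requested, payload):
    """The injection solve honours the constraint carried by the payload."""
    return min(requested, payload["max_gas_rate"])

payload = build_payload(500.0, 120.0, {"C1": 85.0, "C2": 15.0}, 40.0)
```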
Go to Step 5.
3.4.2.2.5 Step 5: Import application variables
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to publish additional
variables for reporting and to be able to set up a controlling logic.
It is required to publish:
The oil rate from separator 'Oil'
The gas rate from separator 'Reinjection Gas'
The maximum gas rate constraint from separator 'Reinjection Gas'
The gas rate from injection manifold 'IM1'
The maximum gas rate constraint from injection manifold 'IM1'
From the menu, enter Variables | Import application variables. Import the variables listed
above for the production and the injection systems by selecting the corresponding tab and
clicking Edit variables.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Go to Step 6.
3.4.2.2.6 Step 6: Set up the feedback loop
The next step is to set up a feedback loop so that the production system is constrained by the amount of gas the injection system can re-inject. This can be implemented by taking the following steps. Create a direct link from the injection to the production using the icon.
Double click on the link, and pass the injection manifold gas rate to the 'Reinjection Gas'
separator maximum gas rate.
Create the following Pre-Solve workflow, using an Assignment element. The objective of this
workflow is to reset the constraint on the production system at the beginning of every time step.
The final step to implement the feedback loop is to configure the loop, by entering the Run |
Edit Loop menu. Define the following fluid connection convergence item with a convergence tolerance of 1 MMscf/d, and enter the 'Maximum number of iterations' as 2. RESOLVE will consider the loop converged if the produced gas and the re-injected gas are within 1 MMscf/d of each other, or if the maximum number of iterations is reached.
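The loop logic configured here can be sketched as follows, with a stand-in solver in place of the real GAP solve (all numbers are illustrative):

```python
# Sketch of the feedback loop: iterate until produced and re-injected
# gas agree within 1 MMscf/d, or the maximum number of iterations (2) is
# reached. fake_solve is a stand-in for the real coupled GAP solve, with
# illustrative numbers: injection capacity is fixed at 38 MMscf/d and
# production potential at 45 MMscf/d.

def run_loop(solve, tol_MMscf_d=1.0, max_iter=2):
    max_inj = float("inf")        # Pre-Solve workflow resets the constraint
    for _ in range(max_iter):
        produced, injected = solve(max_inj)
        if abs(produced - injected) <= tol_MMscf_d:
            break                 # converged: production matches re-injection
        max_inj = injected        # feed injected rate back as a production limit
    return produced, injected

def fake_solve(max_inj):
    produced = min(45.0, max_inj)     # production honours the fed-back limit
    injected = min(produced, 38.0)    # injection capped by compressor capacity
    return produced, injected

prod, inj = run_loop(fake_solve)
```

On the first pass production is unconstrained and exceeds what can be re-injected; the injected rate is fed back as the separator constraint, and the second pass converges with production matching the injection capacity.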
Go to Step 7.
3.4.2.2.7 Step 7: Enter the schedule
To setup the RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
For the purposes of this example, we will be making use of the basic scheduling only.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates. In this case the start
date is 01/09/2009.
The timestep and schedule duration are also entered here as shown (1 month). All the linked
application models will be synchronised every month until the schedule completes on
01/01/2014.
3.4.2.2.8 Step 8: Run the forecast
To run the forecast from beginning to end without stopping, press the icon. Note that the run can be paused or stopped with other toolbar icons.
The first action done is the initialisation of both modules. In the case of Eclipse, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications. Map the components by
clicking the 'Add All' button, and do this for all the wells.
At the end of the run, the following production profile and cumulative oil production is obtained.
It is possible to verify that the injection system has been able to re-inject all the produced gas.
At the reservoir level, the injected gas can be clearly seen by looking at the fluid CGR: the
injected gas corresponds to the low-CGR regions. In this example, the produced gas rate
increases slightly during the run while the oil rate decreases. The oil-rate decline is caused by
a decreasing producing CGR, itself driven by reservoir depletion and the breakthrough of the
low-CGR injected gas. The image below shows the molar fraction of C1 in the gas, which
clearly identifies the injected gas.
This concludes the compositional integration example. The next example looks at lumping the
reservoir composition into a smaller composition to speed up the reservoir calculations. It
introduces the Lumping/Delumping technique in RESOLVE to perform integration between
applications which have different requirements as to the number of components used.
This example builds on Example 2.2.2, in which integration was performed between a
compositional Eclipse model and a compositional GAP model having the same number of
components (15 components), and it is recommended that the user completes this example
first. This was done in the context of a condensate field with gas recycling. However, as detailed
in the Lumping/Delumping section, different applications have different requirements regarding
the number of components used. Generally a reservoir simulator requires a reduced
composition to avoid excessive run times, while the surface network requires a detailed
composition if the objective is to perform temperature prediction and flow assurance
calculations.
Each module uses the PVT modelling approach best suited to it:
The Eclipse reservoir model uses a grouped (7 pseudo components) fully compositional PVT
description
The GAP surface network model uses an extended fully compositional (15 components) PVT
description.
Before completing this example, it may be preferable to complete Example 2.2.2 as this
example builds on it. The field considered is a condensate field which is being produced for its
condensate production. All the produced gas needs to be re-injected, and the production is
constrained by the capacity of surface facilities to re-inject the gas.
The RESOLVE model is as built in Example 2.2.2; it is set up to ensure that the
system does not produce more gas than it can re-inject.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both Eclipse and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\ECLIPSE
\Example_2_2_3-GAP_ECLIPSE_Compositional_Lumping_Delumping
This folder contains a file "ECLIPSE Lumping Delumping Start.rsa" which is a "RESOLVE
archive file" that contains the RESOLVE file, Eclipse file, GAP file and other associated files
required to go through the example. The archive file needs to be extracted either in the current
location or a location of the user's choice.
Go to Step 1
3.4.2.3.1 Step 1: Open the RESOLVE model
Open the RESOLVE model provided in the archive, named 'LumpingDelumping.rsl'. This
contains the model built in Example 2.2.2. It includes:
The GAP production and injection network
The Eclipse model
The feedback loop and Pre-Solve workflow required to ensure that the produced gas can be
re-injected.
Currently the model is set up with a full composition in the surface network and a lumped
composition in the reservoir simulator. In the next steps, the equivalent lumped composition is
built, and the RESOLVE model is set up to perform lumping/delumping.
3.4.2.3.2 Step 2: Create the lumped composition in PVTp
The objective of this step is to create a lumped composition which will be equivalent to the initial
full composition, along with the rule that allows conversion between the two. The starting
point is an EOS which has been characterised and matched to a PVT lab report: this is
provided in the file 'FullComposition.pvi' contained in the archive. The steps involved are to:
Create the lumped composition from the full composition
Quality check that the two compositions are consistent by running PVT experiments such as
CCE, CVD etc.
Open 'FullComposition.pvi' in PVTp. This contains the full EOS as used in Example 2.2.2.
Begin by adding a Lumping object to the characterization screen via the Characterization
ribbon.
So that we can store the resultant composition, also add a new PVT fluid and name it
AutoLumping:
Double click on the Lumping object to open it. In the window that opens the 'Lumping Method' is
set to 'Manual Lumping': this allows the user to manually create the lumping rule and to choose
how to lump the components together.
Select the components that will form each lump in the bottom right-hand part of the table, then
click Add Lump. As a rule of thumb, components with similar molecular weights can be lumped
together. In any case, finding the best way of lumping is a trial-and-error process, the aim being
a final lumped EOS as close to the original EOS as possible. Create the lumps shown
below, and select 'Hold the pseudo C17::C20 during lumping'. When selected, this option keeps
the molar fraction of the held component constant through the lumping. Then click on Calculate.
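Numerically, lumping amounts to summing the molar fractions of the grouped components and mole-fraction-averaging their properties; a minimal sketch with illustrative numbers (not the EOS of this example):

```python
def lump(z, mw, groups):
    """Lump a composition.
    z, mw: molar fraction and molecular weight per component.
    groups: {lump_name: [member components]}.
    Returns the lumped molar fractions and molecular weights."""
    z_lumped, mw_lumped = {}, {}
    for name, members in groups.items():
        zt = sum(z[c] for c in members)
        z_lumped[name] = zt
        # mole-fraction-weighted average molecular weight of the lump
        mw_lumped[name] = sum(z[c] * mw[c] for c in members) / zt
    return z_lumped, mw_lumped

z = {"C4": 0.04, "C5": 0.03, "C6": 0.02}
mw = {"C4": 58.12, "C5": 72.15, "C6": 86.18}
z_l, mw_l = lump(z, mw, {"C4:C6": ["C4", "C5", "C6"]})
# the lump carries the combined molar fraction (0.09 here), so total moles are preserved
```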
The quality of the lumped composition created can be verified by calculating the phase
envelopes or simulating experiments such as a CVD, as shown below.
Add an Export EOS object to the characterization screen via the Characterization ribbon.
Within the Export EOS object, select "IPM EoS Composition" as the Export type and "Full and
Lumped" as the Export composition, to export the full and the lumped compositions together in a
single file. Enable the export of the lumping rule as well. Save this as 'Full and Lumped.prp'.
Click 'Export'.
N.B.:
The Eclipse data deck provided has already been set up to use the lumped composition
created. If this had not been the case, PVTp can be used to generate the EOS include file via
the EOS object. When the object is opened, select the Eclipse(Compositional) Format as the
Export type and "Lumped" in the Export composition menu.
3.4.2.3.3 Step 3: Import the lumping rule in RESOLVE
RESOLVE performs the lumping/delumping calculation during the run to map the full
composition to the lumped one. The full and the lumped compositions, along with the lumping
rule, now need to be imported into RESOLVE.
In this window, a pair of EOS (full and lumped) is defined for each pair of connected
applications which requires lumping or delumping. In this example, we need to perform
delumping from the reservoir to the production network, and lumping from the gas injection
network to the reservoir.
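One simple way to picture the delumping direction is a proportional split of each lump back into its members using a reference full composition. RESOLVE's actual calculation uses the imported lumping rule and the two EOS descriptions; the sketch below is a deliberate simplification:

```python
def delump(z_lumped, groups, z_reference):
    """Split each lump into its member components in proportion to a
    reference full composition (simplified illustration of delumping)."""
    z_full = {}
    for name, members in groups.items():
        total = sum(z_reference[c] for c in members)
        for c in members:
            z_full[c] = z_lumped[name] * z_reference[c] / total
    return z_full

groups = {"C4:C6": ["C4", "C5", "C6"]}
z_ref = {"C4": 0.04, "C5": 0.03, "C6": 0.02}
z = delump({"C4:C6": 0.09}, groups, z_ref)
# recovers the member fractions exactly when the reference ratios still hold
```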
In the 'Eclipse300 - Gas_Injection' tab, click on the red 'Setup' button and import the 'Full and
Lumped.prp' file created in Step 2. Perform the same operation in the 'Eclipse300 -
Production' tab.
Run the forecast from beginning to end by pressing the icon.
The first action is the initialisation of both modules. In the case of Eclipse, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications.
Between the production network and the gas injection network, no lumping or delumping is
required and the components can be mapped directly, by selecting them from the lists and
clicking 'Add individual connection' for each component.
From the gas injection network to the reservoir, lumping is required. Select the connection 'Inj1-
>Inj1', and under 'Resolve lumping/delumping' select 'External Lumping'. The lumped
components can then be mapped. Repeat this for the other two injection wells.
From the reservoir to the production network, delumping is required. Select the connection
'PR1->PR1', and under 'Resolve lumping/delumping' select 'External delumping'. The
components can then be mapped. Repeat this for the other seven production wells.
Once the run is finished, the results can be analysed. The RESOLVE file contains saved results
from Example 2.2.2 (obtained with a full composition throughout), which can be compared with
the lumping/delumping approach followed here. The following plot compares the oil production
profiles for the two cases. The results are very close, particularly considering the complexity of
the problem, which involves delumping from the reservoir to the production network, lumping
from the gas injection network to the reservoir, and condensate dropout within the reservoir.
The following plot compares the producing CGR for well PR7, whose zone is produced by
depletion only. The decrease in CGR for this well is due solely to condensate dropout within the
reservoir (there is no gas re-injection), which shows that the lumped EOS is able to accurately
capture this effect.
Analysis of the run time also shows that using a lumped composition results in a decreased
calculation time for the reservoir simulator.
Therefore this example demonstrates that the objectives of the lumping/delumping methodology
are achieved:
Perform integration between applications having a different PVT description
Ensure that the results are consistent with those obtained when each application uses the full
EOS composition.
3.4.2.4 Example 2.2.4: Mixed Cluster Sensitivity
3.4.2.4.1 Overview
1. Example Introduction
The objective of this example is to demonstrate how to run a mixed cluster sensitivity using the
Case Manager Data Object. If the user is not familiar with the Case Manager, it is
recommended to complete Example 6.8: Case Manager.
The field is modelled in RESOLVE, where the production network in GAP is coupled to an
Eclipse model. We wish to perform a sensitivity on this integrated model in order to vary
parameters from both the reservoir model and the RESOLVE model, and we wish to harness the
computational power of clusters to do so.
The Eclipse model will be running on Linux on an LSF cluster and the IPM models will be
running under PxCluster. Therefore it is required to have a working Linux LSF cluster and to
have set up the Linux executables provided with IPM in order to be able to submit a job on LSF
from RESOLVE. If this is not the case, please refer to the Eclipse Remote Linux Run section of
this manual.
Similarly, a working PxCluster setup is required. This example can be completed whether a
local cluster or a distributed network cluster is used.
The field being modelled consists of 3 producer wells and 4 water injector wells.
The surface network and reservoir model are coupled in RESOLVE. The RESOLVE model
includes a workflow called 'Voidage' that performs a voidage calculation and defines a water
injection rate target (based on a given voidage replacement target). If the target is not met by
the injection system, new injectors are added: this logic is contained in the model's post-solve
workflow.
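In reservoir-volume terms, such a voidage calculation sets the surface water injection rate so that the injected reservoir volume equals the voidage replacement target times the produced reservoir volume. A hedged sketch, with assumed rates and formation volume factors (not the model's actual workflow code):

```python
def water_injection_target(vrr_target, qo, qw, qg_free, bo, bw, bg):
    """Surface water injection rate (STB/d) that replaces vrr_target times
    the produced reservoir voidage. qo, qw in STB/d, qg_free in scf/d;
    bo, bw in rb/STB, bg in rb/scf. Illustrative formulation only."""
    voidage_rb = qo * bo + qw * bw + qg_free * bg  # produced reservoir volume, rb/d
    return vrr_target * voidage_rb / bw            # convert back to a surface water rate

target = water_injection_target(1.0, qo=10000.0, qw=2000.0, qg_free=0.0,
                                bo=1.3, bw=1.01, bg=0.0008)
# about 14871 STB/d of injection water to replace the full produced voidage
```

If the injection system cannot deliver this target, the post-solve workflow adds new injectors, as described above.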
The RESOLVE model of the field is provided. The objective of this example is to set up the
Case Manager in order to generate and run the sensitivity cases, and to extract the results. The
example will demonstrate how to set up a mixed cluster sensitivity, where several cases are run
simultaneously with Eclipse on Linux LSF and IPM on PxCluster.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for Eclipse and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\ECLIPSE
\Example_2_2_4-Mixed-Cluster-Sensitivity
This folder contains a file "Mixed Cluster Sensitivity Eclipse.rsa" which is a "RESOLVE archive
file" that contains the RESOLVE file, Eclipse file, GAP file and other associated files required to
go through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.4.2.4.2 Step 1: Test the RESOLVE model of the field
The objective of this step is to test that the RESOLVE model of the field is set up and runs
correctly before setting up the sensitivity cases.
The archive of the example contains another archive of the RESOLVE model of the field (GAP-
Eclipse.rsa). This archive should be placed and extracted in a directory which is accessible to
all the PxCluster nodes. This should be defined as a UNC path, e.g. \\edi-eng-phy1
\Cluster_share. This folder will be referred to as the 'shared folder' in the remainder of this
example.
Note: If a local PxCluster is used, then this folder does not need to be shared and the user may
work from the computer's local drive (e.g. C:\Example). This is true for the remainder of this
example: whenever UNC paths are used, if the user is using a local PxCluster, a local path may
be used.
Open the extracted RESOLVE model (GAP-Eclipse.rsl) by using the UNC path to the shared
folder. To ensure that the cluster nodes have access to this folder, perform a remote
connection to a cluster node and open the model using its UNC path as shown below.
When prompted with the following message, define the UNC file paths to the GAP model and
the Linux path to the Eclipse model.
Note: It is not required to specify 'Use Cluster' for the GAP model. If we specify 'Use cluster',
then when we submit a RESOLVE job to the cluster, RESOLVE itself will submit a GAP job to
the cluster, which is unnecessary. By leaving the setting at 'Use local computer', when we
submit a RESOLVE job to the cluster, the GAP model will simply run on the same node as
RESOLVE.
Run a few steps of the forecast using the icon. The forecast should start, and the Eclipse job
should appear on the Linux side when checked with the 'bjobs' command. If the Linux job fails
to start, please refer to the Eclipse on Linux Troubleshooting section of this manual.
Linux 'bjobs':
Stop the simulation, save the RESOLVE model and close it. This completes the
testing of the model; go to Step 2.
3.4.2.4.3 Step 2: Create a new RESOLVE file and add Case Manager
Create a new RESOLVE file; this file can be located on your computer's local drive. This file will
contain the Case Manager Data Object, the various sensitivity cases to be run and the
sensitivity results. It is from the Case Manager that the sensitivity jobs will be submitted to the
cluster.
Set up the RESOLVE model to perform a single solve by setting the Forecast mode to 'Single
solve/optimisation only' in the Options | System Options menu.
Go to Step 3.
3.4.2.4.4 Step 3: Create the Case Manager variables
For each case to be run, the objective will be to:
- create a new folder on Linux and create a copy of the Eclipse data file in that folder. The
objective is to store the Eclipse results in separate folders for each case to be run.
- set the inputs into the physical model. In this case there will be two main inputs: the voidage
replacement target and the base Eclipse data file
- run the simulation
- retrieve the desired results and archive the IPM models. In this case we will be interested in the
separator rates and cumulatives, the injection manifold rate and the total number of injection
wells required.
In order to do this, it is required to add input and output variables to the Case Manager in the
variables tab. To add a variable:
- enter the variable name
- select the variable type
- enter a default value (optional): please see table below.
- click the icon
Having added the variables we obtain the following list in the Case Manager:
The Case Manager workflow will be creating directories on Linux in order to store the reservoir
model results in separate directories for each case. This is why we require the Windows-
mapped path to the Linux folder. Once the directories are created, it will be required to change
the path to the Eclipse data file in the 'physical' RESOLVE model: therefore we also need to
know the local Linux path to the data files.
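Because the workflow handles both views of the same folder, a small path-translation helper is convenient; the share and mount names below are hypothetical:

```python
def to_linux_path(windows_path,
                  unc_root=r"\\linux-server\eclipse_share",
                  linux_root="/data/eclipse_share"):
    """Translate a Windows UNC path under unc_root into the equivalent
    native Linux path (hypothetical mount names for illustration)."""
    if not windows_path.lower().startswith(unc_root.lower()):
        raise ValueError("path is not under the mapped share")
    tail = windows_path[len(unc_root):].replace("\\", "/")
    return linux_root + tail

linux_path = to_linux_path(r"\\linux-server\eclipse_share\CASE0\MODEL.DATA")
```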
For the SepResults DataSet, select Edit and create the following columns:
In the 'Connection to models' panel, add a RESOLVE application with the label 'Resolve' (the
label will be the name of the instance, required to call OpenServer on it). Click the startup button,
and enter the UNC path to the 'physical' RESOLVE model.
Go to Step 4.
3.4.2.4.5 Step 4: Create and import the Case Manager workflow
In this step, the Case Manager workflow is created and edited in order to achieve the objectives
set out in Step 3, namely:
- create a new folder on Linux and create a copy of the data file in that folder.
- set the inputs into the physical model
- run the simulation
- retrieve the desired results and archive the IPM models
Go to the Workflows tab of the Case Manager and create a new workflow called 'Workflow1' by
clicking the icon.
Using the icon, import 'CaseManagerWorkflow.vwk' that was extracted from the main
example archive. The following workflow is imported.
The workflow performs the tasks mentioned above. The following elements of the workflow are
worth noting.
Note: the user under which PxCluster is running must have the rights to write to the Linux
directory.
Before running the workflow, several Data Stores and Data Objects that it will use need to be
set up.
Add a DataStore Data Object called 'Cases'. This will contain the different data files and
voidage targets and will be used by the workflow to create the cases.
In this Data Store, set up two columns called 'DataFiles' and 'Voidage'.
Populate the columns with the following values. The workflow (which will be added later) is set
up to create a case for each combination of Eclipse data file and voidage replacement target,
making a total of 12 cases.
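The combination logic is a simple Cartesian product; with, say, four data files and three voidage targets (hypothetical names and values) the 12 cases can be enumerated as follows:

```python
import itertools

data_files = ["BASE.DATA", "LOW_PERM.DATA", "HIGH_PERM.DATA", "FAULTED.DATA"]
voidage_targets = [0.8, 1.0, 1.2]

# one case per data file / voidage replacement target combination
cases = {
    f"CASE{i}": {"data_file": f, "voidage": v}
    for i, (f, v) in enumerate(itertools.product(data_files, voidage_targets))
}
```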
Create another Data Store called 'CasesKey': the workflow will write into it the correspondence
between the case names and the corresponding inputs.
Create 5 Data Set Data Objects called 'OilResults', 'GasResults', 'WaterResults',
'WaterInjected' and 'NumInjWells'. The workflow is set up to create the columns in these Data
Sets and populate them with the results of the cases.
Double click on the workflow and using the icon, import 'ControllingWorkflow.vwk' which
was extracted along with the main example archive. This workflow creates a number of cases in
the Case Manager, runs them and extracts the results. It will:
- Loop through the different data files and the different voidage targets
- Create a case for each data file/voidage combination and set the corresponding input for each
case
- Run all the cases
- Loop through the cases and extract the results to the OilResults, WaterInjected etc. DataSets.
Note: the workflow is set up to create cases called CASE0, CASE1 etc., therefore these will
also be the names of the RESOLVE archives and the directories created on Linux.
Go to Step 6.
3.4.2.4.7 Step 6: Run the cases and analyse the results
In Step 1, by test running the 'physical' RESOLVE model, it was verified that the LSF setup on
Linux for Eclipse was performed correctly.
If it is not already running, PxCluster should be started before running the model. This can be
done from IPM Utilities:
A running cluster is indicated by the green cluster nodes, as shown below. For more information
on how to set up PxCluster, please refer to the Setting up PxCluster section of this manual.
Alternatively, a local cluster can be started by clicking the following button on the console:
If a limited number of licenses is available (RESOLVE, Eclipse, OpenEclipse etc.), it may be
necessary to limit the number of jobs running in parallel. This can be done by entering a
maximum number of jobs via the 'Cluster options' button in the Cases tab of the Case Manager.
Run the model using the icon. When executed, the workflow will create and run the cases, then
extract the results into the Data Sets created earlier.
Once the run is finished, the results can be analysed from those Data Sets. For instance, the
following shows plots of the oil rate profiles, water injection rates and required number of
injectors for several cases.
We can also observe that model archives have been saved for all the cases, and that folders
have been created on Linux with the reservoir results of each case. These can then be
examined for further analysis if required. The Data Store 'CasesKey' contains the
correspondence between the case names and the case inputs.
3.4.3 tNavigator
3.4.3.1 Example 2.3.1: GAP - tNavigator Connection
3.4.3.1.1 Overview
1. Example Introduction
This exercise demonstrates how to set up a connection between a GAP production network, a
GAP water injection network and a reservoir simulation model built in tNavigator.
The field being modelled consists of 3 producer wells and 4 water injector wells, with the
objective of determining the production over the course of a 5-year prediction.
Surface network:
The first objective is to couple the GAP and tNavigator models, and the second to run the model
and determine the production and injection behaviour.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE GAP tNavigator
1 1 1
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for tNavigator, REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
More information on the configuration of tNavigator can be found in the driver configuration
section.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\TNAVIGATOR
\Example_2_3_1-GAP_tNavigator
This folder contains a file "GAP_tNavigator.rsa" which is a "RESOLVE archive file" that contains
the RESOLVE file, tNavigator files, GAP file and other associated files required to go through
the example. The archive file needs to be extracted either in the current location or a location of
the user's choice.
Go to Step 1
3.4.3.1.2 Step 1 - Initialise Model
Step 1 Objective:
Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon.
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
tNavigator.rsl).
Go to Step 2
3.4.3.1.3 Step 2 - Create tNavigator instance
Step 2 Objective:
Create a tNavigator instance in the RESOLVE model
The next step is to create a tNavigator instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to System | Create instance or select the icon.
Next, click on "Start". tNavigator will start and load the required case. It will then query the case
for its sources and sinks (wells) and display these on the screen as shown below. The icons
can be moved by selecting the "Move" icon on the toolbar and then dragging them to the
required positions.
The type of each well (which is obtained from the query of tNavigator) can be found by double-
clicking on the individual icons.
Go to Step 3
3.4.3.1.4 Step 3 - Create GAP production instance
Step 3 Objective:
Create the GAP production instance in the RESOLVE model
Repeat the previous step to create an instance of GAP on the RESOLVE main screen.
Label the created GAP case "PROD" and browse for the model "Oil Field.gap".
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
Under the Snapshot mode section, select "Always save forecast snapshots". This saves a
snapshot of each prediction timestep in GAP, allowing the user to reload a copy of each
snapshot and analyse the performance at that date; this can be particularly useful for
troubleshooting purposes.
Also note that the “Rule based solver” box has been ticked for GAP. This uses simple
engineering rules to meet constraints in the GAP network, and is significantly faster than a full
optimisation (although it may produce slightly less oil/gas). More information on this can be
found in section 2.10.5.2 of the GAP User Manual.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP file contains production and water injection models. In the production model, three
production wells, PROD1, PROD2 and PROD3 will be found. These are the same wells
identified from the tNavigator case. One can look at the GAP interface (the GAP model will
be open on the Windows taskbar) to confirm the contents of the GAP file by clicking on Window
| Tile vertically from the main GAP menu.
Go to Step 4
3.4.3.1.5 Step 4 - Connect the production wells
Step 4 Objective:
Connect the production wells from the tNavigator model to the production wells from
the GAP production network model.
Connect the PROD1 icon in tNavigator to PROD1 in GAP by clicking on the first icon and
dragging the connection to the second. Repeat this for the other producer wells.
Note that it is possible to make the connections using the "Connection wizard".
This is obtained by invoking Edit System | Connection Wizard from the main menu.
Go to Step 5
3.4.3.1.6 Step 5 - Create GAP water injection instance
Step 5 Objective:
Load the GAP Water Injection model and connect the wells to their counterparts in
tNavigator
The original GAP file that was loaded earlier contains an associated water injection system;
thus it is possible to create an instance of GAP for the water injection system in the RESOLVE
model.
For the filename, enter the same GAP production system model. This is because the
injection system is associated (i.e. linked) with the production system in one GAP file.
Consequently, the "associated water injection" option has to be selected from the drop-down
menu for the system.
An advantage of this is that it allows the production and injection systems in GAP to be
modelled simultaneously using only a single GAP license.
Press OK and the injection system wells and injection manifold will be displayed.
After this, connect the GAP and tNavigator icons together appropriately.
Go to Step 6
3.4.3.1.7 Step 6 - Setup the tNavigator model options
Step 6 Objective:
Setup the tNavigator model options
Before the simulation is run, some further changes can be made to the configuration of the
tNavigator link.
To select these options, double-click on the tNavigator icon and select 'Corrected (Petex)'
under 'IPR model'.
Click on "Calculate now" to perform the pre-run calculations required for this method.
The calculation will be performed and the results will be as shown below:
Click on "OK" and "OK" again to return to the "tNavigator case" interface.
When GAP solves/optimises its system, RESOLVE will return the result as an operating point
for the well on the inflow relation that tNavigator passed for that well, i.e. a BHP, phase rates,
and a THP. tNavigator will then have to control that well with a fixed boundary condition for the
duration of the next timestep. The user can select which boundary condition should be used.
Here we will use 'Driven by GAP well type', meaning GAP will determine the rate control based
upon the definition in the GAP model. Typically this will be done using a single phase. The other
options are outlined in section 2.5.9.3.
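For a producer with a simple straight-line PI inflow (a deliberate simplification of the inflow relation actually exchanged between the applications), the operating point found by the network solve fixes the rate control for the next timestep; an illustrative sketch with assumed numbers:

```python
def inflow_rate(pi, p_res, p_wf):
    """Straight-line PI inflow: liquid rate (STB/d) at bottom-hole
    pressure p_wf. pi in STB/d/psi, pressures in psia."""
    return max(pi * (p_res - p_wf), 0.0)

# Suppose the network solve returns an operating BHP of 2500 psia for this
# well; it would then be rate-controlled at the corresponding inflow rate
# until the next synchronisation point.
q_control = inflow_rate(pi=4.0, p_res=3200.0, p_wf=2500.0)
# 2800 STB/d rate control for the next timestep
```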
Go to Step 7
3.4.3.1.8 Step 7 - Setup Forecast Schedule
Step 7 Objective:
Setup the forecast schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2.1.3" section.
Invoke the schedule screen from the main menu using Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button.
This will display a screen allowing the user to select the required start date from a list of the
various model start dates.
The timestep and schedule duration are also entered here. Enter the data as shown in the
screenshot above.
Here we will synchronise GAP and tNavigator every 1 month until the schedule completes on
1/1/2020.
Go to Step 8
3.4.3.1.9 Step 8 - Publish Variables
Step 8 Objective:
Publish the GAP variables to report in the RESOLVE results section
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to report all the
variables for the wells and separator in the GAP production model as well as the injection wells
and manifold in the GAP injection model.
RESOLVE can automatically build a list of the available GAP variables that can be reported
directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, allowing variables to be imported from the various applications present in the model.
Select the 'Prod' tab, and click Edit variables. A list of variables is available to import. These
consist of output variables such as solver results or cumulatives, or input variables such as
constraints or items' masking variables. If a variable is required and is not included in this list, it
is always possible to Copy and Paste the corresponding OpenServer string in the 'Variable
string' field.
Select 'Sep1' and click the red arrow: this will import all the variables corresponding to Sep1.
Repeat this for the three production wells.
Click OK. In the screen below, click 'Plot invert selection' in order for a tick to appear in the 'Add
to plot' column for all variables. This ensures that all variables are accessible from the plotting
window.
Go to the 'Inj' tab and do the same for the water injection model. Output all variables for the injection
wells and manifold (IM1) and proceed to the next step.
Make sure that there are no screens left open in GAP as this can interfere with the remote
operation of GAP by RESOLVE.
NOTE: tNavigator results can be viewed directly in RESOLVE without having to publish
individual result variables.
Go to Step 9
3.4.3.1.10 Step 9 - Run the Forecast
Step 9 Objective:
Run the prediction forecast
Once the forecast has been started, tNavigator will perform an equilibration calculation.
The equilibrated reservoir data will be passed to GAP in the form of well IPR curves (i.e. for both
producers and injectors). GAP will use this data to solve and optimise the system. The solution
points will then be returned to tNavigator ready to take the first month's timestep. Before this,
the RESOLVE forecast enters "pause" mode.
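The sequence above (equilibration, IPR transfer, network solve, timestep) can be sketched as a simple coupling loop. The Python below is purely illustrative: the function names, the linear IPR and the toy depletion model are all assumptions made for demonstration, not the RESOLVE driver API.

```python
# Schematic of one RESOLVE synchronisation cycle between tNavigator and GAP.
# Every name below is a stand-in for illustration, not the real driver API.

def get_ipr_curves(state):
    """Reservoir side: return per-well IPR tables (BHP, rate) for GAP."""
    return {well: [(bhp, max(0.0, pi * (state["p_res"] - bhp)))
                   for bhp in (1000.0, 2000.0, 3000.0)]
            for well, pi in (("PR1", 2.0), ("PR2", 1.5))}

def solve_network(ipr_curves):
    """Network side: GAP solves/optimises; here we just pick the
    highest-rate point on each well's IPR curve."""
    return {well: max(curve, key=lambda point: point[1])
            for well, curve in ipr_curves.items()}

def advance_reservoir(state, solution):
    """Reservoir side: take the next timestep using the solution points."""
    total_rate = sum(rate for _bhp, rate in solution.values())
    state["p_res"] -= 0.001 * total_rate          # toy depletion model
    return state

state = {"p_res": 5000.0}
for _month in range(3):                           # three monthly cycles
    solution = solve_network(get_ipr_curves(state))
    state = advance_reservoir(state, solution)
```

In the real run the "pause" between cycles lets the user inspect each module before the next synchronisation.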
To run the rest of the forecast without stopping, press the icon. Note that the run can be
paused or stopped with other toolbar icons.
Go to Step 10
3.4.3.1.11 Step 10 - Analyse the Results
Step 10 Objective:
Analysing the Results
RESOLVE stores a set of results: the results reported in the RESOLVE model are a function of the variables that have been published by the user prior to the run itself.
Refer to step 8 for further details.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by pressing the corresponding toolbar icon.
For this specific case, we want to first analyse the oil that has been produced for the entire
production system as well as for each individual well.
All the nodes of the RESOLVE model are listed in the left hand side of the screen.
In this example, GAP and tNavigator have the same number of components and both consider a
full composition. If the objective is to achieve integration between a GAP model and a
tNavigator model having a different number of components (typically a reduced or lumped
composition in tNavigator and a full composition in GAP), please refer to the Lumping/
Delumping example.
A condensate field is to be modelled. This field is being produced for its condensate production,
and there are no export facilities or market for the gas. All the produced gas must therefore be
re-injected in the reservoir and the production is currently limited by the ability of the surface
facilities to compress and re-inject the gas.
A tNavigator model of the reservoir is available, along with a GAP model of the surface network
(including the compressors) and a fully characterised equation of state of the reservoir fluid. The
field has 5 producers and 3 injectors.
The fluid is a condensate with a single-stage flash CGR of 97.23 STB/MMscf, API of 29 and the
following phase envelope.
The GAP injection and production models are as follows. The condensate is separated from the
gas at the defined separator pressure (the temperature of separation is calculated by the
network). The condensate is sent to the separator 'Oil' and the gas to be re-injected is sent to the
separator 'Reinjection gas'. The gas injection network includes a compressor which models the
gas handling facility.
When performing integration between compositional models, the following should be noted:
At the beginning of every time step, the IPR is passed from the reservoir simulator to GAP in
the form of a table of phase rates vs BHP. This is identical to the Black Oil case.
The IPR table contains phase rates at standard conditions, therefore it is important to ensure
that the separator train is consistent between the reservoir simulator and GAP. If this is not the
case, this will lead to mass inconsistencies between the models.
The composition of the produced fluid is passed from the reservoir simulator to GAP at every
time step. Similarly the composition of the re-injected gas is passed from GAP to the
reservoir simulator.
Only mole percentages are passed between applications, therefore it is important to ensure
that the component properties are consistent between applications.
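The last point can be illustrated with a minimal sketch: compositions are exchanged as mole percentages keyed by component name, so any mismatch in the component lists must be resolved explicitly. The component names and values below are hypothetical.

```python
def map_composition(source, target_names):
    """Map a composition (component -> mole %) onto a target component
    list by name, then renormalise so the fractions sum to 100.
    Illustrative only: names and values are hypothetical."""
    mapped = {name: source.get(name, 0.0) for name in target_names}
    total = sum(mapped.values())
    if total == 0:
        raise ValueError("no common components between applications")
    return {name: 100.0 * frac / total for name, frac in mapped.items()}

reservoir_comp = {"C1": 70.0, "C2": 10.0, "C3": 8.0, "C7+": 12.0}
gap_components = ["C1", "C2", "C3", "C7+"]   # consistent component list
network_comp = map_composition(reservoir_comp, gap_components)
```

When the lists match exactly, as in this example (15 components on both sides), the mapping is one-to-one and the renormalisation changes nothing.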
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both tNavigator and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\TNAVIGATOR
\Example_2_3_2-GAP_tNavigator_Compositional
Go to Step 1
3.4.3.2.1 Step 1: Create new file
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Access the Controls/EOS section and change the range of validity for the Volume Shift to include negative values:
Now is a good time to save the file using File | Save As..., and enter a file name (e.g.
GAP_tNavigator_compositional.rsl).
Go to Step 2
3.4.3.2.2 Step 2: Add an instance of tNavigator
The next step is to create a tNavigator instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to Edit System| Add Client Program or select the icon.
From the displayed menu list, select "tNavigator". Click on the main screen where the
tNavigator icon is to be located, and give the case a label (say, "tNavigator").
In the File name field, browse to the tNavigator data deck 'FullComposition.DATA':
After that, clicking on Start will return to the main screen and open up the tNavigator model:
Note: one can use the Move tool to move the wells in the screen.
Once the tNavigator deck is loaded, select 'Corrected (Petex)' as the IPR model (see the IPR Model topic).
Then select 'Calculate'. The program will calculate parameters to correct the block IPR to
determine a more representative drainage region IPR.
After the calculation is finished, select OK to go back to the main program panel.
Go to Step 3
3.4.3.2.3 Step 3: Add instances of GAP
The next step is to create the GAP instance for the production network.
From the main menu, go to Edit System | Add Client program or select the icon. From
the resulting menu, select "GAP". Click on the main screen where the GAP icon is to be positioned, and
give the case a label (say, "Production").
For the file name, browse to the file "Production.gap" as shown above.
Also select the option to "Always save forecast snapshots" under the snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is pressed, GAP will start and load the required case. It will then query the case for its
sources and sinks (wells) and will display these on the screen as shown below.
The next step is to create the GAP instance for the injection network. From the main menu, go
to Edit System | Add Client program or select the icon. From the resulting menu, select
"GAP". Click on the main screen where to position the GAP icon, and give the case a label (say,
"Injection").
For the file name, browse to the file "Production.gap" as shown above. This is the main production model. As this network model is an associated Gas Injection network model, select "Associated Gas Injection" as the System.
Also select the option to "Always save forecast snapshots" under the snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes. When OK is pressed, GAP will start and load the required case.
Go to Step 4
3.4.3.2.4 Step 4: Make the connections
The next step is to connect the sources and sinks of the different applications. To connect the
systems, go to "link" mode by pressing the icon and link the different items by drag and drop.
It is required to connect:
The production wells of tNavigator and the production wells of GAP
The injection wells of tNavigator and the injection wells of GAP
The 'Reinjection Gas' separator of the production system to the 'IM1' manifold of the injection
system.
When a well of GAP is connected to a well of tNavigator, IPR data and compositional data is
passed at every time step. When the 'Reinjection Gas' separator is connected to the 'IM1'
injection manifold, the following data is passed:
pressure and temperature
composition
the gas rate at the 'Reinjection Gas' separator is passed as a maximum gas rate constraint
on the 'IM1' manifold. This ensures that the injection system does not inject more gas than the
amount produced.
Go to Step 5.
3.4.3.2.5 Step 5: Import application variables
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to publish additional
variables for reporting and to be able to set up a controlling logic.
It is required to publish:
The oil rate from separator 'Oil'
The gas rate from separator 'Reinjection Gas'
The maximum gas rate constraint from separator 'Reinjection Gas'
The gas rate from injection manifold 'IM1'
The maximum gas rate constraint from injection manifold 'IM1'
From the menu, enter Variables | Import application variables. Import the variables listed
above for the production and the injection systems by selecting the corresponding tab and
clicking Edit variables.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Go to Step 6.
3.4.3.2.6 Step 6: Set up the feedback loop
This can be implemented by taking the following steps. Create a direct link from the injection to the production system using the icon.
Double click on the link, and pass the injection manifold gas rate to the 'Reinjection Gas'
separator maximum gas rate.
Create the following Pre-Solve workflow, using an Assignment element. The objective of this
workflow is to reset the constraint on the production system at the beginning of every time step.
The final step to implement the feedback loop is to configure the loop, by entering the Run |
Edit Loop menu. Define the following fluid connection convergence item with a convergence of
1, and enter the 'Maximum number of iterations' as 2. RESOLVE will consider the loop
converged if the produced gas and the re-injected gas are within 1 MMscf/d of each other, or if
the maximum number of iterations is reached.
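The loop logic can be sketched in Python as follows. The two solver functions are toy stand-ins (assumptions), not GAP calculations; only the convergence test, agreement within 1 MMscf/d or the maximum number of iterations, mirrors the setup above.

```python
def run_coupled_loop(produced_gas_guess, tolerance=1.0, max_iter=2):
    """Sketch of the RESOLVE feedback loop: the produced gas rate is passed
    to the injection system as a maximum gas rate constraint, and the loop
    is deemed converged when produced and re-injected gas agree within
    `tolerance` (MMscf/d), or when `max_iter` is reached. The two inner
    solvers are illustrative stand-ins, not GAP calculations."""
    def solve_injection(max_gas_constraint):
        # Injection system injects as much as it can, up to the constraint.
        return min(max_gas_constraint, 0.95 * max_gas_constraint + 5.0)

    def solve_production(injected_gas):
        # Produced gas responds (weakly) to the amount re-injected.
        return 0.5 * injected_gas + 50.0

    produced = produced_gas_guess
    for iteration in range(1, max_iter + 1):
        injected = solve_injection(produced)      # constraint = produced gas
        new_produced = solve_production(injected)
        if abs(new_produced - injected) <= tolerance:
            break
        produced = new_produced
    return injected, new_produced, iteration

inj, prod, n_iter = run_coupled_loop(100.0)
```

With a low maximum iteration count, the loop may exit unconverged; RESOLVE then simply proceeds to the next timestep, exactly as described above.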
Go to Step 7.
3.4.3.2.7 Step 7: Enter the schedule
To set up the RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
For the purposes of this example, we will be making use of the basic scheduling only.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates. In this case the start
date is 01/09/2009.
The timestep and schedule duration are also entered here as shown (1 month). All the linked
application models will be synchronised every month until the schedule completes on
01/01/2014.
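As a cross-check of the schedule, a short sketch that enumerates the monthly synchronisation dates (assuming synchronisation falls on the same day of each month):

```python
from datetime import date

def monthly_schedule(start, end):
    """List the synchronisation dates from start to end inclusive,
    one per month (assumes the same day of the month throughout)."""
    dates, year, month = [], start.year, start.month
    while (year, month) <= (end.year, end.month):
        dates.append(date(year, month, start.day))
        month += 1
        if month > 12:
            year, month = year + 1, 1
    return dates

sched = monthly_schedule(date(2009, 9, 1), date(2014, 1, 1))
# 53 dates from 01/09/2009 to 01/01/2014, i.e. 52 one-month timesteps
```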
Run the forecast from beginning to end without stopping by pressing the icon. Note that the run can be paused or stopped with other toolbar icons.
The first action is the initialisation of both modules. In the case of tNavigator, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications. Map the components by
clicking the 'Add All' button, and do this for all the wells.
At the end of the run, the following production profile and cumulative oil production is obtained.
It is possible to verify that the injection system has been able to re-inject all the produced gas.
At the reservoir level, the injected gas can be clearly seen by looking at the fluid CGR: the
injected gas corresponds to the low CGR regions. In this example, the produced gas rate
increases slightly during the run, and the oil rate decreases. The decrease of the oil rate is due
to a decrease of the producing CGR, which is due to the reservoir depletion and the
breakthrough of the low CGR injected gas. The image below shows the molar fraction of C1 in
the gas, which clearly illustrates the injected gas.
This concludes the compositional integration example. The next example looks at lumping the
reservoir composition into a smaller composition to speed up the reservoir calculations. It
introduces the Lumping/Delumping technique in RESOLVE to perform integration between
applications which have different requirements as to the number of components used.
This example builds on Example 2.3.2, in which integration was performed between a
compositional tNavigator model and a compositional GAP model having the same number of
components (15 components), and it is recommended that the user completes this example
first. This was done in the context of a condensate field with gas recycling. However, as detailed
in the Lumping/Delumping section, different applications have different requirements regarding
the number of components used. Generally a reservoir simulator requires a reduced
composition to avoid excessive run times, while the surface network requires a detailed
composition if the objective is to perform temperature prediction and flow assurance
calculations.
Each module uses the PVT modelling approach which is best suited to each tool, that is to say:
The tNavigator reservoir model uses a grouped (7 pseudo components) fully compositional
PVT description
The GAP surface network model uses an extended fully compositional (15 components) PVT
description.
Before completing this example, it may be preferable to complete Example 2.3.2 as this
example builds on it. The field considered is a condensate field which is being produced for its
condensate production. All the produced gas needs to be re-injected, and the production is
constrained by the capacity of surface facilities to re-inject the gas.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both tNavigator and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\TNAVIGATOR
\Example_2_3_3-GAP_tNavigator_Compositional_Lumping_Delumping
Go to Step 1
Currently the model is set up using a full composition in the surface network and a lumped composition in the reservoir simulator. In the next steps, the equivalent lumped composition is built and the RESOLVE model is set up to perform lumping/delumping.
3.4.3.3.2 Step 2: Create the lumped composition in PVTp
The objective of this step is to create a lumped composition which will be equivalent to the initial
full composition, along with the rule that enables conversion from one to the other. The starting
point is an EOS which has been characterised and matched to a PVT lab report: this is
provided in the file 'FullComposition.pvi' contained in the archive. The steps involved are to:
Create the lumped composition from the full composition
Quality check that the two compositions are consistent by running PVT experiments such as
CCE, CVD etc.
Open 'FullComposition.pvi' in PVTp. This contains the full EOS as used in Example 2.3.2.
The screen for creating the lumped composition is accessed via Data | Lumping/Delumping
for IPM.
The 'Lumping Method' is by default set to 'Manual Lumping': this allows the user to manually
create the lumping rule and to choose how to lump the components together. Click on Lump
Stream.
Select the components that will be part of each lump on the bottom-right hand side of the table, then click Add Lump. As a rule of thumb, components with similar molecular weights can be lumped
together. In any case, finding the best way of lumping is a trial and error process, based on
having a final lumped EOS as close to the original EOS as possible. Create the lumps shown
below, and click on Lump.
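Conceptually, lumping adds the mole fractions of the member components and mixes their properties. The sketch below uses a simple mole-fraction weighted molecular weight as an illustration; this is a simplification, since PVTp mixes the full set of EOS properties, not just MW, and the fluid shown is a hypothetical 4-component example.

```python
def lump(z, mw, rule):
    """Collapse components into pseudo-components: mole fractions add up,
    and the pseudo MW is the mole-fraction weighted average of its members.
    Simplified illustration only (PVTp mixes all EOS properties)."""
    lumped_z, lumped_mw = {}, {}
    for pseudo, members in rule.items():
        z_sum = sum(z[c] for c in members)
        lumped_z[pseudo] = z_sum
        lumped_mw[pseudo] = sum(z[c] * mw[c] for c in members) / z_sum
    return lumped_z, lumped_mw

# Hypothetical 4-component fluid lumped to 3 pseudo-components
z = {"C1": 0.60, "C2": 0.15, "C3": 0.10, "C4": 0.15}
mw = {"C1": 16.04, "C2": 30.07, "C3": 44.10, "C4": 58.12}
rule = {"C1": ["C1"], "C2::C3": ["C2", "C3"], "C4": ["C4"]}
lumped_z, lumped_mw = lump(z, mw, rule)
```

This also shows why similar molecular weights make good lumping candidates: the weighted average then stays close to each member's own MW.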
The program will ask whether or not to hold single components during lumping. If selected, this
keeps the molar fraction of single components constant through lumping. Select the
pseudo C17::C20 and click OK.
Click OK to create a lumping rule: this will be required by RESOLVE to perform Lumping/
Delumping.
The quality of the lumped composition created can be verified by calculating the phase
envelopes or simulating experiments such as a CVD, as shown below.
In the menu Data | Lumping/Delumping for IPM, select 'Export .prp', to export the full and the
lumped composition together in a single file. When prompted, click OK to export the lumping
rule as well. Save this as 'Full and Lumped.prp'.
Create a stream containing only the lumped composition by clicking on 'To Stream' then on
'Clear Lumping'.
This results in a new stream, 'full_LUMP', which contains only the lumped composition.
N.B:
The tNavigator data deck provided has already been set up to use the lumped composition
created. If this had not been the case, PVTp can be used to generate the EOS include file in
tNavigator format, via File | Export (Eclipse format).
3.4.3.3.3 Step 3: Import the lumping rule in RESOLVE
RESOLVE performs the lumping/delumping calculation during the run to map the full
composition to the lumped. The full and the lumped compositions, along with the lumping rule,
now need to be imported into RESOLVE.
In this window, a pair of EOS (full and lumped) is defined for each pair of connected
applications which requires lumping or delumping. In this example, we need to perform
delumping from the reservoir to the production network, and lumping from the gas injection
network to the reservoir.
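The lumping rule can be thought of as the record of each full component's share of its lump, which is what makes delumping possible. The sketch below is a simplification: it splits a lumped stream back out using fixed shares taken from the initial composition, whereas RESOLVE's delumping accounts for compositions changing during the run. Names and values are hypothetical.

```python
def make_delump_shares(full_z, rule):
    """Record each full component's share of its lump; this mirrors the
    role of the lumping rule that RESOLVE stores for delumping."""
    shares = {}
    for pseudo, members in rule.items():
        total = sum(full_z[c] for c in members)
        for c in members:
            shares[c] = (pseudo, full_z[c] / total)
    return shares

def delump(lumped_z, shares):
    """Split a lumped stream back to the full component list using the
    recorded shares (fixed shares: a simplification of the real method)."""
    return {c: lumped_z[pseudo] * frac
            for c, (pseudo, frac) in shares.items()}

full = {"C1": 0.60, "C2": 0.15, "C3": 0.10, "C4": 0.15}
rule = {"C1": ["C1"], "C2::C3": ["C2", "C3"], "C4": ["C4"]}
shares = make_delump_shares(full, rule)
lumped = {"C1": 0.60, "C2::C3": 0.25, "C4": 0.15}
restored = delump(lumped, shares)
```

Lumping followed by delumping with the same rule round-trips the original composition, which is the consistency property the Step 2 quality checks are verifying.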
In the 'tNavigator-Production' tab, click on the red 'Setup' button and import 'Full and
Lumped.prp' created in Step 2. Perform the same operation in the 'tNavigator -
Gas_Injection' tab.
Run the forecast from beginning to end. Do this by pressing the icon.
The first action is the initialisation of both modules. In the case of tNavigator, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications.
Between the production network and the gas injection network, no lumping and delumping is
required and the components can be mapped directly, by selecting them from the lists and
clicking 'Add individual connection' for each component.
From the gas injection network to the reservoir, lumping is required. Select the connection 'Inj1-
>Inj1', and under 'Resolve lumping/delumping' select 'External Lumping'. The lumped
components can then be mapped. Repeat this for the other two injection wells.
From the reservoir to the production network, delumping is required. Select the connection
'PR1->PR1', and under 'Resolve lumping/delumping' select 'External delumping'. The
components can then be mapped. Repeat this for the other seven production wells.
Once the run is finished, the results can be analysed. The RESOLVE file contains saved results
from Example 2.3.2 (obtained with a full composition throughout), which can be compared with
the lumping/delumping approach followed here. The following plot compares the oil production
profile for the two cases. The results are very close, in particular considering the complexity of
the problem, with delumping from the reservoir to the production network, lumping from the gas
injection network to the reservoir and condensate dropout within the reservoir.
The following plot compares the producing CGR for well PR7, whose zone is produced under
depletion only. The decrease in CGR for this well is only due to condensate dropout within the
reservoir (no gas re-injection), and this shows that the lumped EOS is able to accurately capture
this effect.
Analysis of the run time also shows that using a lumped composition results in a decreased
calculation time for the reservoir simulator.
Therefore this example demonstrates that the objectives of the lumping/delumping methodology
are achieved:
Perform integration between applications having a different PVT description
Ensure that the results are consistent compared to each application using the full EOS
composition.
3.4.3.4 Example 2.3.4: Mixed Cluster Sensitivity
3.4.3.4.1 Overview
1. Example Introduction
The objective of this example is to demonstrate how to run a mixed cluster sensitivity using the
Case Manager Data Object. If the user is not familiar with the Case Manager, it is
recommended to complete Example 6.8: Case Manager.
The field is modelled in RESOLVE, where the production network in GAP is coupled to a
tNavigator model. We wish to perform a sensitivity on this integrated model in order to vary
parameters from both the reservoir model and the RESOLVE model, and we wish to harness the
computational power of clusters to do so.
The tNavigator model will be running on Linux on an LSF cluster and the IPM models will be
running under PxCluster. Therefore it is required to have a working Linux LSF cluster and to be
able to submit a job on LSF from RESOLVE. If this is not the case, please refer to the tNavigator
Remote Linux Run section of this manual.
Similarly it will be required to have a working PxCluster setup. This example can be completed
whether a local cluster is used or whether a distributed network cluster is used.
The field being modelled consists of 3 producer wells and 4 water injector wells.
Surface network:
The surface network and reservoir model are coupled in RESOLVE. The RESOLVE model
includes a workflow called 'Voidage' that performs a voidage calculation and defines a water
injection rate target (based on a given voidage replacement target). If the target is not met by
the injection system, new injectors are added: this logic is contained in the model's post-solve
workflow.
The RESOLVE model of the field is provided. The objective of this example will be to set up the
Case Manager in order to generate and run these cases, and extract the results. This example
will demonstrate how to set up a mixed cluster sensitivity, where several cases are run
simultaneously with tNavigator on Linux LSF and IPM on PxCluster.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both tNavigator and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\TNAVIGATOR
\Example_2_4_4-Mixed-Cluster-Sensitivity
This folder contains a file "Mixed Cluster Sensitivity tNavigator.rsa" which is a "RESOLVE
archive file" that contains the RESOLVE file, tNavigator file, GAP file and other associated files
required to go through the example. The archive file needs to be extracted either in the current
location or a location of the user's choice.
Go to Step 1
3.4.3.4.2 Step 1: Test the RESOLVE model of the field
The objective of this step is to test that the RESOLVE model of the field is set up and runs
correctly before setting up the sensitivity cases.
The archive of the example contains another archive of the RESOLVE model of the field (GAP-
tNavigator.rsa). This archive should be placed and extracted in a directory which is accessible
to all the PxCluster nodes. This should be defined as a UNC path, e.g. \\edi-eng-phy1
\Cluster_share. This folder will be referred to as the 'shared folder' in the remainder of this
example.
Note: If a local PxCluster is used, then this folder does not need to be shared and the user may
work from the computer's local drive (e.g. C:\Example). This is true for the remainder of this
example: whenever UNC paths are used, if the user is using a local PxCluster, a local path may
be used.
Open the extracted RESOLVE model (GAP-tNavigator.rsl) by using the UNC path to the shared
folder. If you wish to ensure that the cluster nodes have access to this folder, perform a remote
connection on a cluster node and open the model using its UNC path as shown below.
When prompted with the following message, define the UNC file paths to the GAP model and
the Linux path to the tNavigator model.
Note: It is not required to specify 'Use Cluster' for the GAP model. If we specify 'Use cluster',
then when we submit a RESOLVE job to the cluster, RESOLVE itself will submit a GAP job to
the cluster, and this is unnecessary. By leaving the setting to 'Use local computer', when we
submit a RESOLVE job to the cluster, the GAP model will simply run on the same node as RESOLVE.
Note: the Linux account and password under which the case is to be submitted to the Linux
cluster should be specified under 'Reconfigure'.
Run a few steps of the forecast using the icon. The forecast should start and the tNavigator
job should appear on the Linux side using the 'bjobs' command.
Linux 'bjobs':
Stop the simulation, save the RESOLVE model and close this model. This completes the
testing of the model, go to Step 2.
3.4.3.4.3 Step 2: Create a new RESOLVE file and add Case Manager
Create a new RESOLVE file; this file can be located on your computer's local drive. This file will
contain the Case Manager Data Object, the various sensitivity cases to be run and the
sensitivity results. It is from the Case Manager that the sensitivity jobs will be submitted to the
cluster.
Set up the RESOLVE model to perform a single solve by setting the Forecast mode to 'Single
solve/optimisation only' in the Options | System Options menu.
Go to Step 3.
3.4.3.4.4 Step 3: Create the Case Manager variables
For each case to be run, the objective will be to:
- create a new folder on Linux and create a copy of the tNavigator data file in that folder. The
objective is to store the tNavigator results in separate folders for each case to be run.
- set the inputs into the physical model. In this case there will be two main inputs: the voidage
replacement target and the base tNavigator data file
- run the simulation
- retrieve the desired results and archive the IPM models. In this case we will be interested in the
separator rates and cumulatives, the injection manifold rate and the total number of injection
wells required.
In order to do this, it is required to add input and output variables to the Case Manager in the
variables tab. To add a variable:
- enter the variable name
- select the variable type
- enter a default value (optional): please see table below.
- click the icon
Having added the variables we obtain the following list in the Case Manager:
The Case Manager workflow will be creating directories on Linux in order to store the reservoir
model results in separate directories for each case. This is why we require the Windows-
mapped path to the Linux folder. Once the directories are created, it will be required to change
the path to the tNavigator data file in the 'physical' RESOLVE model: therefore we also need to know the local Linux path to the data files.
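The relationship between the Windows-mapped path and the local Linux path can be illustrated with a small helper. The share and mount-point names below are hypothetical, not paths from this example.

```python
def to_linux_path(mapped_path, mapped_root, linux_root):
    """Translate a Windows-mapped (UNC) path into the equivalent local
    Linux path. The two roots are hypothetical mount points used only
    to illustrate the mapping the Case Manager workflow relies on."""
    if not mapped_path.startswith(mapped_root):
        raise ValueError("path is outside the mapped root")
    tail = mapped_path[len(mapped_root):].lstrip("\\")
    return linux_root.rstrip("/") + "/" + tail.replace("\\", "/")

win = r"\\linuxhost\tnav_share\cases\CASE0\model.DATA"
lin = to_linux_path(win, r"\\linuxhost\tnav_share", "/export/tnav_share")
# lin -> "/export/tnav_share/cases/CASE0/model.DATA"
```

The workflow uses the Windows-mapped form to create the per-case directories, and the Linux form when pointing the 'physical' RESOLVE model at each case's data file.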
For the SepResults DataSet, select Edit and create the following columns:
In the 'Connection to models' panel, add a RESOLVE application with the label 'Resolve' (the
label will be the name of the instance, required to call OpenServer on it). Click the startup button,
and enter the UNC path to the 'physical' RESOLVE model.
Go to Step 4.
3.4.3.4.5 Step 4: Create and import the Case Manager workflow
In this step, the Case Manager workflow is created and edited in order to achieve the objectives
set out in Step 3, namely:
- create a new folder on Linux and create a copy of the data file in that folder.
- set the inputs into the physical model
- run the simulation
- retrieve the desired results and archive the IPM models
Go to the Workflows tab of the Case Manager and create a new workflow called 'Workflow1' by
clicking the icon.
Using the icon, import 'CaseManagerWorkflow.vwk' that was extracted from the main
example archive. The following workflow is imported.
The workflow performs the tasks mentioned above. The following elements of the workflow are
worth noting.
Note: the user under which PxCluster is running must have the rights to write to the Linux
directory.
Before doing this, it is required to set up several Data Stores and Data Objects that this
workflow will use.
Add a DataStore Data Object called 'Cases'. This will contain the different data files and
voidage targets and will be used by the workflow to create the cases.
In this Data Store, set up two columns called 'DataFiles' and 'Voidage'.
Populate the columns with the following values. The workflow (which will be added later) is set up
to create a case for each combination of tNavigator data file and voidage replacement target.
This will therefore make a total of 12 cases.
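The case-generation logic (one case per data file/voidage combination) can be sketched as a Cartesian product. The four data file names and three voidage targets below are hypothetical placeholders chosen to give the stated 12 cases; the real values live in the 'Cases' Data Store.

```python
from itertools import product

# Hypothetical inputs: 4 tNavigator data files x 3 voidage targets = 12 cases
# (the actual names and targets come from the 'Cases' Data Store).
data_files = ["BASE.DATA", "LOW.DATA", "MID.DATA", "HIGH.DATA"]
voidage_targets = [0.8, 1.0, 1.2]

# Name cases CASE0, CASE1, ... to match the workflow's naming convention.
cases = {f"CASE{i}": {"DataFile": df, "Voidage": v}
         for i, (df, v) in enumerate(product(data_files, voidage_targets))}
```

This mirrors the nested loop the controlling workflow performs before submitting the cases to the cluster.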
Create another Data Store called 'CasesKey': the workflow will write the correspondence
between the case names and the corresponding inputs.
Create 5 Data Set Data Objects called 'OilResults', 'GasResults', 'WaterResults', 'Water
Injected' and 'NumInjWells'. The workflow is set up to create the columns in these Data Sets and
populate them with the results of the cases.
3.4.3.4.6 Step 5: Import the controlling workflow
Double click on the workflow and, using the icon, import 'ControllingWorkflow.vwk', which
was extracted along with the main example archive. This workflow creates a number of cases in
the Case Manager, runs the cases and extracts the results from the Case Manager.
- Loop through the different data files and the different voidage targets
- Create a case for each data file/voidage combination and set the corresponding input for each
case
- Run all the cases
- Loop through the cases and extract the results to the OilResults, WaterInjected etc. DataSets.
Note: the workflow is set up to create cases called CASE0, CASE1 etc., therefore these will
also be the names of the RESOLVE archives and the directories created on Linux.
Go to Step 6.
3.4.3.4.7 Step 6: Run the cases and analyse the results
In Step 1, by test running the 'physical' RESOLVE model, it was verified that the LSF setup on
Linux for tNavigator was performed correctly.
If this is not already the case, PxCluster should be started before running the model. This can be
done from IPM Utilities:
A running cluster is indicated by the green cluster nodes, as shown below. For more information
on how to setup PxCluster, please refer to the Setting up PxCluster section of this manual.
Alternatively, a local cluster can be started by clicking the following button on the console:
If only a limited number of licenses is available (RESOLVE, tNavigator etc.), it may be necessary
to limit the number of jobs running in parallel. This can be done by entering a maximum number
of jobs via the 'Cluster options' button in the Cases tab of the Case Manager.
Run the model using the icon. When executed, the workflow will create and run the cases,
then extract the results into the results Data Sets that were created earlier.
Once the run is finished, the results can be analysed from those Data Sets. For instance, the
following shows plots of the oil rate profiles, water injection rates and required number of
injectors for several cases.
We can also observe that model archives have been saved for all the cases, and that folders
have been created on Linux with the reservoir results of each case. These can then be
examined for further analysis if required. The DataSet 'CasesKey' contains the correspondence
between the case names and the case inputs.
2. Licences required
Running this example will require the following licenses to be available to the user:
RESOLVE (1) and tNavigator (1).
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for tNavigator and GAP are registered.
Start RESOLVE, and open a new project using File | New or the icon .
Save the model as a new file: Sensitivity_tNav.rsl.
Go to Step 2
At the beginning of this step, it is assumed that the units of the RESOLVE model have been
initialised.
Moving forward, from the main menu, go to Edit System | Add Client program or select the
icon on the shortcut bar and from the resulting menu, select “tNavigator” and place it on
the canvas.
The icons can be moved by selecting the "Move" icon on the toolbar ( )
and then dragging them to the required positions.
Double-click on the tNavigator icon to access the tNavigator Case screen, as shown below.
Go to Step 3
Go to Step 4
3.4.3.5.4 Step 4: Select and setup the parameters of the reservoir model which will
be used for the sensitivity analysis
Click on the “Setup Tokens” button within the tNavigator case window. This will bring you to the
Edit IPM Tokens screen.
Two parameters of the reservoir model will be used as the sensitivity variables: the multiplier for
the vertical permeability (PermZ) and the depth of the water-oil contact (WOC). The tokens can be
set up in the data deck by highlighting the value of a sensitivity parameter and clicking "Create
new token", as shown for the example of WOC below.
Once the token is created, the value of a parameter (e.g. WOC) will be replaced by a token (e.g.
~WOC~).
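What the token substitution amounts to can be sketched in a few lines. The deck fragment and values below are illustrative, not the actual example deck; RESOLVE performs the equivalent replacement before the deck is handed to tNavigator:

```python
# Illustrative data deck fragment containing two ~TOKEN~ placeholders.
deck_template = """\
EQUIL
 7500  3500  ~WOC~  0.0
PERMZ-MULT
 ~PermZ~
"""

def fill_tokens(template: str, values: dict) -> str:
    """Replace each ~name~ placeholder with the case's value for that token."""
    deck = template
    for name, value in values.items():
        deck = deck.replace(f"~{name}~", str(value))
    return deck

deck = fill_tokens(deck_template, {"WOC": 7800.0, "PermZ": 1.1})
```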
Repeat the same procedure for the vertical permeability multiplier and replace it with token
~PermZ~. The selected tokens will be used further in a Visual workflow of the Sibyl data
object.
Go to Step 5
3.4.3.5.5 Step 5: Add and set up the Sibyl data object in the RESOLVE model
Add the Sibyl data object on the canvas in RESOLVE from Edit System | Add data| Sibyl.
This data object allows the user to perform sensitivity analysis on the variables of integrated
models, including reservoir models. The integral part of this data object is the Case Manager,
which runs the model with a controlling Visual workflow. Access to the user-defined variables
used in the workflow is obtained via OpenServer.
After all the elements have been added to the canvas, double click on the Sibyl data object to
obtain the data entry screen, as shown below.
On the Sibyl screen, define input variables. In this case, these will be the tokens defined in the
data deck earlier. To add these variables, click on Add in the Input variables screen.
From the new window “Setup analysis or forecast variable”, select tokens from the list of
variables ~PermZ~ and ~WOC~.
Define a distribution for each variable in the Probability distributions section of the Sibyl screen by
clicking on Add.
We assume that vertical permeability has a log-normal distribution, whereas the WOC depth
has a Normal (Gaussian) distribution.
The log-normal distribution is defined by selecting the Normal (Gaussian) distribution in the
Shape of curve section and ticking the box for the “Is logarithmic” option. In the Curve properties
section define values which determine the shape of the curve (e.g. Mean of logs=0.1 and Std
dev of logs=0.2).
The curve properties for the Normal (Gaussian) distribution selected for the WOC are as
follows: Mean=7800 m and Std dev.=100 m.
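The two distributions can be sketched with Python's standard library; the parameter values are those entered above, and the 40-case count anticipates the run size used later in this example:

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

def sample_permz() -> float:
    # Log-normal distribution: Mean of logs = 0.1, Std dev of logs = 0.2
    return rng.lognormvariate(0.1, 0.2)

def sample_woc() -> float:
    # Normal (Gaussian) distribution: Mean = 7800 m, Std dev = 100 m
    return rng.gauss(7800.0, 100.0)

# One (PermZ, WOC) pair per case, e.g. for a 40-case run
samples = [(sample_permz(), sample_woc()) for _ in range(40)]
```

The Sibyl data object draws the input values for each case in the same way, according to the distributions defined on this screen.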
Once the distribution functions are entered, they can be renamed by clicking on the Rename
button, as shown below. Then select these names in the Distribution column for the defined
variables.
After doing this, determine the parameters of the model by clicking on the “Physical model” tab.
Then, select the Initial state of the model to be "Open but do not load model" and make sure that
tNavigator is selected in the "Analyse on model" field.
Then click on Setup/View model to access the Case Manager, which is used in the
background of the data object to run the model according to the scenario defined by the user.
In the Variables tab, navigate to the Connection to models section and select TNAVIGATOR as
Application and Label. Make sure that the file path to the tNavigator case file in the Startup field
is the same as the file path in the tNavigator icon on the main canvas, as shown below.
At the bottom of this section, select “Open but do not load model”.
All the input variables in the Variables tab should be left unchanged, as the tab is configured to
automatically create variables from the tokens defined in the data deck.
In the Workflows tab, a workflow template is used. When the simulator is changed, it may be
necessary to adjust the OpenServer strings in the blocks of the workflow (i.e. Run simulation and
Shutdown contain the OpenServer strings corresponding to the simulator in use).
Once this is completed, navigate to the Run & Results tab of the Sibyl data object and Run the
model by clicking Run.
Depending on the preferences of the user, the model can be run on the PxCluster by selecting
the "On Cluster" option, or on a single core by selecting "Using model connection".
If the PxCluster is required for the run, it has to be enabled first from Wizards | Run PX Cluster
console. On the screen that appears, click Run standalone cluster and then navigate back to the
Run & Results tab to click Run.
Go to Step 6
3.4.3.5.6 Step 6: The run of the tNavigator model and analysis of modelling results
Before starting the run, select the output variables which will be reported by RESOLVE. The
value of output variables will change when the values for the input parameters are changed.
The following parameters are reported as the output variables:
Cumulative oil recovery from the field (Field:CumOil).
Once the variables are set, the number of run cases can be defined (e.g. 40) and the simulator
can be started by clicking on Run from the Run & Results section, as shown below.
Each simulation will run for 2000 days, which was set in the workflow in the Case Manager with
the RunToTime string. The input data will be supplied to the simulator according to the
distributions defined earlier.
The following results are obtained for this run:
These results show that the shallower the depth of the WOC, the higher the water production.
This is expected, as the depth of the WOC in this case is closer to the perforation interval of the
producing wells, which is fixed in this model (i.e. 7500 ft). In contrast, a higher multiplier
for the vertical permeability leads to higher oil production.
These results can be used for the drilling plan and specifically the depth of well completion.
3.4.4 IMEX/GEM
3.4.4.1 Example 2.4.1: GAP-IMEX Connection
3.4.4.1.1 Overview
1. Example Introduction
This exercise demonstrates how to set up a connection between a GAP production network,
a water injection network and a reservoir simulation model built using IMEX.
The field being modelled consists of 3 producer wells and 4 water injector wells, the intention
being to determine the production over the course of a 5-year prediction.
Surface network:
The first objective is to couple the GAP and IMEX models, and the second to run the model and
determine this production and injection behaviour.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE (1), GAP (1) and IMEX (1).
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for IMEX, REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\IMEX-GEM
\Example_2_4_1-GAP_IMEX
This folder contains a file "GAP_IMEX.rsa" which is a "RESOLVE archive file" that contains the
RESOLVE file, IMEX files, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
Go to Step 1
Step 1 Objective:
Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
IMEX.rsl).
Go to Step 2
3.4.4.1.3 Step 2 - Create IMEX instance
Step 2 Objective:
Create an IMEX instance in the RESOLVE model
The next step is to create an IMEX instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to System | Create instance or select the icon.
Next, click on "Start". IMEX will start and load the required case. It will then query the case for its
sources and sinks (wells) and will display these on the screen as shown below. The icons can
be moved by selecting the "Move" icon on the toolbar ( ) and then dragging them to the
required positions.
The type of the well (which is obtained from the query of IMEX) can be found by double-clicking
on the separate icons.
Go to Step 3
3.4.4.1.4 Step 3 - Create GAP production instance
Step 3 Objective:
Create the GAP production instance in the RESOLVE model
Repeat the previous step to create an instance of GAP on the RESOLVE main screen.
Label the created GAP case "PROD" and browse for the model "Oil Field.gap".
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
Under the snapshot mode section, select "Always save forecast snapshots". This saves a
snapshot of each prediction timestep in GAP, allowing the user to reload a copy of each
snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Also note that the “Rule based solver” box has been ticked for GAP. This uses simple
engineering rules to meet constraints in the GAP network, and is significantly faster than a full
optimisation (although it may produce slightly less oil/gas). More information on this can be
found in section 2.10.5.2 of the GAP User Manual.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP file contains production and water injection models. In the production model, three
production wells, PROD1, PROD2 and PROD3 will be found. These are the same wells
identified from the IMEX case. One can look at the GAP interface (i.e. the GAP model will be
open on the windows taskbar) to confirm the contents of the GAP file by clicking on Window |
Tile vertically from the main GAP menu.
Go to Step 4
3.4.4.1.5 Step 4 - Connect the production wells
Step 4 Objective:
Connect the production wells from the IMEX model to the production wells from the
GAP production network model.
Connect the PROD1 icon in IMEX to PROD1 in GAP by clicking into the first icon and dragging
the connection to the second. Repeat this for the other producer wells.
Note that it is possible to make the connections using the "Connection wizard".
This is obtained by invoking Edit System | Connection Wizard under the main menu.
Go to Step 5
3.4.4.1.6 Step 5 - Create GAP water injection instance
Step 5 Objective:
Load the GAP Water Injection model and connect the wells to their counterparts in
IMEX
The original GAP file that was loaded earlier contains an associated water injection system;
thus it is possible to create an instance of GAP for the water injection system in the RESOLVE
model.
For the filename, enter the same GAP production system model. This is because the
injection system is associated (i.e. linked) with the production system in one GAP file.
Consequently, the "associated water injection" option has to be selected from the drop-down
menu for the system.
An advantage of this is that it allows the production and injection systems of GAP to be
modelled simultaneously using only a single GAP license.
Press OK and the injection system (i.e. the injection wells and manifold) will be displayed.
After this, connect the GAP and IMEX icons together appropriately.
Go to Step 6
Step 6 Objective:
Setup the IMEX model options
Before the simulation is run, some further changes can be made to the configuration of the
IMEX link.
To select these options, double-click on the IMEX icon and select 'Corrected' under IPR model.
Click on the red cross to perform the pre-run calculations required for this method.
The calculation will be performed and the results will be as shown below:
Click on "OK" and "OK" again to return to the "IMEX Case Settings" interface.
When GAP solves/optimises its system, RESOLVE will return the result as an operating point
for the well on the inflow relation that IMEX passed for that well, i.e. a BHP, phase rates, and a
THP. IMEX will then have to control that well with a fixed boundary condition for the duration of
the next timestep. The user can select which boundary condition should be used. Here we will
use the 'Follow Data Deck' option. Typically this will be done using a single phase.
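The hand-off described above can be pictured with a toy IPR table (all numbers illustrative): the BHP found by the GAP solve is located on the inflow relation that the simulator passed, and the interpolated rate forms part of the operating point returned for the next timestep:

```python
# Illustrative IPR table passed from the simulator: (BHP in psig, oil rate in STB/d).
ipr = [(3500.0, 0.0), (3000.0, 1500.0), (2500.0, 2800.0), (2000.0, 3900.0)]

def rate_at_bhp(table, bhp: float) -> float:
    """Linear interpolation of the operating rate between IPR table points."""
    pts = sorted(table)  # ascending BHP
    for (p0, q0), (p1, q1) in zip(pts, pts[1:]):
        if p0 <= bhp <= p1:
            w = (bhp - p0) / (p1 - p0)
            return q0 + w * (q1 - q0)
    raise ValueError("BHP outside IPR table range")

q = rate_at_bhp(ipr, 2750.0)  # operating point found by the GAP solve
```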
Go to Step 7
3.4.4.1.8 Step 7 - Setup Forecast Schedule
Step 7 Objective:
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2.1.3" section.
Invoke the schedule screen from the main menu using Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button.
This will display a screen allowing the user to select the required start date from a list of the
various model start dates.
The timestep and schedule duration are also entered here as shown. Enter the data in the
screen shot above.
Here we will synchronise GAP and IMEX every 1 month until the schedule completes on
1/1/2020.
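The synchronisation dates implied by this schedule can be sketched with a small helper. The start date below is hypothetical (the example selects it from the client modules); the schedule completes on 1/1/2020 as stated above:

```python
from datetime import date

def monthly_schedule(start: date, end: date):
    """Synchronisation dates at 1-month steps from start to end inclusive."""
    dates, y, m = [], start.year, start.month
    while True:
        d = date(y, m, start.day)
        if d > end:
            return dates
        dates.append(d)
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)

# Hypothetical start date; GAP and IMEX exchange data at each of these dates.
sync_dates = monthly_schedule(date(2015, 1, 1), date(2020, 1, 1))
```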
Go to Step 8
3.4.4.1.9 Step 8 - Publish Variables
Step 8 Objective:
Publish the GAP variables to report in the RESOLVE results section
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to report all the
variables for the wells and separator in the GAP production model as well as the injection wells
and manifold in the GAP injection model.
RESOLVE can automatically build a list of the GAP variables available and that can be reported
directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows the user to import variables from the various applications present in the model.
Select the 'Production' tab, and click Edit variables. A list of variables is available to import.
These consist of output variables such as solver results or cumulatives, or input variables such
as constraints or items' masking variables. If a variable is required and is not included in this
list, it is always possible to Copy and Paste the corresponding OpenServer string in the
'Variable string' field.
Select 'Sep1' and click the red arrow: this will import all the variables corresponding to Sep1.
Repeat this for the three production wells.
Click OK. In the screen below, click 'Plot invert selection' so that a tick appears in the 'Add
to plot' column for all variables. This ensures that all variables are accessible from the plotting
window.
Go to the 'Gas_Injection' tab and do the same for the injection model. Output all variables for
the injection wells and manifold (IM1) and proceed to the next step.
Make sure that there are no screens left open in GAP as this can interfere with the remote
operation of GAP by RESOLVE.
Go to Step 9
3.4.4.1.10 Step 9 - Run the Forecast
Step 9 Objective:
Run the prediction forecast
To run the forecast from beginning to end without stopping, press the icon.
Note that the run can be paused or stopped with other toolbar icons.
Once the forecast has been started, IMEX will perform an equilibration calculation.
The equilibrated reservoir data will be passed to GAP in the form of well IPR curves (i.e. for both
producers and injectors). GAP will use this data to solve and optimise the system. The solution
points will then be returned to IMEX ready to take the first month's timestep. Before this, the
RESOLVE forecast enters "pause" mode.
To run the rest of the forecast without stopping, press the icon. Note that the run can be
paused or stopped with other toolbar icons.
Go to Step 10
3.4.4.1.11 Step 10 - Analyse the Results
Step 10 Objective:
Analysing the Results
RESOLVE has a set of results; the amount of results reported in the RESOLVE model
is a function of the variables that have been published by the user prior to the run itself.
Refer to Step 8 for further details.
It is also possible to view all the results of a simulation for any of the client applications
in the application itself.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon.
These results can also be displayed as the run is proceeding.
For this specific case, we want to first analyse the oil that has been produced for the entire
production system as well as for each individual well.
All the nodes of the RESOLVE model are listed in the left hand side of the screen.
3.4.4.2 Example 2.4.2: GAP-GEM Connection
1. Example Introduction
In this example, GAP and GEM have the same number of components and both consider a full
composition. If the objective is to achieve integration between a GAP model and a GEM model
having a different number of components (typically a reduced or lumped composition in GEM
and a full composition in GAP), please refer to the Lumping/Delumping example.
A condensate field is to be modeled. This field is being produced for its condensate production,
and there are no export facilities or market for the gas. All the produced gas must therefore be
re-injected in the reservoir and the production is currently limited by the ability of the surface
facilities to compress and re-inject the gas.
A GEM model of the reservoir is available, along with a GAP model of the surface network
(including the compressors) and a fully characterised equation of state of the reservoir fluid. The
field has 3 producers and 2 injectors.
The fluid is a condensate with a single-stage flash CGR of 136.37 STB/MMscf, API of 58.21
and the following phase envelope.
The GAP injection and production models are as follows. The condensate is separated from the
gas at the defined separator pressure (the temperature of separation is calculated by the
network). The condensate is sent to the separator 'Oil' and the gas to be re-injected sent to the
separator 'Reinjection gas'. The gas injection network includes a compressor which models the
gas handling facility.
When performing integration between compositional models, the following should be noted:
At the beginning of every time step, the IPR is passed from the reservoir simulator to GAP in
the form of a table of phase rates vs BHP. This is identical to the Black Oil case.
The IPR table contains phase rates at standard conditions, therefore it is important to ensure
that the separator train is consistent between the reservoir simulator and GAP. If this is not the
case, this will lead to mass inconsistencies between the models.
The composition of the produced fluid is passed from the reservoir simulator to GAP at every
time step. Similarly the composition of the re-injected gas is passed from GAP to the
reservoir simulator.
Only mole percentages are passed between applications, therefore it is important to ensure
that the components properties are consistent between applications.
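Since only mole percentages cross the boundary, each application must renormalise the received composition and match components by name. A minimal sketch, with an illustrative component list:

```python
def normalise(comp: dict) -> dict:
    """Rescale a composition so the mole percentages sum to exactly 100."""
    total = sum(comp.values())
    return {name: 100.0 * z / total for name, z in comp.items()}

# Illustrative compositions (mole %): a condensate-bearing produced stream
# and a leaner re-injection gas whose raw entries do not quite sum to 100.
produced = {"C1": 62.0, "C2": 8.0, "C3": 5.0, "C7+": 25.0}
reinjected = normalise({"C1": 80.0, "C2": 12.0, "C3": 6.0})
```

Component properties (molecular weights, critical properties) are not exchanged, which is why they must already be consistent between the two applications.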
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both GEM and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\IMEX-GEM
\Example_2_4_2-GAP_GEM
This folder contains a file "GEM Full Composition.rsa" which is a "RESOLVE archive file" that
contains the RESOLVE file, the GEM model, GAP file and other associated files required to go
through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.4.4.2.1 Step 1: Create new file
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Access the Controls/EOS section and change the range of validity for the Volume Shift to
include negative values:
Now is a good time to save the file using File | Save As..., and enter a file name (e.g.
GAP_REVEAL_compositional.rsl).
Go to Step 2
3.4.4.2.2 Step 2: Add an instance of GEM
The next step is to create a GEM instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to Edit System| Add Client Program or select the icon.
From the displayed menu list, select "GEM". Click on the main screen where the GEM icon
is to be located, and give the case a label (say, "GEM").
In the File name field, browse to the GEM data deck 'Reservoir.dat':
After that, clicking OK will return to the main screen and open up the GEM model:
Note: one can use the Move tool to move the wells in the screen.
The GEM reservoir model has 5 wells in total:
- Wells P1 to P3 are producers
- Wells I1 and I2 are gas injectors
Once the GEM deck is loaded, select 'Corrected' as the IPR model (ref. IPR Model topic).
Then select the red cross (pre-calculation). The program will calculate the parameters required
to correct the IPR.
Go to Step 3
3.4.4.2.3 Step 3: Add instances of GAP
The next step is to create the GAP instance for the production network.
From the main menu, go to Edit System | Add Client program or select the icon. From
the resulting menu, select "GAP". Click on the main screen where to position the GAP icon, and
give the case a label (say, "Production").
For the file name, browse to the file "ProductionTutorial.gap" as shown above.
Also select the option to "Always save forecast snapshots" under snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is pressed, GAP will start and load the required case. It will then query the case for its
sources and sinks (wells) and will display these on the screen as shown below.
The next step is to create the GAP instance for the injection network. From the main menu, go
to Edit System | Add Client program or select the icon. From the resulting menu, select
"GAP". Click on the main screen where to position the GAP icon, and give the case a label (say,
"Injection").
For the file name, browse to the file "Production.gap" as shown above. This is the main
production model. As this network model has an associated gas injection network model,
select "Associated Gas Injection" as the System.
Also select the option to "Always save forecast snapshots" under snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes. When OK is pressed, GAP will start and load the required case.
Go to Step 4
3.4.4.2.4 Step 4: Make the connections
The next step is to connect the sources and sinks of the different applications. To connect the
systems, go to "link" mode by pressing the icon and link the different items by drag and drop.
It is required to connect:
The production wells of GEM and the production wells of GAP
The injection wells of GEM and the injection wells of GAP
The 'Reinjection Gas' separator of the production system to the 'IM1' manifold of the injection
system.
When a well of GAP is connected to a well of GEM, IPR data and compositional data are passed
at every time step. When the 'Reinjection Gas' separator is connected to the 'IM1' injection
manifold, the following data is passed:
pressure and temperature
composition
the gas rate at the 'Reinjection Gas' separator is passed as a maximum gas rate constraint
on the 'IM1' manifold. This ensures that the injection system does not inject more gas than the
amount produced.
Go to Step 5.
3.4.4.2.5 Step 5: Import application variables
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to publish additional
variables for reporting and to be able to set up a controlling logic.
It is required to publish:
The oil rate from separator 'Oil'
The gas rate from separator 'Reinjection Gas'
The maximum gas rate constraint from separator 'Reinjection Gas'
The gas rate from injection manifold 'IM1'
The maximum gas rate constraint from injection manifold 'IM1'
From the menu, enter Variables | Import application variables. Import the variables listed
above for the production and the injection systems by selecting the corresponding tab and
clicking Edit variables.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Go to Step 6.
3.4.4.2.6 Step 6: Set up the feedback loop
The next step is to set up the feedback loop which ensures that the injection system does not
inject more gas than the amount produced. This can be implemented by taking the following
steps. Create a direct link from the injection to the production system using the icon.
Double click on the link, and pass the injection manifold gas rate to the 'Reinjection Gas'
separator maximum gas rate.
Create the following Pre-Solve workflow, using an Assignment element. The objective of this
workflow is to reset the constraint on the production system at the beginning of every time step.
The final step to implement the feedback loop is to configure the loop, by entering the Run |
Edit Loop menu. Define the following fluid connection convergence item with a convergence of
1, and enter the 'Maximum number of iterations' as 3. RESOLVE will consider the loop
converged if the produced gas and the re-injected gas are within 1 MMscf/d of each other, or if
the maximum number of iterations is reached.
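The loop logic can be sketched with stand-in solves. The rates and the 120 MMscf/d compressor limit below are invented for illustration; in RESOLVE, the real GAP production and injection solves play these roles, and the direct link plus the Pre-Solve workflow carry the constraint:

```python
def solve_production(max_gas: float) -> float:
    # Stand-in for the GAP production solve: the network would make
    # 130 MMscf/d of gas unconstrained, capped by the separator constraint.
    return min(130.0, max_gas)

def solve_injection(available_gas: float) -> float:
    # Stand-in for the GAP injection solve: compressors handle 120 MMscf/d.
    return min(available_gas, 120.0)

tolerance, max_iterations = 1.0, 3   # the convergence settings entered above
max_gas = float("inf")               # Pre-Solve workflow resets the constraint
iterations = 0
for _ in range(max_iterations):
    iterations += 1
    produced = solve_production(max_gas)
    injected = solve_injection(produced)
    if abs(produced - injected) <= tolerance:
        break                        # produced and re-injected gas agree
    max_gas = injected               # direct link: injected rate -> constraint
```

With these stand-ins the loop converges on the second iteration, once production has been throttled to what the compressors can re-inject.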
Go to Step 7.
3.4.4.2.7 Step 7: Enter the schedule
To setup the RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2.1.3" section.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates. In this case the start
date is 01/10/2024.
The timestep and schedule duration are also entered here as shown. All the linked
application models will be synchronised at each timestep until the schedule completes on
01/10/2025.
3.4.4.2.8 Step 8: Run the model
The simulation is now ready to be run.
Run the forecast from beginning to end. Do this by pressing the icon.
The first action performed is the initialisation of both modules. In the case of GEM, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications. Map the components by
clicking the 'Add All' button, and do this for all the wells.
At the end of the run, the following production profile and cumulative oil production is obtained.
It is possible to verify that the injection system has been able to re-inject all the produced gas.
At the reservoir level, the injected gas can be clearly seen by looking at the mole fraction of C1,
which is higher in the injected gas than in the fluid which saturates the
reservoir. In this example, the produced gas rate and oil rate increase during the run.
This concludes the compositional integration example. The next example looks at lumping the
reservoir composition into a smaller composition to speed up the reservoir calculations. It
introduces the Lumping/Delumping technique in RESOLVE to perform integration between
applications which have different requirements as to the number of components used.
3.4.4.3 Example 2.4.3 GAP - GEM Lumping/Delumping
1. Example Introduction
The objective of this example is to demonstrate the steps required to achieve integration
between a compositional GAP model and a compositional GEM model having a different
number of components, using the Lumping/Delumping method in RESOLVE.
This example builds on Example 2.4.2, in which integration was performed between a
compositional GEM model and a compositional GAP model having the same number of
components (13 components), and it is recommended that the user completes this example
first. This was done in the context of a condensate field with gas recycling. However, as detailed
in the Lumping/Delumping section, different applications have different requirements regarding
the number of components used. Generally a reservoir simulator requires a reduced
composition to avoid excessive run times, while the surface network requires a detailed
composition if the objective is to perform temperature prediction and flow assurance
calculations.
Each module uses the PVT modelling approach best suited to that tool, that is to say:
The GEM reservoir model uses a grouped (6 pseudo components) fully compositional PVT
description.
The GAP surface network model uses an extended fully compositional (13 components) PVT
description.
Before completing this example, it may be preferable to complete Example 2.4.2 as this
example builds on it. The field considered is a condensate field which is being produced for its
condensate production. All the produced gas needs to be re-injected, and the production is
constrained by the capacity of surface facilities to re-inject the gas.
The GAP model of the surface network (including production and gas injection networks),
set up with the full composition
The RESOLVE model as built in Example 2.4.2. This model is set up to ensure that the
system does not produce more gas than it can re-inject.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both GEM and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\IMEX-GEM
\Example_2_4_3-GAP_GEM_Lumping_Delumping
This folder contains a file "GEM Lumping Delumping Start.rsa" which is a "RESOLVE archive
file" that contains the RESOLVE file, GEM file, GAP file and other associated files required to go
through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.4.4.3.1 Step 1: Open the RESOLVE model
Open the RESOLVE model provided in the archive, named 'LumpingDelumping.rsl'. This
contains the model built in Example 2.4.2. It includes:
The GAP production and injection network
The GEM model
The feedback loop and Pre-Solve workflow required to ensure that the produced gas can be
re-injected.
Currently the model is set up using a full composition in the surface network and a lumped
composition in the reservoir simulator. In the next steps, the equivalent lumped composition is
built, and the RESOLVE model set up to perform lumping/delumping.
3.4.4.3.2 Step 2: Create the lumped composition in PVTp
Open 'FullComposition.pvi' in PVTp. This contains the full EOS as used in Example 2.4.2.
Begin by adding a Lumping object to the characterization screen via the Characterization
ribbon.
So that we can store the resultant composition, also add a new PVT fluid and name it PVT
Fluid_Full:
Double click on the Lumping object to open it. In the window that opens the 'Lumping Method' is
set to 'Manual Lumping': this allows the user to manually create the lumping rule and to choose
how to lump the components together.
Select the components that will be part of each lump at the bottom-right hand side of the table,
then click Add Lump. As a rule of thumb, components with similar molecular weights can be lumped
together. In any case, finding the best way of lumping is a trial and error process, based on
having a final lumped EOS as close to the original EOS as possible. Create the lumps shown
below, and select Hold the pseudo C17::C20 during lumping. If selected, this option keeps
the molar fraction of individual components constant through lumping. Then click on Calculate.
The quality of the lumped composition created can be verified by calculating the phase
envelopes or simulating experiments such as a CVD, as shown below.
Add an EOS Export object to the characterization screen via the Characterization ribbon.
Within the Export EOS object select "IPM EoS Composition" as the Export type and "Full and
Lumped" as the Export composition, to export the full and the lumped composition together in a
single file. Enable the export of the lumping rule as well. Save this as 'Full and Lumped.prp'.
Click 'Export'.
N.B:
The GEM data deck provided has already been set up to use the lumped composition created.
If this had not been the case, PVTp can be used to generate the EOS include file via the EOS
object. When the object is opened, select the CMG (Compositional) Format as the Export type
and "Lumped" in the Export composition menu.
3.4.4.3.3 Step 3: Import the lumping rule in RESOLVE
RESOLVE performs the lumping/delumping calculation during the run to map the full
composition to the lumped. The full and the lumped compositions, along with the lumping rule,
now need to be imported into RESOLVE.
In this window, a pair of EOS (full and lumped) is defined for each pair of connected
applications which requires lumping or delumping. In this example, we need to perform
delumping from the reservoir to the production network, and lumping from the gas injection
network to the reservoir.
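The mole-fraction arithmetic behind this mapping can be sketched as follows. The component names, lumping rule and mole fractions below are illustrative only (not the 13-component EOS of this example): lumping sums the detailed mole fractions assigned to each pseudo component, while delumping splits a lumped fraction back out in proportion to a reference detailed composition.

```python
# Hypothetical lumping rule: each lump lists its detailed components.
lumping_rule = {
    "C1N2": ["N2", "C1"],
    "C2C3": ["C2", "C3"],
    "C4+":  ["C4", "C5", "C6"],
}

# Hypothetical detailed composition (mole fractions summing to 1).
full = {"N2": 0.02, "C1": 0.70, "C2": 0.10, "C3": 0.06,
        "C4": 0.05, "C5": 0.04, "C6": 0.03}

def lump(z_full, rule):
    """Lumping: the mole fraction of each pseudo component is the
    sum of the detailed mole fractions assigned to it."""
    return {name: sum(z_full[c] for c in members)
            for name, members in rule.items()}

def delump(z_lumped, z_reference, rule):
    """Delumping: split each lumped fraction back to the detailed set
    in proportion to a reference detailed composition."""
    z = {}
    for name, members in rule.items():
        total_ref = sum(z_reference[c] for c in members)
        for c in members:
            z[c] = z_lumped[name] * z_reference[c] / total_ref
    return z

z_lumped = lump(full, lumping_rule)            # 7 components -> 3 pseudos
z_detailed = delump(z_lumped, full, lumping_rule)  # back to the detailed set
```

Note that both operations conserve total moles (the fractions still sum to 1), which is why consistent component properties on each side of the link matter.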
In the 'GEM-Production' tab, click on the red 'Setup' button and import 'Full and Lumped.prp'
created in Step 2. Perform the same operation in the 'GEM - Gas_Injection' tab.
3.4.4.3.4 Step 4: Run the model
Run the forecast from beginning to end. Do this by pressing the icon.
The first action performed is the initialisation of both modules. In the case of GEM, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications.
Between the production network and the gas injection network, no lumping or delumping is
required and the components can be mapped directly, by selecting them from the lists and
clicking 'Add individual connection' for each component.
From the reservoir to the production network, delumping is required. Select the connection 'P1-
>PR1', and under 'Resolve lumping/delumping' select 'External delumping'. The components
can then be mapped. Repeat this for the other three production wells.
From the gas injection network to the reservoir, lumping is required. Select the connection 'I1-
>Inj1', and under 'Resolve lumping/delumping' select 'External Lumping'. The lumped
components can then be mapped. Repeat this for the other two injection wells.
Once the run is finished, the results can be analysed. The RESOLVE file contains saved results
from Example 2.4.2 (obtained with a full composition throughout), which can be compared with
the lumping/delumping approach followed here. The following plot compares the oil production
profile for the two cases. The results are very close, in particular considering the complexity of
the problem, with delumping from the reservoir to the production network, lumping from the gas
injection network to the reservoir and condensate dropout within the reservoir.
Analysis of the run time also shows that using a lumped composition results in a decreased
calculation time for the reservoir simulator.
Therefore this example demonstrates that the objectives of the lumping/delumping methodology
are achieved:
Perform integration between applications having a different PVT description
Ensure that the results are consistent compared to each application using the full EOS
composition.
3.4.5 NEXUS
3.4.5.1 Example 2.5.1: GAP-Nexus Connection
3.4.5.1.1 Overview
1. Example Introduction
This exercise demonstrates how to set up a connection between a GAP
production network, a water injection network and a reservoir simulation model built in Nexus.
The field being modelled consists of 3 producer wells and 4 water injector wells, with the
intention being to determine the production over the course of a 5 year prediction.
Surface network:
The first objective is to couple the GAP and Nexus models, and the second to run the model and
determine the production and injection behaviour.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 1, GAP: 1, Nexus: 1
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both Nexus and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\NEXUS
\Example_2_5_1-GAP_Nexus
This folder contains a file "GAP_Nexus.rsa" which is a "RESOLVE archive file" that contains the
RESOLVE file, Nexus files, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
Go to Step 1
3.4.5.1.2 Step 1 - Start a new RESOLVE project
Step 1 Objective:
Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
Nexus.rsl).
Go to Step 2
3.4.5.1.3 Step 2 - Create Nexus instance
Step 2 Objective:
Create a Nexus instance in the RESOLVE model
The include files of the Nexus model are located in a zip file 'NexusBaggage.zip' which is
included in the example archive. Unzip this file, such that all the include files are located inside a
folder called 'nexus_data' which should be placed in the same folder as 'Nexus.fcs'.
Note:
For Nexus to be controlled by RESOLVE, the following keyword should be placed in the
xxx_surface.dat file. This has already been done for the files provided.
The next step is to create a Nexus instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to System | Create instance or select the icon.
The study file will create a folder containing files generated by Nexus, so enter the location
where these should be stored.
Next click on "Start", Nexus will start and load the required case. It will then query the case for
its sources and sinks (wells) and will display these on the screen as shown below. The icons
can be moved by selecting the "Move" icon on the toolbar ( ) and then dragging them to the
required positions.
The type of the well (which is obtained from the query of Nexus) can be found by double-clicking
on the separate icons.
Go to Step 3
3.4.5.1.4 Step 3 - Create the GAP production instance
Step 3 Objective:
Create the GAP production instance in the RESOLVE model
Repeat the previous step to create an instance of GAP on the RESOLVE main screen.
Label the created GAP case "PROD" and browse for the model "Oil Field.gap".
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
Under the snapshot mode section, select "Always save forecast snapshots". This saves a
snapshot of each prediction timestep in GAP, allowing the user to reload a copy of each
snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Also note that the “Rule based solver” box has been ticked for GAP. This uses simple
engineering rules to meet constraints in the GAP network, and is significantly faster than a full
optimisation (although it may produce slightly less oil/gas). More information on this can be
found in section 2.10.5.2 of the GAP User Manual.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP file contains production and water injection models. In the production model, three
production wells, PROD1, PROD2 and PROD3 will be found. These are the same wells
identified from the Nexus case. One can look at the GAP interface (i.e. the GAP model will be
open on the windows taskbar) to confirm the contents of the GAP file by clicking on Window |
Tile vertically from the main GAP menu.
Go to Step 4
3.4.5.1.5 Step 4 - Connect the production wells
Step 4 Objective:
Connect the production wells from the Nexus model to the production wells from the
GAP production network model.
Connect the PROD1 icon in Nexus to PROD1 in GAP by clicking into the first icon and dragging
the connection to the second. Repeat this for the other producer wells.
Note that it is possible to make the connections using the "Connection wizard".
This is obtained by invoking Edit System | Connection Wizard under the main menu.
Go to Step 5
3.4.5.1.6 Step 5 - Create GAP water injection instance
Step 5 Objective:
Load the GAP Water Injection model and connect the wells to their counterparts in
Nexus
The original GAP file that was loaded earlier contains an associated water injection system;
thus it is possible to create an instance of GAP for the water injection system in the RESOLVE
model.
For the filename, enter the same GAP production system model. This is because the
injection system is associated (i.e. linked) with the production system in one GAP file.
Consequently, the "associated water injection" option has to be selected from the drop-down
menu for the system.
An advantage of this is that it allows the production and injection systems of GAP to be
modelled simultaneously using only a single GAP license.
Press OK and the injection system well (i.e. an injection manifold) will be displayed.
After this, connect the GAP and Nexus icons together appropriately.
Go to Step 6
3.4.5.1.7 Step 6 - Setup the Nexus model options
Step 6 Objective:
Setup the Nexus model options
Before the simulation is run, some further changes can be made to the configuration of the
Nexus link.
To select these options, double-click on the Nexus icon and select 'Drainage region (Petex)'
under IPR model.
Click on "Calculate" to perform the pre-run calculations required for this method.
The calculation will be performed and the results will be as shown below:
Click on "OK" and "OK" again to return to the "Nexus Case Settings" interface.
When GAP solves/optimises its system, RESOLVE will return the result as an operating point
for the well on the inflow relation that Nexus passed for that well, i.e. a BHP, phase rates, and a
THP. Nexus will then have to control that well with a fixed boundary condition for the duration of
the next timestep. The user can select which boundary condition should be used. Here we will
use ‘Combined’.
Go to Step 7
3.4.5.1.8 Step 7 - Setup Forecast Schedule
Step 7 Objective:
Setup the forecast schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2.1.3" section.
Invoke the schedule screen from the main menu using Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button.
This will display a screen allowing the user to select the required start date from a list of the various
model start dates.
The timestep and schedule duration are also entered here; enter the data shown in the
screenshot above.
Here we will synchronise GAP and Nexus every 1 month until the schedule completes on
1/1/2020.
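The synchronisation points implied by this schedule can be enumerated with a short sketch. The start date used below is an assumption for illustration only, since the real one is read from the connected modules; the end date is the 1/1/2020 quoted above.

```python
from datetime import date

def monthly_sync_dates(start, end):
    """List the dates at which the linked applications are synchronised,
    stepping one calendar month at a time (day-of-month held fixed)."""
    dates, current = [], start
    while current <= end:
        dates.append(current)
        # advance one calendar month
        year = current.year + (current.month // 12)
        month = current.month % 12 + 1
        current = date(year, month, current.day)
    return dates

# Assumed start date for illustration; the real one comes from the modules.
sync = monthly_sync_dates(date(2015, 1, 1), date(2020, 1, 1))
print(len(sync))  # 61 monthly synchronisation points, inclusive of both ends
```

At each of these dates RESOLVE passes the IPRs from Nexus to GAP, solves the networks, and returns the operating points before Nexus takes its next timestep.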
Go to Step 8
3.4.5.1.9 Step 8 - Publish Variables
Step 8 Objective:
Publish the GAP variables to report in the RESOLVE results section
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to report all the
variables for the wells and separator in the GAP production model as well as the injection wells
and manifold in the GAP injection model.
RESOLVE can automatically build a list of the GAP variables available that can be reported
directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows the user to import variables from the various applications present in the model.
Select the 'Production' tab, and click Edit variables. A list of variables is available to import.
These consist of output variables such as solver results or cumulatives, or input variables such
as constraints or items' masking variables. If a variable is required and is not included in this
list, it is always possible to Copy and Paste the corresponding OpenServer string in the
'Variable string' field.
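As an illustration of the shape such an OpenServer string takes, a small helper can assemble one. The 'PROD' instance label and 'PROD1' well come from this example, but the exact field path used below is an assumption to be confirmed against the variable browser before use.

```python
def gap_variable_string(model, item_type, label, field):
    """Assemble a GAP OpenServer tag of the common
    GAP.MOD[{model}].TYPE[{label}].field shape.
    The field path is an assumption: check it in the RESOLVE
    variable browser or GAP's OpenServer browser before pasting."""
    return f"GAP.MOD[{{{model}}}].{item_type}[{{{label}}}].{field}"

# Assumed field path, for illustration only.
tag = gap_variable_string("PROD", "WELL", "PROD1", "SolverResults[0].OilRate")
print(tag)  # GAP.MOD[{PROD}].WELL[{PROD1}].SolverResults[0].OilRate
```

Strings built this way can be pasted into the 'Variable string' field when a required variable is not in the pre-built list.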
Select 'Sep1' and click the red arrow: this will import all the variables corresponding to Sep1.
Repeat this for the three production wells.
Click OK. In the screen below, click 'Plot invert selection' in order for a tick to appear in the 'Add
to plot' column for all variables. This ensures that all variables are accessible from the plotting
window.
Go to the 'Gas_Injection' tab and do the same for the gas injection model. Output all variables for
the injection wells and manifold (IM1) and proceed to the next step.
Make sure that there are no screens left open in GAP as this can interfere with the remote
operation of GAP by RESOLVE.
Go to Step 9
3.4.5.1.10 Step 9 - Run the Forecast
Step 9 Objective:
Run the prediction forecast
To run the forecast from beginning to end without stopping, press the icon.
Note that the run can be paused or stopped with other toolbar icons.
Once the forecast has been started, Nexus will perform an equilibration calculation.
The equilibrated reservoir data will be passed to GAP in the form of well IPR curves (i.e. for both
producers and injectors). GAP will use this data to solve and optimise the system. The solution
points will then be returned to Nexus ready to take the first month's timestep. Before this, the
RESOLVE forecast enters "pause" mode.
To run the rest of the forecast without stopping, press the icon. Note that the run can be
paused or stopped with other toolbar icons.
Go to Step 10
3.4.5.1.11 Step 10 - Analyse the Results
Step 10 Objective:
Analysing the Results
RESOLVE has a set of results - the amount of results reported in the RESOLVE model
is a function of the variables that have been published by the user prior to the run itself.
Refer to step 8 for further details.
It is also possible to view all the results of a simulation for any of the client applications
in the application itself.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon.
These results can also be displayed as the run is proceeding.
For this specific case, we want to first analyse the oil that has been produced for the entire
production system as well as for each individual well.
All the nodes of the RESOLVE model are listed in the left hand side of the screen.
3.4.5.2 Example 2.5.2: GAP - Nexus Compositional
1. Example Introduction
In this example, GAP and Nexus have the same number of components and both consider a full
composition. If the objective is to achieve integration between a GAP model and a Nexus model
having a different number of components (typically a reduced or lumped composition in the
reservoir simulator and a full composition in GAP), please refer to the Lumping/Delumping example.
A condensate field is to be modelled. This field is being produced for its condensate production,
and there are no export facilities or market for the gas. All the produced gas must therefore be
re-injected in the reservoir and the production is currently limited by the ability of the surface
facilities to compress and re-inject the gas.
A Nexus model of the reservoir is available, along with a GAP model of the surface network
(including the compressors) and a fully characterised equation of state of the reservoir fluid. The
field has 5 producers and 3 injectors.
The fluid is a condensate with a single-stage flash CGR of 97.23 STB/MMscf, API of 29 and the
following phase envelope.
The GAP injection and production models are as follows. The condensate is separated from the
gas at the defined separator pressure (the temperature of separation is calculated by the
network). The condensate is sent to the separator 'Oil' and the gas to be re-injected sent to the
separator 'Reinjection gas'. The gas injection network includes a compressor which models the
gas handling facility.
When performing integration between compositional models, the following should be noted:
At the beginning of every time step, the IPR is passed from the reservoir simulator to GAP in
the form of a table of phase rates vs BHP. This is identical to the Black Oil case.
The IPR table contains phase rates at standard conditions, therefore it is important to ensure
that the separator train is consistent between the reservoir simulator and GAP. If this is not the
case, this will lead to mass inconsistencies between the models.
The composition of the produced fluid is passed from the reservoir simulator to GAP at every
time step. Similarly the composition of the re-injected gas is passed from GAP to the
reservoir simulator.
Only mole percentages are passed between applications, therefore it is important to ensure
that the component properties are consistent between applications.
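To picture what is exchanged at each timestep, the IPR can be treated as a simple table of rate against BHP on which GAP locates its operating point. The numbers below are made up for illustration, and linear interpolation is a sketch of the idea rather than the solver's actual method.

```python
from bisect import bisect_left

# Hypothetical IPR table passed from the reservoir simulator:
# BHP (psig) against gas rate (MMscf/d) at standard conditions.
bhp = [4000.0, 3500.0, 3000.0, 2500.0, 2000.0]   # descending BHP
gas = [0.0, 12.0, 22.0, 30.0, 36.0]              # corresponding rates

def rate_at_bhp(p):
    """Linearly interpolate the gas rate at a given BHP,
    clamping to the table ends outside its range."""
    asc_bhp = list(reversed(bhp))   # ascending order for bisect
    asc_gas = list(reversed(gas))
    i = bisect_left(asc_bhp, p)
    if i == 0:
        return asc_gas[0]
    if i == len(asc_bhp):
        return asc_gas[-1]
    p0, p1 = asc_bhp[i - 1], asc_bhp[i]
    q0, q1 = asc_gas[i - 1], asc_gas[i]
    return q0 + (q1 - q0) * (p - p0) / (p1 - p0)

print(rate_at_bhp(3250.0))  # halfway between 22 and 12 -> 17.0
```

Because these rates are at standard conditions, mismatched separator trains between the simulator and GAP would shift the table and introduce the mass inconsistencies noted above.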
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both Nexus and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\NEXUS
\Example_2_5_2-GAP_Nexus_Compositional
This folder contains a file "NEXUS Full Composition.rsa" which is a "RESOLVE archive file"
that contains the RESOLVE file, the Nexus model, GAP file and other associated files required to
go through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.4.5.2.1 Step 1: Create new file
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Access the Controls/EOS and change the range of validity for the Volume Shift, including
negative values:
Now is a good time to save the file using File | Save As..., and enter a file name (e.g.
GAP_Nexus_compositional.rsl).
Go to Step 2
3.4.5.2.2 Step 2: Add an instance of Nexus
The include files of the Nexus model are located in a zip file 'FullComposition.zip' which is
included in the example archive. Unzip this file, such that all the include files are located inside a
folder called 'nexus_data' which should be placed in the same folder as 'FullComposition.fcs'.
Note:
For Nexus to be controlled by RESOLVE, the following keyword should be placed in the
xxx_surface.dat file. This has already been done for the files provided.
The next step is to create a Nexus instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to Edit System| Add Client Program or select the icon.
From the displayed menu list, select "Nexus". Click on the main screen where the Nexus icon
is to be located, and give the case a label (say, "Nexus").
In the File name field browse to the Nexus *.fcs file 'FullComposition.fcs', and enter a name for the
Study file that will be created.
After that, clicking on OK will return to the main screen and open up the Nexus model:
Note: one can use the Move tool to move the wells in the screen.
The Nexus reservoir model has 11 wells overall:
- Wells PR1 to PR8 are producers
- Wells INJ1 to INJ3 are gas injectors
Once the Nexus deck is loaded, select 'Drainage Region (Petex)' as the IPR model (ref. IPR
Model topic).
Then select 'Calculate' to perform the initial scaling calculation. The program will calculate
parameters to correct the block IPR to determine a more representative drainage region IPR.
After the calculation is finished, select OK to go back to the main program panel.
Go to Step 3.
3.4.5.2.3 Step 3: Add instances of GAP
The next step is to create the GAP instance for the production network.
From the main menu, go to Edit System | Add Client program or select the icon. From
the resulting menu, select "GAP". Click on the main screen where the GAP icon is to be positioned, and
give the case a label (say, "Production").
For the file name, browse to the file "Production.gap" as shown above.
Also select the option to "Always save forecast snapshots" under snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is pressed, GAP will start and load the required case. It will then query the case for its
sources and sinks (wells) and will display these on the screen as shown below.
The next step is to create the GAP instance for the injection network. From the main menu, go
to Edit System | Add Client program or select the icon. From the resulting menu, select
"GAP". Click on the main screen where the GAP icon is to be positioned, and give the case a label (say,
"Injection").
For the file name, browse to the file "Production.gap" as shown above. This is the main
production model. As this network model is an associated gas injection network model,
select "Associated Gas Injection" as the System.
Also select the option to "Always save forecast snapshots" under snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes. When OK is pressed, GAP will start and load the required case.
Go to Step 4
3.4.5.2.4 Step 4: Make the connections
The next step is to connect the sources and sinks of the different applications. To connect the
systems, go to "link" mode by pressing the icon and link the different items by drag and drop.
It is required to connect:
The production wells of Nexus and the production wells of GAP
The injection wells of Nexus and the injection wells of GAP
The 'Reinjection Gas' separator of the production system to the 'IM1' manifold of the injection
system.
When a well of GAP is connected to a well of Nexus, IPR data and compositional data is
passed at every time step. When the 'Reinjection Gas' separator is connected to the 'IM1'
injection manifold, the following data is passed:
pressure and temperature
composition
the gas rate at the 'Reinjection Gas' separator is passed as a maximum gas rate constraint
on the 'IM1' manifold. This ensures that the injection system does not inject more gas than the
amount produced.
Go to Step 5.
3.4.5.2.5 Step 5: Import application variables
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to publish additional
variables for reporting and to be able to set up a controlling logic.
It is required to publish:
The oil rate from separator 'Oil'
The gas rate from separator 'Reinjection Gas'
The maximum gas rate constraint from separator 'Reinjection Gas'
The gas rate from injection manifold 'IM1'
The maximum gas rate constraint from injection manifold 'IM1'
From the menu, enter Variables | Import application variables. Import the variables listed
above for the production and the injection systems by selecting the corresponding tab and
clicking Edit variables.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Go to Step 6.
3.4.5.2.6 Step 6: Set up the feedback loop
This can be implemented by taking the following steps. Create a direct link from the injection
system to the production system using the icon.
Double click on the link, and pass the injection manifold gas rate to the 'Reinjection Gas'
separator maximum gas rate.
Create the following Pre-Solve workflow, using an Assignment element. The objective of this
workflow is to reset the constraint on the production system at the beginning of every time step.
The final step to implement the feedback loop is to configure the loop, by entering the Run |
Edit Loop menu. Define the following fluid connection convergence item with a convergence of
1, and enter the 'Maximum number of iterations' as 2. RESOLVE will consider the loop
converged if the produced gas and the re-injected gas are within 1 MMscf/d of each other, or if
the maximum number of iterations is reached.
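The loop logic can be sketched in plain Python. The two solver callables are hypothetical stand-ins for the GAP production and injection solves; only the loop structure (constraint feedback, 1 MMscf/d tolerance, 2-iteration cap) mirrors the configuration described above.

```python
def run_feedback_loop(solve_production, solve_injection,
                      tolerance=1.0, max_iterations=2):
    """Sketch of RESOLVE's fluid-connection convergence loop.

    solve_production(max_inj_gas) -> produced gas rate (MMscf/d)
    solve_injection(produced_gas) -> re-injected gas rate (MMscf/d)
    Both callables are hypothetical stand-ins for the GAP solves.
    """
    max_inj_gas = None  # no constraint on the first pass
    for iteration in range(1, max_iterations + 1):
        produced = solve_production(max_inj_gas)
        injected = solve_injection(produced)
        # Converged when produced and re-injected gas agree within tolerance
        if abs(produced - injected) <= tolerance:
            return produced, injected, iteration
        # Otherwise feed the injection capacity back as a production constraint
        max_inj_gas = injected
    return produced, injected, max_iterations
```

With a production system capable of 120 MMscf/d and an injection system capped at 100 MMscf/d, the loop converges on the second pass once the injection capacity has been fed back as a production constraint.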
Go to Step 7.
3.4.5.2.7 Step 7: Enter the schedule
To set up the RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
For the purposes of this example, we will be making use of the basic scheduling only.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates. In this case the start
date is 01/09/2009.
The timestep and schedule duration are also entered here as shown (1 month). All the linked
application models will be synchronised every month until the schedule completes on
01/01/2014.
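As a sanity check on the schedule arithmetic, the monthly synchronisation dates implied by the basic schedule can be enumerated in plain Python (a 1-month timestep assumed, as entered above; this is independent of RESOLVE):

```python
from datetime import date

def monthly_schedule(start, end):
    """List the monthly synchronisation dates of the basic schedule,
    inclusive of both the start and the end date (1-month timestep)."""
    dates, y, m = [], start.year, start.month
    while date(y, m, start.day) <= end:
        dates.append(date(y, m, start.day))
        m += 1
        if m > 12:
            m, y = 1, y + 1
    return dates

# 01/09/2009 to 01/01/2014 gives 53 synchronisation dates (52 steps)
steps = monthly_schedule(date(2009, 9, 1), date(2014, 1, 1))
```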
Run the forecast from beginning to end without stopping by pressing the icon. Note that the
run can be paused or stopped with other toolbar icons.
The first action is the initialisation of both modules. In the case of Nexus, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications. Map the components by
clicking the 'Add All' button, and do this for all the wells.
At the end of the run, the following production profile and cumulative oil production is obtained.
It is possible to verify that the injection system has been able to re-inject all the produced gas.
At the reservoir level, the injected gas can be clearly seen by looking at the fluid CGR: the
injected gas corresponds to the low CGR regions. In this example, the produced gas rate
increases slightly during the run, and the oil rate decreases. The decrease of the oil rate is due
to a decrease of the producing CGR, which is due to the reservoir depletion and the
breakthrough of the low CGR injected gas. The image below shows the gas phase CGR which
clearly illustrates the injected gas, and the condensate dropping out in the zone of wells PROD7
and PROD8 (which has no gas re-injection).
This concludes the compositional integration example. The next example looks at lumping the
reservoir composition into a smaller composition to speed up the reservoir calculations. It
introduces the Lumping/Delumping technique in RESOLVE to perform integration between
applications which have different requirements as to the number of components used.
This example builds on Example 2.5.2, in which integration was performed between a
compositional Nexus model and a compositional GAP model having the same number of
components (15 components), and it is recommended that the user completes this example
first. This was done in the context of a condensate field with gas recycling. However, as detailed
in the Lumping/Delumping section, different applications have different requirements regarding
the number of components used. Generally a reservoir simulator requires a reduced
composition to avoid excessive run times, while the surface network requires a detailed
composition if the objective is to perform temperature prediction and flow assurance
calculations.
Each module uses the PVT modelling approach best suited to it:
The Nexus reservoir model uses a grouped (7 pseudo components) fully compositional PVT
description
The GAP surface network model uses an extended fully compositional (15 components) PVT
description.
Before completing this example, it may be preferable to complete Example 2.5.2 as this
example builds on it. The field considered is a condensate field which is being produced for its
condensate production. All the produced gas needs to be re-injected, and the production is
constrained by the capacity of surface facilities to re-inject the gas.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both Nexus and GAP are registered.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\NEXUS
\Example_2_5_3-GAP_Nexus_Compositional_Lumping_Delumping
This folder contains a file "NEXUS Lumping Delumping Start.rsa" which is a "RESOLVE
archive file" that contains the RESOLVE file, Nexus file, GAP file and other associated files
required to go through the example. The archive file needs to be extracted either in the current
location or a location of the user's choice.
Go to Step 1
3.4.5.3.1 Step 1: Open the RESOLVE model
Open the RESOLVE model provided in the archive, named 'LumpingDelumping.rsl'. This
contains the model built in Example 2.5.2. It includes:
The GAP production and injection network
The Nexus model
The feedback loop and Pre-Solve workflow required to ensure that the produced gas can be
re-injected.
Currently the model is set up using a full composition in the surface network and a lumped
composition in the reservoir simulator. In the next steps, the equivalent lumped composition is
built, and the RESOLVE model set up to perform lumping/delumping.
The include files of the Nexus model are located in a zip file 'LumpedComposition.zip' which
is included in the example archive. Unzip this file, such that all the include files are located inside
a folder called 'nexus_data' which should be placed in the same folder as
'LumpedComposition.fcs'.
Note:
For Nexus to be controlled by RESOLVE, the following keyword should be placed in the
xxx_surface.dat file. This has already been done for the files provided.
3.4.5.3.2 Step 2: Create the lumped composition
Quality check that the two compositions are consistent by running PVT experiments such as
CCE, CVD etc.
Open 'FullComposition.pvi' in PVTp. This contains the full EOS as used in Example 2.5.2.
The screen for creating the lumped composition is accessed via Data | Lumping/Delumping
for IPM.
The 'Lumping Method' is by default set to 'Manual Lumping': this allows the user to manually
create the lumping rule and to choose how to lump the components together. Click on Lump
Stream.
Select the components that will be part of each lump on the bottom right-hand side of the table,
then click Add Lump. As a rule of thumb, components with similar molecular weights can be lumped
together. In any case, finding the best way of lumping is a trial and error process, based on
having a final lumped EOS as close to the original EOS as possible. Create the lumps shown
below, and click on Lump.
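The arithmetic behind a lumping rule can be illustrated with a short, standalone Python sketch: each lump's mole fraction is the sum of its members' fractions, and a mole-fraction-weighted average gives an indicative lump molecular weight. The component names and values here are illustrative only; PVTp additionally derives the EOS properties of the pseudo-components.

```python
def lump_composition(z, mw, rule):
    """Lump a detailed composition into pseudo-components.

    z    : {component: mole fraction} of the full composition
    mw   : {component: molecular weight}
    rule : {lump name: [member components]} - the lumping rule
    Returns ({lump: mole fraction}, {lump: mole-weighted MW}).
    This mirrors the mole-balance arithmetic only; the EOS
    properties of the lumps are handled separately in PVTp.
    """
    z_lump, mw_lump = {}, {}
    for lump, members in rule.items():
        zt = sum(z[c] for c in members)           # lump mole fraction
        z_lump[lump] = zt
        # mole-fraction-weighted molecular weight of the lump
        mw_lump[lump] = sum(z[c] * mw[c] for c in members) / zt
    return z_lump, mw_lump
```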
The program will ask whether or not to hold single components during lumping. If selected, this
keeps the molar fraction of single components constant through lumping. Select the
pseudo C17::C20 and click OK.
Click OK to create a lumping rule: this will be required by RESOLVE to perform Lumping/
Delumping.
The quality of the lumped composition created can be verified by calculating the phase
envelopes or simulating experiments such as a CVD, as shown below.
In the menu Data | Lumping/Delumping for IPM, select 'Export .prp', to export the full and the
lumped composition together in a single file. When prompted, click OK to export the lumping
rule as well. Save this as 'Full and Lumped.prp'.
Create a stream containing only the lumped composition by clicking on 'To Stream' then on
'Clear Lumping'.
This results in a new stream, 'full_LUMP', which contains only the lumped composition.
N.B:
The Nexus data deck provided has already been set up to use the lumped composition created.
If this had not been the case, PVTp can be used to generate the EOS include file in Nexus/VIP
format, via File | Export.
3.4.5.3.3 Step 3: Import the lumping rule in RESOLVE
RESOLVE performs the lumping/delumping calculation during the run to map the full
composition to the lumped one. The full and the lumped compositions, along with the lumping
rule, now need to be imported into RESOLVE.
In the lumping/delumping setup window, a pair of EOS (full and lumped) is defined for each pair
of connected applications which requires lumping or delumping. In this example, we need to perform
delumping from the reservoir to the production network, and lumping from the gas injection
network to the reservoir.
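Conceptually, delumping apportions each lumped mole fraction back among the detailed components. The proportional split below is a simplified, hypothetical illustration; RESOLVE's actual delumping uses the imported lumping rule together with the EOS pair.

```python
def delump_composition(z_lump, z_full_ref, rule):
    """Delump a lumped composition back to the detailed components.

    z_lump     : {lump: mole fraction} from the reservoir simulator
    z_full_ref : {component: mole fraction} reference full composition,
                 used to apportion each lump among its members
    rule       : {lump: [member components]} - the lumping rule
    A simple proportional split: each member keeps its reference
    share within the lump. Treat this purely as an illustration.
    """
    z_full = {}
    for lump, members in rule.items():
        ref_total = sum(z_full_ref[c] for c in members)
        for c in members:
            # member's share of the lump, scaled by the lumped fraction
            z_full[c] = z_lump[lump] * z_full_ref[c] / ref_total
    return z_full
```

Note that the total moles are conserved: the delumped fractions of a lump's members always sum back to the lumped fraction.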
In the 'Nexus-Production' tab, click on the red 'Setup' button and import 'Full and
Lumped.prp' created in Step 2. Perform the same operation in the 'Nexus - Gas_Injection'
tab.
Run the forecast from beginning to end. Do this by pressing the icon.
The first action is the initialisation of both modules. In the case of Nexus, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached).
When the model is run, the following screen will appear. The purpose of this screen is to define
the mapping between the components of the different applications.
Between the production network and the gas injection network, no lumping and delumping is
required and the components can be mapped directly, by selecting them from the lists and
clicking 'Add individual connection' for each component.
From the gas injection network to the reservoir, lumping is required. Select the connection 'Inj1-
>Inj1', and under 'Resolve lumping/delumping' select 'External Lumping'. The lumped
components can then be mapped. Repeat this for the other two injection wells.
From the reservoir to the production network, delumping is required. Select the connection
'PR1->PR1', and under 'Resolve lumping/delumping' select 'External delumping'. The
components can then be mapped. Repeat this for the other seven production wells.
Once the run is finished, the results can be analysed. The RESOLVE file contains saved results
from Example 2.5.2 (obtained with a full composition throughout), which can be compared with
the lumping/delumping approach followed here. The following plot compares the oil production
profile for the two cases. The results are very close, in particular considering the complexity of
the problem, with delumping from the reservoir to the production network, lumping from the gas
injection network to the reservoir and condensate dropout within the reservoir.
The following plot compares the producing CGR for well PR7, whose zone is produced by
depletion only. The decrease in CGR for this well is only due to condensate dropout within the
reservoir (no gas re-injection), and this shows that the lumped EOS is able to accurately capture
this effect.
Analysis of the run time also shows that using a lumped composition results in a decreased
calculation time for the reservoir simulator.
Therefore this example demonstrates that the objectives of the lumping/delumping methodology
are achieved:
Perform integration between applications having a different PVT description
Ensure that the results are consistent compared to each application using the full EOS
composition.
3.4.6 Echelon
3.4.6.1 Example 2.6.1: GAP-Echelon Connection
3.4.6.1.1 Overview
1. Example Introduction
This exercise demonstrates how to set up a connection between a GAP production network, a
water injection network and a reservoir simulation model built in Echelon.
The field being modelled consists of 3 producer wells and 3 water injector wells, with the intention
being to determine the production over the course of a 5 year prediction. The porosity map of
the reservoir with the wells is shown below.
The first objective is to couple the GAP and Echelon models, and the second to run the model
and determine the production and injection behaviour.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for Echelon and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will display a message confirming the number of drivers that have
been registered.
Echelon is a GPU-accelerated reservoir simulator. To run it you need the following installed:
CUDA capable NVIDIA GPU, NVIDIA GPU Toolkit and driver v10.0 or above. Therefore,
before Echelon is used with RESOLVE you should make sure that all the prerequisites are met
and the simulator can be run as a standalone program. Further information on the Echelon
installation process can be found in the Installation Guide.
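As a rough standalone check of these prerequisites before attempting an Echelon run, something like the following can be used. It is an assumption that the `nvidia-smi` utility (installed with the NVIDIA driver) is the most convenient probe; it does not verify the CUDA Toolkit version, so treat a passing result only as a first sanity check.

```python
import shutil
import subprocess

def check_echelon_prerequisites():
    """Rough check that an NVIDIA GPU and driver are visible.

    Relies on the nvidia-smi utility shipped with the NVIDIA driver;
    it does NOT verify the CUDA Toolkit version, so a passing result
    is only a first sanity check before launching Echelon standalone.
    Returns (ok, message).
    """
    if shutil.which("nvidia-smi") is None:
        return False, "nvidia-smi not found - NVIDIA driver missing?"
    result = subprocess.run(["nvidia-smi", "-L"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return False, result.stderr.strip()
    return True, result.stdout.strip()  # one line per detected GPU
```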
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\Echelon
\Example_2_6_1-GAP_Echelon
This folder contains a file "GAP_Echelon.rsa" which is a "RESOLVE archive file" that contains
the RESOLVE file, Echelon file, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
Go to Step 1.
3.4.6.1.2 Step 1 - Initialise model
Step 1 Objective: Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
Echelon.rsl).
The next step is to create an Echelon instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to System | Create instance or select the icon.
For the file name, browse to the file "Echelon.data". Please make sure that the Echelon data
deck contains the INTERFAC keyword within the SCHEDULE section.
Note that from this screen it is possible to select a remote host on which Echelon can be run.
This is especially useful in cases where several reservoir models have to be run.
Next click on "Start", Echelon will start and load the required case. It will then query the case for
its sources and sinks (wells) and will display these on the screen as shown below. The icons
can be moved by selecting the "Move" icon on the toolbar ( ) and then dragging them to the
required positions.
The type of the well (which is obtained from the query of Echelon) can be found by double-
clicking on the separate icons.
3.4.6.1.4 Step 3 - Create GAP production instance
Step 3 Objective:
Create the GAP production instance in the RESOLVE model
Repeat the previous step to create an instance of GAP on the RESOLVE main screen.
Label the created GAP case "PROD" and browse for the model "Oil Field.gap".
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
Under the snapshot mode section, select "Always save forecast snapshots". This saves a
snapshot of each prediction timestep in GAP, allowing the user to reload a copy of each
snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Also note that the “Rule based solver” box has been ticked for GAP. This uses simple
engineering rules to meet constraints in the GAP network, and is significantly faster than a full
optimisation (although it may produce slightly less oil/gas). More information on this can be
found in section 2.10.5.2 of the GAP User Manual.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP file contains production and water injection models. In the production model, three
production wells, PROD1, PROD2 and PROD3 will be found. These are the same wells
identified from the Echelon case. One can look at the GAP interface (i.e. the GAP model will be
open on the windows taskbar) to confirm the contents of the GAP file by clicking on Window |
Tile vertically from the main GAP menu.
Connect the PROD1 icon in Echelon to PROD1 in GAP by clicking on the first icon and
dragging the connection to the second. Repeat this for the other producer wells.
Note that it is possible to make the connections using the "Connection wizard".
This is obtained by invoking Edit System | Connection Wizard under the main menu.
3.4.6.1.6 Step 5 - Create GAP water injection instance
Step 5 Objective:
Load the GAP Water Injection model and connect the wells to their counterparts in
Echelon
The original GAP file that was loaded earlier contains an associated water injection system;
thus it is possible to create an instance of GAP for the water injection system in the RESOLVE
model.
For the filename, enter the same GAP production system model. This is because the
injection system is associated (i.e. linked) with the production system in one GAP file.
Consequently, the "associated water injection" option has to be selected from the drop-down
menu for the system.
An advantage of this is that it allows the user to model the production and injection systems of
GAP simultaneously using only a single GAP license.
Press OK and the injection system wells (and an injection manifold) will be displayed.
After this, connect the GAP and Echelon icons together appropriately.
Before the simulation is run, some further changes can be made to the configuration of the
Echelon link.
IPRs that are passed from the reservoir model to the surface network model, and which
determine part of the well performance, are calculated.
A detailed description of the different techniques used to determine these IPRs, along with their
respective advantages and disadvantages, can be found in the "IPR Generation Options"
section.
To select these options, double-click on the Echelon icon to view the Echelon data entry
screen.
Click on "Calculate" to perform the pre-run calculations required for this method.
The calculation will be performed and the results will be as shown below:
When GAP solves/optimises its system, RESOLVE will return the result as an operating point for
the well on the inflow relation that Echelon passed for that well. Echelon will then have to
control that well with a fixed boundary condition for the duration of the next time step. The user
can select which boundary condition should be used. Here we will use ‘Rate (Dominant
phase)’, meaning GAP will determine the rate control based upon the definition in the GAP
model. Typically this will be done using a single phase. The other options are outlined in section
2.5.12.3.
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2.1.3" section.
Invoke the schedule screen from the main menu using Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
The timestep and schedule duration are also entered here as shown. Enter the data in the
screen shot above.
Here we will synchronise GAP and Echelon every 1 month until the schedule completes on
1/1/2025.
3.4.6.1.9 Step 8 - Publish the Forecast
Step 8 Objective: Publish GAP variables to report in the RESOLVE results section
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to report all the
variables for the wells and separator in the GAP production model as well as the injection wells
and manifold in the GAP injection model.
RESOLVE can automatically build a list of the GAP variables that can be reported directly
through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows the user to import variables from the various applications present in the
model.
Select the 'PROD' tab, and click Edit variables. A list of variables is available to import. These
consist of output variables such as solver results or cumulatives, or input variables such as
constraints or items' masking variables. If a variable is required and is not included in this list, it
is always possible to Copy and Paste the corresponding OpenServer string in the 'Variable
string' field.
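OpenServer variable strings follow a tag-path pattern. The helper below sketches one plausible layout for a GAP separator solver result; the exact tag names here are assumptions for illustration, and the authoritative string for any given field should be taken from the GAP interface or the OpenServer documentation.

```python
def gap_variable_string(model="PROD", equipment="SEP", label="Separator",
                        result="GasRate"):
    """Build an OpenServer-style variable string for a GAP item.

    The tag layout ('GAP.MOD[{PROD}].SEP[{Separator}]...') follows the
    usual GAP OpenServer pattern, but it is an illustrative assumption:
    copy the real string for a field from the GAP interface or the
    OpenServer documentation rather than from this sketch.
    """
    return (f"GAP.MOD[{{{model}}}].{equipment}"
            f"[{{{label}}}].SolverResults[0].{result}")
```

The resulting string (e.g. `GAP.MOD[{PROD}].SEP[{Separator}].SolverResults[0].GasRate`) is what would be pasted into the 'Variable string' field.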
Select 'Separator' and click the red arrow: this will import all the variables corresponding to
Separator. Repeat this for the three production wells.
Click OK. In the screen below, click 'Plot invert selection' in order for a tick to appear in the 'Add
to plot' column for all variables. This ensures that all variables are accessible from the plotting
window.
Go to the 'WATINJ' tab and do the same for the water injection model. Output all variables for
the injection wells and manifold (IM1) and proceed to the next step.
Make sure that there are no screens left open in GAP as this can interfere with the remote
operation of GAP by RESOLVE.
NOTE: Echelon results can be viewed directly in RESOLVE without having to publish
individual result variables.
3.4.6.1.10 Step 9 - Run the Forecast
Step 9 Objective: Run the forecast
Once the forecast has been started, Echelon will perform an equilibration calculation.
The equilibrated reservoir data will be passed to GAP in the form of well IPR curves (i.e. for both
producers and injectors). GAP will use this data to solve and optimise the system. The solution
points will then be returned to Echelon in order to take the first month's timestep. Before this,
the RESOLVE forecast enters "pause" mode.
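The per-timestep exchange described above can be sketched as follows. `reservoir` and `network` are hypothetical stand-ins for the Echelon and GAP drivers; only the order of operations (IPR generation, network solve, fixed-rate advance) mirrors the text.

```python
def run_coupled_forecast(reservoir, network, n_timesteps):
    """Sketch of the per-timestep exchange RESOLVE orchestrates.

    reservoir.get_ipr(), network.solve(iprs) and
    reservoir.advance(controls) are hypothetical stand-ins for the
    driver calls; the sequencing is what matters here.
    """
    history = []
    for step in range(n_timesteps):
        iprs = reservoir.get_ipr()       # IPR curves for every well
        controls = network.solve(iprs)   # GAP solves/optimises the system
        reservoir.advance(controls)      # fixed boundary conditions for
        history.append(controls)         # the duration of one timestep
    return history
```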
To run the rest of the forecast without stopping, press the icon. Note that the run can be
paused or stopped with other toolbar icons.
RESOLVE has a set of results - the amount of results reported in the RESOLVE model
is a function of the variables that have been published by the user prior to the run itself.
Refer to step 8 for further details.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon.
These results can also be displayed as the run is proceeding.
For this specific case, we want to first analyse the oil that has been produced for the entire
production system as well as for each individual well.
All the nodes of the RESOLVE model are listed in the left hand side of the screen.
3.4.7 RN-KIM
3.4.7.1 Example 2.7.1: GAP-RN-KIM Connection
3.4.7.1.1 Overview
1. Example Introduction
This exercise demonstrates how to set up a connection between a GAP production network, a
gas injection network and a reservoir simulation model built in RN-KIM.
The field being modeled consists of 4 producer wells and 4 gas injector wells, with the intention
being to determine the production over the course of a 5 year prediction.
The first objective is to couple the GAP and RN-KIM models, and the second to run the model
and determine the production and injection behaviour.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for RN-KIM and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will display a message confirming the number of drivers that have
been registered.
Before RN-KIM is used with RESOLVE you should make sure that all the prerequisites are met
and the simulator can be run as a standalone program.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\RN-KIM
\Example_2_7_1-GAP_RN-KIM
This folder contains a file "RN_KIM.rsa" which is a "RESOLVE archive file" that contains the
RESOLVE file, RN-KIM file, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
3.4.7.1.2 Step 1 - Initialise model
Step 1 Objective: Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-RN-
KIM.rsl).
3.4.7.1.3 Step 2 - Create RN-KIM instance
Step 2 Objective: Create an RN-KIM instance in the RESOLVE model
The next step is to create an RN-KIM instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to System | Create instance or select the icon.
For the file name, browse to the file "RN-KIM.data". This data deck requires the following PYTHON entry:
PYTHON
INTEGRATOR_SSTEP_BEGIN 'N:\solver\test_scripts\DriverTests\Prod-Inj
\rn_kim_petex_sync.py' /
/
Next click on "Start", RN-KIM will start and load the required case. It will then query the case for
its sources and sinks (wells) and will display these on the screen as shown below. The icons
can be moved by selecting the "Move" icon on the toolbar ( ) and then dragging them to the
required positions.
The type of the well (which is obtained from the query of RN-KIM) can be found by double-
clicking on the separate icons.
3.4.7.1.4 Step 3 - Create GAP production instance
Step 3 Objective: Create the GAP production instance in the RESOLVE model
Repeat the previous step to create an instance of GAP on the RESOLVE main screen.
Label the created GAP case "Production" and browse for the model "Production.gap".
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
Under the snapshot mode section, select "Always save forecast snapshots". This saves a
snapshot of each prediction time step in GAP, allowing the user to reload a copy of each
snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Also note that the “Rule based solver” box has been ticked for GAP. This uses simple
engineering rules to meet constraints in the GAP network, and is significantly faster than a full
optimisation (although it may produce slightly less oil/gas). More information on this can be
found in section 2.10.5.2 of the GAP User Manual.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP file contains production and water injection models. In the production model, four
production wells, Well1, Well2, Well3 and Well4 will be found. These are the same wells
identified from the RN-KIM case. One can look at the GAP interface (i.e. the GAP model will be
open on the windows taskbar) to confirm the contents of the GAP file by clicking on Window |
Tile vertically from the main GAP menu.
Connect the PROD1 icon in RN-KIM to WELL1 in GAP by clicking on the first icon and
dragging the connection to the second. Repeat this for the other producer wells.
Note that it is possible to make the connections using the "Connection wizard".
This is obtained by invoking Edit System | Connection Wizard under the main menu.
The original GAP file that was loaded earlier contains an associated gas injection system; thus
it is possible to create an instance of GAP for the gas injection system in the RESOLVE model.
For the file name, enter the same GAP production system model. This is because the
injection system is associated (i.e. linked) with the production system in one GAP file.
Consequently, the "associated gas injection" option has to be selected from the drop-down
menu for the system.
An advantage of this is that it allows the user to model the production and injection systems of
GAP simultaneously using only a single GAP license.
Press OK and the injection system wells (and an injection manifold) will be displayed.
After this, connect the GAP and RN-KIM icons together appropriately.
Before the simulation is run, some further changes can be made to the configuration of the RN-
KIM link.
IPRs that are passed from the reservoir model to the surface network model, and which
determine part of the well performance, are calculated.
A detailed description of the different techniques used to determine these IPRs, along with their
respective advantages and disadvantages, can be found in the "IPR Generation Options"
section.
To select these options, double-click on the RN-KIM icon to view the RN-KIM data entry
screen.
Click on "Calculate" to perform the pre-run calculations required for this method.
The calculation will be performed and the results will be as shown below:
When GAP solves/optimises its system, RESOLVE will return the result as an operating point for
the well on the inflow relation that RN-KIM passed for that well. RN-KIM will then have to control
that well with a fixed boundary condition for the duration of the next time step. The user can
select which boundary condition should be used. Here we will use ‘Rate (Dominant phase)’,
meaning GAP will determine the rate control based upon the definition in the GAP model.
Typically this will be done using a single phase. The other options are outlined in section
2.5.13.3.
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2.1.3" section.
Invoke the schedule screen from the main menu using Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button.
This will display a screen allowing the user to select the required start date from a list of the various model start dates.
The time step and schedule duration are also entered here as shown. Enter the data in the
screen shot above.
Here we will synchronise GAP and RN-KIM every 1 month until the schedule completes on
1/1/2025.
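The monthly synchronisation described above can be sketched as follows. The start date used here (1/1/2020) is a hypothetical stand-in, since the actual start date is taken from the client modules; the end date is the example's 1/1/2025:

```python
from datetime import date

def monthly_schedule(start, end):
    """Generate synchronisation dates one calendar month apart,
    inclusive of both endpoints (illustrative only; RESOLVE builds
    this schedule internally from the Forecast data screen)."""
    dates, y, m = [], start.year, start.month
    current = start
    while current <= end:
        dates.append(current)
        m += 1
        if m > 12:
            m, y = 1, y + 1
        current = date(y, m, start.day)
    return dates

# Hypothetical start date; the schedule completes on 1/1/2025
steps = monthly_schedule(date(2020, 1, 1), date(2025, 1, 1))
```

With a first-of-month start date, 61 synchronisation points cover the five years inclusively.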
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to report all the
variables for the wells and separator in the GAP production model as well as the injection wells
and manifold in the GAP injection model.
RESOLVE can automatically build a list of the GAP variables that are available and can be reported directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows the user to import variables from the various applications present in the model.
Select the 'Production' tab, and click Edit variables. A list of variables is available to import.
These consist of output variables such as solver results or cumulatives, or input variables such
as constraints or masking variables. If a variable is required and is not included in this list, it is
always possible to Copy and Paste the corresponding OpenServer string in the 'Variable string'
field.
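For orientation, OpenServer-style variable strings are dot-separated paths from the application down to the variable. The strings and the `split_openserver` helper below are illustrative assumptions; the exact string for a given variable should always be copied from the application itself:

```python
# Hypothetical OpenServer-style variable strings (illustrative only):
examples = [
    "GAP.MOD[{PROD}].WELL[{P1}].SolverResults[0].OilRate",
    "GAP.MOD[{PROD}].SEP[{Sep1}].SolverResults[0].Pressure",
]

def split_openserver(var):
    """Split an OpenServer-style string into application name, object
    path and variable name (illustrative parsing, not an official API)."""
    parts = var.split(".")
    return parts[0], parts[1:-1], parts[-1]

app, path, name = split_openserver(examples[0])
```

The first token identifies the application and the last token the variable, which is why a pasted string is enough for RESOLVE to locate the value.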
Select 'Separator' and click the red arrow: this will import all the variables corresponding to
Separator. Repeat this for four production wells.
Click OK. In the screen below, click 'Plot invert selection' in order for a tick to appear in the 'Add
to plot' column for all variables. This ensures that all variables are accessible from the plotting
window.
Go to the "Injection" tab and do the same for the water injection model in GAP. Output all
variables for the injection wells and manifold (IM1) and proceed to the next step.
Make sure that there are no screens left open in GAP as this can interfere with the remote
operation of GAP by RESOLVE.
To run the forecast from beginning to end without stopping, press the icon.
Note that the run can be paused or stopped with other toolbar icons.
Once the forecast has been started, RN-KIM will perform an equilibration calculation.
The equilibrated reservoir data will be passed to GAP in the form of well IPR curves (i.e. for both
producers and injectors). GAP will use this data to solve and optimise the system. The solution
points will then be returned to RN-KIM in order to take the first month's timestep. Before this,
the RESOLVE forecast enters "pause" mode.
To run the rest of the forecast without stopping, press the icon. Note that the run can be
paused or stopped with other toolbar icons.
RESOLVE has a set of results: the amount of results reported in the RESOLVE model
is a function of the variables that have been published by the user prior to the run itself.
Refer to step 8 for further details.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon.
These results can also be displayed as the run is proceeding.
For this specific case, we want to first analyse the oil that has been produced for the entire
production system as well as for each individual well.
All the nodes of the RESOLVE model are listed in the left hand side of the screen.
3.4.8 Intersect
3.4.8.1 Example 2.8.1: GAP-Intersect Connection
3.4.8.1.1 Overview
1. Example Introduction
This exercise demonstrates how to set up a connection between a GAP production network, a water injection network and a reservoir simulation model built in Intersect.
The field being modelled consists of 4 producer wells and 2 water injector wells, with the intention being to determine the production over the course of a 5-year prediction.
The first objective is to couple the GAP and Intersect models, and the second to run the model
and determine this production and injection behavior.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for Intersect and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will display a message confirming the number of drivers that have
been registered.
Before Intersect is used with RESOLVE you should make sure that all the prerequisites are met
and the simulator can be run as a standalone program.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_2-Connection_to_Reservoir_Simulation_Tools\Intersect
\Example_2_8_1-GAP_Intersect
This folder contains a file "GAP_Intersect.rsa" which is a "RESOLVE archive file" that contains
the RESOLVE file, Intersect file, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
3.4.8.1.2 Step 1 - Initialise model
Step 1 Objective: Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
Intersect.rsl).
3.4.8.1.3 Step 2 - Create Intersect instance
Objective: Create an Intersect instance in the RESOLVE model
The next step is to create an Intersect instance in the RESOLVE model and load it in the main
RESOLVE screen.
From the main menu, go to System | Create instance or select the icon.
For the file name, browse to the file "Intersect.afi". To connect any Intersect model with
RESOLVE, the data deck requires the following additional line
EXTENSION "ExternalController"
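A sketch of how this requirement could be checked programmatically before a run. The `ensure_external_controller` helper and the deck content are hypothetical, and the real .afi file should be backed up before any edit:

```python
def ensure_external_controller(deck_text):
    """Append the EXTENSION line required for RESOLVE control if the
    Intersect data deck does not already contain it (a sketch only)."""
    line = 'EXTENSION "ExternalController"'
    if line not in deck_text:
        deck_text = deck_text.rstrip("\n") + "\n" + line + "\n"
    return deck_text

# Hypothetical minimal deck contents
deck = "SIMULATION\n"
patched = ensure_external_controller(deck)
```

Running the helper twice leaves the deck unchanged, so it is safe to apply as a pre-run check.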
Note that from this screen it is possible to select a remote host on which Intersect can be run.
This is especially useful in cases where several reservoir models have to be run: in this case it
will probably be more efficient to run these simulations in parallel.
Next click on "Start", Intersect will start and load the required case. It will then query the case
for its sources and sinks (wells) and will display these on the screen as shown below. The icons
can be moved by selecting the "Move" icon on the toolbar ( ) and then dragging them to the
required positions.
The type of the well (which is obtained from the query of Intersect) can be found by double-
clicking on the separate icons.
3.4.8.1.4 Step 3 - Create GAP production instance
Step 3 Objective: Create the GAP production instance in the RESOLVE model
Repeat the previous step to create an instance of GAP on the RESOLVE main screen.
Label the created GAP case "Production" and browse for the model "Oil Field.gap".
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
Under the snapshot mode section, select "Always save forecast snapshots". This saves a
snapshot of each prediction timestep in GAP, allowing the user to reload a copy of each
snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Also note that the “Rule based solver” box has been ticked for GAP. This uses simple
engineering rules to meet constraints in the GAP network, and is significantly faster than a full
optimisation (although it may produce slightly less oil/gas). More information on this can be
found in section 2.10.5.2 of the GAP User Manual.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP file contains production and water injection models. In the production model, three
production wells, P1, P2 and P3 will be found. These are the same wells identified from the
Intersect case. One can look at the GAP interface (i.e. the GAP model will be open on the
windows taskbar) to confirm the contents of the GAP file by clicking on Window | Tile vertically
from the main GAP menu.
Connect the P1 icon in Intersect to P1 in GAP by clicking on the first icon and dragging the connection to the second. Repeat this for the other producer wells.
Note that it is possible to make the connections using the "Connection wizard".
This is obtained by invoking Edit System | Connection Wizard under the main menu.
3.4.8.1.6 Step 5 - Create GAP water injection instance
Step 5 Objective: Load the GAP water injection model and connect the wells to their
counterparts in Intersect
The original GAP file that was loaded earlier contains an associated water injection system;
thus it is possible to create an instance of GAP for the water injection system in the RESOLVE
model.
For the file name, enter the same GAP production system model. This is because the injection system is associated (i.e. linked) with the production system in one GAP file.
Consequently, the "associated water injection" option has to be selected from the drop-down
menu for the system.
An advantage of this is that it allows the production and injection systems of GAP to be modelled simultaneously using only a single GAP license.
Press OK and the injection system wells (and injection manifold) will be displayed.
After this, connect the GAP and Intersect icons together appropriately.
Before the simulation is run, some further changes can be made to the configuration of the
Intersect link.
A detailed description of the different techniques used to determine these IPRs, along with their
respective advantages and disadvantages, can be found in the "IPR Generation Options"
section.
To select these options, double-click on the Intersect icon to view the data entry screen.
Click on "Calculate" to perform the pre-run calculations required for this method.
The calculation will be performed and the results will be as shown below:
When GAP solves/optimises its system, RESOLVE will return the result as an operating point for
the well on the inflow relation that Intersect passed for that well. Intersect will then have to
control that well with a fixed boundary condition for the duration of the next time step. The user
can select which boundary condition should be used. Here we will use ‘Rate (Dominant
phase)’, meaning GAP will determine the rate control based upon the definition in the GAP
model. Typically this will be done using a single phase. The other options are outlined in section
2.5.7.1.
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2.1.3" section.
Invoke the schedule screen from the main menu using Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button.
This will display a screen allowing the user to select the required start date from a list of the various model start dates.
The time step and schedule duration are also entered here as shown. Enter the data in the
screen shot above.
Here we will synchronise GAP and Intersect every 1 month until the schedule completes on
1/1/2025.
3.4.8.1.9 Step 8 - Publish variables
Step 8 Objective: Publish GAP and Intersect variables to report in the RESOLVE results
section
By default RESOLVE saves a sub-set of the data that is passed at each connection. This is
limited to the pressure and black oil phase rates. In this example, we wish to report all the
variables for the wells and separator in the GAP production model as well as the injection wells
and manifold in the GAP injection model.
RESOLVE can automatically build a list of the GAP variables that are available and can be reported directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows the user to import variables from the various applications present in the model.
Select the 'Production' tab, and click Edit variables. A list of variables is available to import.
These consist of output variables such as solver results or cumulatives, or input variables such
as constraints or masking variables. If a variable is required and is not included in this list, it is
always possible to Copy and Paste the corresponding OpenServer string in the 'Variable string'
field.
Select 'Separator' and click the red arrow: this will import all the variables corresponding to
Separator. Repeat this for three production wells.
Click OK. In the screen below, click 'Plot invert selection' in order for a tick to appear in the 'Add
to plot' column for all variables. This ensures that all variables are accessible from the plotting
window.
Go to the "Injection" tab and do the same for the water injection model in GAP. Output all
variables for the injection wells and injection manifold and proceed to the next step.
Make sure that there are no screens left open in GAP as this can interfere with the remote
operation of GAP by RESOLVE.
To run the forecast from beginning to end without stopping, press the icon.
Note that the run can be paused or stopped with other toolbar icons.
Once the forecast has been started, Intersect will perform an equilibration calculation.
The equilibrated reservoir data will be passed to GAP in the form of well IPR curves (i.e. for both
producers and injectors). GAP will use this data to solve and optimise the system. The solution
points will then be returned to Intersect in order to take the first month's timestep. Before this,
the RESOLVE forecast enters "pause" mode.
To run the rest of the forecast without stopping, press the icon. Note that the run can be
paused or stopped with other toolbar icons.
RESOLVE has a set of results: the amount of results reported in the RESOLVE model
is a function of the variables that have been published by the user prior to the run itself.
Refer to step 8 for further details.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon.
These results can also be displayed as the run is proceeding.
For this specific case, we want to first analyse the oil that has been produced for the entire
production system as well as for each individual well.
All the nodes of the RESOLVE model are listed in the left hand side of the screen.
1. Example Introduction
The objective of this section is to demonstrate how to set up a connection between a GAP production model and a plant simulation model built in UniSim.
We are currently producing an oil field, which is modelled in the GAP model below. This field
has a total of 9 producing wells. The wells have an option for gas lift but currently this is not
being utilised. The reservoir performance is captured using a decline curve.
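As background, a decline curve relates well rate to time. The sketch below uses the common exponential form q(t) = qi · exp(-Di · t) with hypothetical numbers; the form and parameters used in the example file may differ:

```python
import math

def exponential_decline(qi, di, t_years):
    """Exponential decline rate q(t) = qi * exp(-di * t), one common
    decline-curve form (illustrative parameters only)."""
    return qi * math.exp(-di * t_years)

# Hypothetical well: 5000 STB/d initial rate, 15%/year nominal decline
rate_after_2_years = exponential_decline(5000.0, 0.15, 2.0)
```
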
At the separator, the associated gas is sent through a compression train so as to be able to join
an existing gas transportation line. Oil is pumped to a nearby plant for processing. The diagram
below represents the Process facilities.
To be able to join the export line, the gas must be compressed to 1300 psig. The separator
pressure is 150 psig, and at the current operating conditions the field is constrained at 11.525
MMscf/d to honour the export line pressure requirement.
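As a back-of-envelope check on these figures, the overall compression ratio follows from the absolute pressures (psia = psig + 14.7). The three-stage split below is an assumption for illustration only, not a statement about the example's train:

```python
# Overall compression ratio implied by the example's pressures
suction_psia = 150.0 + 14.7      # separator pressure, absolute
discharge_psia = 1300.0 + 14.7   # export line pressure, absolute
overall_ratio = discharge_psia / suction_psia

# A train with equal stage ratios (assumed 3 stages) would need, per stage:
stages = 3
stage_ratio = overall_ratio ** (1 / stages)
```

The overall ratio of roughly 8 explains why a multi-stage train is needed rather than a single compressor.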
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both UniSim and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_3-Connection_to_Process_Modelling_Tools\Example_3_1-
GAP_UniSim
This folder contains a file "GAP UniSim Example.rsa" which is a "RESOLVE archive file" that
contains the RESOLVE file, UniSim file, GAP file and other associated files required to go
through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.5.1.2 Step 1 - Start a new file
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
UniSim.rsl).
The UniSim model now needs to be set up to use Oilfield units. Open the UniSim model
'Process Model.usc', this will be in the location where the example archive was extracted. Go to
Tools | Preferences, then in the Variables tab. Create a clone of the 'Field' unit system,
rename it 'Field_psig' and change the Pressure unit to psig. Then click on 'Save Preferences'
and close UniSim.
Go to Step 2
The next step is to create instances of the various applications that we wish to connect through
RESOLVE. We shall load the instances on the main screen.
From the main menu, go to Edit System | Add Client Programs or select the icon.
From the resulting menu, select "UniSim". Click on the main screen where the UniSim icon is to
be located, and accept the default label ("UniSim").
Double-click on the UniSim icon - the following screen will appear. For the file name, browse to
the file "Process Model.usc" as shown below.
Select the "Transfer upstream composition to UniSim feed stream" option in the "Miscellaneous options" section above. This specifies that the composition at the output of the GAP model will be passed to the input of the UniSim model.
Note that from this screen it is possible to run UniSim on a remote machine. Further information
on this can be found in the Running UniSim on a remote server section.
When OK is selected, UniSim will start and load the required case. It will then query the case for
its sources and sinks (input and output feeds) and will display these on the screen as shown
below.
We see that we have one input feed and several output feeds.
The inputs and outputs displayed here can be cross-checked with the actual UniSim case. The
UniSim interface can be seen at the bottom of the screen with "Process Model.usc" loaded into
it.
Go to Step 3.
Browse for the GAP production model titled "Surface Network.gap". Also select the feature
"Always save forecast snapshots". This saves a snapshot of each prediction timestep in
GAP, allowing the user to reload a copy of each snapshot and analyse the performance at that
date - this can be particularly useful for troubleshooting purposes.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP production model contains nine wells and a separator. It is possible to check the GAP
model (i.e. the GAP model will be open on the Windows taskbar) to confirm the contents of the
GAP file.
Go to Step 4
3.5.1.5 Step 4 - Connect GAP and UniSim
At this stage, the applications can be connected together graphically. To connect the systems,
go to "link" mode by pressing the icon. Connect the separator 'Sep1' from GAP to the 'Inlet'
of UniSim.
Note that it is also possible to make the connections using the "Connection wizard". This is
obtained by invoking Edit System | Connection Wizard under the main menu and is
especially useful when dealing with a large number of connections.
Through this connection, at every time step the following data will be passed from GAP to
UniSim:
Pressure
Temperature
Mass rate
Composition
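The shape of this per-timestep handoff can be pictured with a simple container. The field names, units and values below are hypothetical illustrations of the four quantities listed above:

```python
from dataclasses import dataclass

@dataclass
class StreamData:
    """Hypothetical container for the quantities passed across the
    GAP-to-UniSim link at each time step (illustrative only)."""
    pressure_psig: float
    temperature_degF: float
    mass_rate_lb_hr: float
    composition: dict  # component name -> mole fraction

# Hypothetical separator outlet stream
sep_outlet = StreamData(150.0, 120.0, 250000.0,
                        {"C1": 0.62, "C2": 0.11, "C3": 0.08, "C4+": 0.19})
mole_fraction_sum = sum(sep_outlet.composition.values())
```

A quick sanity check on such a stream is that the mole fractions sum to one.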
Go to Step 5
3.5.1.6 Step 5 - Publish application variables
The next step is to import some variables from the UniSim and GAP models that will be added
to the RESOLVE reporting system, as RESOLVE stores only variables that have been imported.
RESOLVE can automatically build a list of the GAP and UniSim variables that are available and can be reported directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows the user to import variables from the various applications present in the model.
Select the 'GAP' tab, and click Edit variables. A list of variables is available to import. These
consist of output variables such as solver results or cumulatives, or input variables such as
constraints or items' masking variables. If a variable is required and is not included in this list, it
is always possible to Copy and Paste the corresponding OpenServer string in the 'Variable
string' field.
From GAP, import the separator phase rates, separator pressure and temperature and mass
flow rate. Also import the phase cumulatives from the 'Cumulative variables' tab.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
From UniSim, import the inlet pressure, temperature and mass flow rate, as well as the Sales
pressure. This is done in the same way as for GAP.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Go to Step 6.
3.5.1.7 Step 6 - Setup the schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
To setup the basic RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates.
Go to Step 7
3.5.1.8 Step 7 - Run the forecast
The simulation is now ready to be run.
Note that after each timestep, the run will automatically be paused: one can decide to
run another single step by using the icon again, or can decide to run the prediction
forecast until the end by using the icon.
To start, force RESOLVE to do a single timestep. Do this by pressing the "single step" button (
).
The first action done is the initialisation of both modules. In the case of GAP, this means loading
the tank data and initialising it (i.e. running a history if necessary until the start date of the
forecast is reached). In the case of UniSim, no initialisation is necessary.
RESOLVE will also obtain the composition names from GAP and UniSim. As these can be
different (i.e. "C1" in GAP may be called "Methane" in UniSim) it is important to tell RESOLVE
which composition name refers to which.
The list on the left is a composition from GAP; that on the right is a composition from UniSim.
Select the corresponding components, then select Add Individual Connection and repeat for each pair of components, then select OK.
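The matching step amounts to re-keying the GAP composition onto UniSim component names. The name map below is a hypothetical example; the actual pairs are chosen interactively on this screen:

```python
def map_composition(gap_comp, name_map):
    """Re-key a GAP composition onto UniSim component names, mirroring
    the 'Add Individual Connection' matching step (illustrative only)."""
    return {name_map[component]: fraction
            for component, fraction in gap_comp.items()}

# Hypothetical name pairs, e.g. "C1" in GAP is called "Methane" in UniSim
unisim_comp = map_composition(
    {"C1": 0.7, "C2": 0.2, "C3": 0.1},
    {"C1": "Methane", "C2": "Ethane", "C3": "Propane"},
)
```
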
Go to Step 8.
3.5.1.9 Step 8 - Analyse the results
RESOLVE has a set of results: the amount of results reported in the RESOLVE model
is a function of the variables that have been published by the user prior to the run itself.
Refer to step 5 for further details.
It is also possible to view all the results of a simulation for any of the client applications
in the application itself.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon. These results can also be displayed as the run is proceeding.
The passing of data between GAP and UniSim can be verified by plotting the separator
temperature and the UniSim inlet temperature, as well as the mass rates.
The plot below shows the oil rate profile and the delivery pressure. It can be observed that the
delivery pressure remains above the target pressure of 1300 psig for the entire duration of the
forecast.
For the first year of the forecast, the delivery pressure remains close to 1300 psig while the
wells are being choked: this means that the process is limiting the production. For the remaining
period of the forecast, the wells are fully opened and the delivery pressure exceeds 1300 psig:
therefore the production system is limiting the production.
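This diagnosis can be expressed as a simple check on the delivery pressure. The `limiting_system` helper and its tolerance are illustrative assumptions, not part of RESOLVE:

```python
def limiting_system(delivery_psig, target_psig, tol=5.0):
    """Classify which part of the coupled model constrains production:
    delivery pressure pinned at the target implies the process is
    limiting; pressure well above it implies the production system is
    (illustrative logic only)."""
    if abs(delivery_psig - target_psig) <= tol:
        return "process"
    return "production system"

early = limiting_system(1301.0, 1300.0)  # first year: at the target
late = limiting_system(1450.0, 1300.0)   # later years: above the target
```
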
This constitutes an optimisation opportunity and RESOLVE can be used to optimise and
increase the production. This is described in detail in Example 7.2.1.
3.5.2 Example 3.2: GAP - Hysys Connection
3.5.2.1 Overview
1. Example Introduction
The objective of this section is to demonstrate how to set up a connection between a GAP production model and a plant simulation model built in Hysys.
We are currently producing an oil field, which is modelled in the GAP model below. This field
has a total of 9 producing wells. The wells have an option for gas lift but currently this is not
being utilised. The reservoir performance is captured using a decline curve.
At the separator, the associated gas is sent through a compression train so as to be able to join
an existing gas transportation line. Oil is pumped to a nearby plant for processing. The diagram
below represents the Process facilities.
To be able to join the export line, the gas must be compressed to 1300 psig. The separator
pressure is 150 psig, and at the current operating conditions the field is constrained at 11.525
MMscf/d to honour the export line pressure requirement.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both Hysys and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_3-Connection_to_Process_Modelling_Tools\Example_3_2-
GAP_Hysys
This folder contains a file "GAP Hysys Example.rsa" which is a "RESOLVE archive file" that
contains the RESOLVE file, Hysys file, GAP file and other associated files required to go
through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.5.2.2 Step 1 - Start a new file
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
Hysys.rsl).
The Hysys model now needs to be set up to use Oilfield units. Open the Hysys model
'Process Model.hsc', this will be in the location where the example archive was extracted. Go to
Home | Unit Sets. Copy the 'Field' unit system, rename it 'Field_psig' and change the Pressure
unit to psig. Then click on OK, save the model and close Hysys.
Go to Step 2
The next step is to create instances of the various applications that we wish to connect through
RESOLVE. We shall load the instances on the main screen.
From the main menu, go to Edit System | Add Client Programs or select the icon.
From the resulting menu, select "Hysys". Click on the main screen where the Hysys icon is to be
located, and accept the default label ("Hysys").
Double-click on the Hysys icon - the following screen will appear. For the file name, browse to
the file "Process Model.hsc" as shown below.
Select the "Transfer upstream composition to Hysys feed stream" option in the "Miscellaneous options" section above. This specifies that the composition at the output of the GAP model will be passed to the input of the Hysys model.
Note that from this screen it is possible to run Hysys on a remote machine. Further information
on this can be found in the Running Hysys on a remote server section.
When OK is selected, Hysys will start and load the required case. It will then query the case for
its sources and sinks (input and output feeds) and will display these on the screen as shown
below.
We see that we have one input feed and several output feeds.
The inputs and outputs displayed here can be cross-checked with the actual Hysys case. The
Hysys interface can be seen at the bottom of the screen with "Process Model.hsc" loaded into
it.
Go to Step 3.
Browse for the GAP production model titled "Surface Network.gap". Also select the feature
"Always save forecast snapshots". This saves a snapshot of each prediction timestep in
GAP, allowing the user to reload a copy of each snapshot and analyse the performance at that
date - this can be particularly useful for troubleshooting purposes.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP production model contains nine wells and a separator. It is possible to check the GAP
model (i.e. the GAP model will be open on the Windows taskbar) to confirm the contents of the
GAP file.
Go to Step 4
3.5.2.5 Step 4 - Connect GAP and Hysys
At this stage, the applications can be connected together graphically. To connect the systems,
go to "link" mode by pressing the icon. Connect the separator 'Sep1' from GAP to the 'Inlet'
of Hysys.
Note that it is also possible to make the connections using the "Connection wizard". This is
obtained by invoking Edit System | Connection Wizard under the main menu and is
especially useful when dealing with a large number of connections.
Through this connection, at every time step the following data will be passed from GAP to
Hysys:
Pressure
Temperature
Mass rate
Composition
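Conceptually, the data listed above can be pictured as a simple per-timestep record handed from the GAP separator to the Hysys inlet. The sketch below is illustrative only — the field names, units and values are assumptions, not the RESOLVE internal format:

```python
from dataclasses import dataclass

@dataclass
class StreamTransfer:
    """Illustrative payload passed from a GAP separator to a process-model feed."""
    pressure_psig: float           # stream pressure
    temperature_degF: float        # stream temperature
    mass_rate_lb_hr: float         # total mass flow rate
    composition: dict[str, float]  # component name -> mole fraction

# Example values are made up for illustration
feed = StreamTransfer(150.0, 120.0, 250000.0,
                      {"C1": 0.65, "C2": 0.10, "C3+": 0.25})
assert abs(sum(feed.composition.values()) - 1.0) < 1e-9  # fractions must sum to 1
```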
Go to Step 5
3.5.2.6 Step 5 - Publish application variables
The next step is to import some variables from the Hysys and GAP models that will be added
to the RESOLVE reporting system, as RESOLVE stores only variables that have been imported.
RESOLVE can automatically build a list of the GAP and Hysys variables available and that can
be reported directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows the user to import variables from the various applications present in the model.
Select the 'GAP' tab, and click Edit variables. A list of variables is available to import. These
consist of output variables such as solver results or cumulatives, or input variables such as
constraints or items' masking variables. If a variable is required and is not included in this list, it
is always possible to Copy and Paste the corresponding OpenServer string in the 'Variable
string' field.
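OpenServer variable strings follow a dotted path through the model hierarchy. As a purely illustrative sketch (the tag layout shown is an assumption — always copy the exact string from the variable browser), a small helper can assemble such strings consistently:

```python
def gap_tag(node_type: str, label: str, variable: str) -> str:
    """Assemble an OpenServer-style variable string for a GAP model item.

    The "GAP.MOD[{PROD}]" prefix and bracketed-label convention are shown
    for illustration only - copy the exact string from the variable browser
    when in doubt.
    """
    return f"GAP.MOD[{{PROD}}].{node_type}[{{{label}}}].{variable}"

# e.g. a separator solver result (illustrative tag)
tag = gap_tag("SEP", "Sep1", "SolverResults[0].OilRate")
```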
From GAP, import the separator phase rates, separator pressure and temperature and mass
flow rate. Also import the phase cumulatives from the 'Cumulative variables' tab.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
From Hysys, import the inlet pressure, temperature and mass flow rate, as well as the Sales
pressure. This is done in the same way as for GAP.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Go to Step 6.
3.5.2.7 Step 6 - Setup the schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
To set up the basic RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates.
Go to Step 7
3.5.2.8 Step 7 - Run the forecast
The simulation is now ready to be run.
Note that after each timestep, the run will automatically be paused: one can decide to
run another single step by using the icon again, or can decide to run the prediction
forecast until the end by using the icon.
To start, force RESOLVE to do a single timestep. Do this by pressing the "single step" button (
).
The first action done is the initialisation of both modules. In the case of GAP, this means loading
the tank data and initialising it (i.e. running a history if necessary until the start date of the
forecast is reached). In the case of Hysys, no initialisation is necessary.
RESOLVE will also obtain the composition names from GAP and Hysys. As these can be
different (i.e. "C1" in GAP may be called "Methane" in Hysys) it is important to tell RESOLVE
which composition name refers to which.
The list on the left is a composition from GAP; that on the right is a composition from Hysys.
Select the corresponding components, then select Add Individual Connection and repeat for
each pair of components, then select OK.
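The mapping built here is essentially a one-to-one dictionary between the two sets of component names. A minimal sketch (the component names below are illustrative):

```python
# Illustrative one-to-one mapping between GAP and Hysys component names
gap_to_hysys = {
    "C1": "Methane",
    "C2": "Ethane",
    "C3": "Propane",
}

def translate(gap_composition: dict[str, float]) -> dict[str, float]:
    """Re-key a GAP mole-fraction dictionary with the Hysys component names."""
    return {gap_to_hysys[name]: frac for name, frac in gap_composition.items()}
```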
Go to Step 8.
3.5.2.9 Step 8 - Analyse the results
RESOLVE has a set of results - the amount of results reported in the RESOLVE model
is a function of the variables that have been published by the user prior to the run itself.
Refer to step 5 for further details.
It is also possible to view all the results of a simulation for any of the client applications
in the application itself.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon. These results can also be displayed as the run is proceeding.
The passing of data between GAP and Hysys can be verified by plotting the separator
temperature and the Hysys inlet temperature, as well as the mass rates.
The plot below shows the oil rate profile and the delivery pressure. It can be observed that the
delivery pressure remains above the target pressure of 1300 psig for the entire duration of the
forecast.
For the first year of the forecast, the delivery pressure remains close to 1300 psig while the
wells are being choked: this means that the process is limiting the production. For the remaining
period of the forecast, the wells are fully opened and the delivery pressure exceeds 1300 psig:
therefore the production system is limiting the production.
This constitutes an optimisation opportunity and RESOLVE can be used to optimise and
increase the production. This is described in detail in Example 7.2.2.
3.5.3 Example 3.3: GAP - ProII Connection
3.5.3.1 Overview
1. Example Introduction
The objective of this section is to demonstrate how to set up a connection between a GAP
production model and a plant simulation model set up in ProII.
We are currently producing an oil field, which is modelled in the GAP model below. This field
has a total of 9 producing wells. The wells have an option for gas lift but currently this is not
being utilised. The reservoir performance is captured using a decline curve.
At the separator, the associated gas is sent through a compression train so as to be able to join
an existing gas transportation line. Oil is pumped to a nearby plant for processing. The diagram
below represents the Process facilities.
To be able to join the export line, the gas must be compressed to 1300 psig. The separator
pressure is 150 psig, and at the current operating conditions the field is constrained at 11.525
MMscf/d to honour the export line pressure requirement.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both ProII and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_3-Connection_to_Process_Modelling_Tools\Example_3_3-
GAP_ProII
This folder contains a file "GAP ProII Example.rsa" which is a "RESOLVE archive file" that
contains the RESOLVE file, ProII file, GAP file and other associated files required to go through
the example. The archive file needs to be extracted either in the current location or a location of
the user's choice.
Go to Step 1
3.5.3.2 Step 1 - Start a new file
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
ProII.rsl).
The ProII model now needs to be set up to use Oilfield units. Open the ProII model
'Process Model.prz'; this will be in the location where the example archive was extracted. Go to
Input | Units of measure. Initialise the unit system from ENGLISH-SET and change the
pressure to use psig. Save the model and close ProII.
Go to Step 2
The next step is to create instances of the various applications that we wish to connect through
RESOLVE. We shall load the instances on the main screen.
From the main menu, go to Edit System | Add Client Programs or select the icon.
From the resulting menu, select "ProII". Click on the main screen where the ProII icon is to be
located, and accept the default label ("ProII").
Double-click on the ProII icon - the following screen will appear. For the file name, browse to the
file "Process Model.prz" as shown below.
The options for component properties passing can be left at their default setting: in this case
only the properties of the pseudo components are passed to ProII.
When OK is selected, ProII will start and load the required case. It will then query the case for its
sinks (input streams) and will display these on the screen as shown below.
The inputs displayed here can be cross-checked with the actual ProII case. The ProII interface
can be seen by right-clicking on the ProII icon and selecting View in ProII.
Go to Step 3.
Browse for the GAP production model titled "Surface Network.gap". Also select the feature
"Always save forecast snapshots". This saves a snapshot of each prediction timestep in
GAP, allowing the user to reload a copy of each snapshot and analyse the performance at that
date - this can be particularly useful for troubleshooting purposes.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP production model contains nine wells and a separator. It is possible to check the GAP
model (i.e. the GAP model will be open on the Windows taskbar) to confirm the contents of the
GAP file.
Go to Step 4
3.5.3.5 Step 4 - Connect GAP and ProII
At this stage, the applications can be connected together graphically. To connect the systems,
go to "link" mode by pressing the icon. Connect the separator 'Sep1' from GAP to the
'FEED' of ProII.
Note that it is also possible to make the connections using the "Connection wizard". This is
obtained by invoking Edit System | Connection Wizard under the main menu and is
especially useful when dealing with a large number of connections.
Through this connection, at every time step the following data will be passed from GAP to ProII:
Pressure
Temperature
Mass rate
Composition
Go to Step 5
3.5.3.6 Step 5 - Publish application variables
The next step is to import some variables from the ProII and GAP models that will be added to
the RESOLVE reporting system, as RESOLVE stores only variables that have been imported.
RESOLVE can automatically build a list of the GAP and ProII variables available and that can be
reported directly through the RESOLVE interface.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows the user to import variables from the various applications present in the model.
Select the 'GAP' tab, and click Edit variables. A list of variables is available to import. These
consist of output variables such as solver results or cumulatives, or input variables such as
constraints or items' masking variables. If a variable is required and is not included in this list, it
is always possible to Copy and Paste the corresponding OpenServer string in the 'Variable
string' field.
From GAP, import the separator phase rates, separator pressure and temperature and mass
flow rate. Also import the phase cumulatives from the 'Cumulative variables' tab.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
From ProII, import the FEED output pressure, temperature and mass flow rate, as well as the
Sales output pressure. This is done in the same way as for GAP.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Go to Step 6.
3.5.3.7 Step 6 - Setup the schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
To set up the basic RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates.
Go to Step 7
3.5.3.8 Step 7 - Run the forecast
The simulation is now ready to be run.
Note that after each timestep, the run will automatically be paused: one can decide to
run another single step by using the icon again, or can decide to run the prediction
forecast until the end by using the icon.
To start, force RESOLVE to do a single timestep. Do this by pressing the "single step" button (
).
The first action done is the initialisation of both modules. In the case of GAP, this means loading
the tank data and initialising it (i.e. running a history if necessary until the start date of the
forecast is reached). In the case of ProII, no initialisation is necessary.
RESOLVE will also obtain the composition names from GAP and ProII. As these can be
different (i.e. "C1" in GAP may be called "Methane" in ProII) it is important to tell RESOLVE
which composition name refers to which.
The list on the left is a composition from GAP; that on the right is a composition from ProII.
Select the corresponding components, then select Add Individual Connection and repeat for
each pair of components, then select OK.
Go to Step 8.
3.5.3.9 Step 8 - Analyse the results
RESOLVE has a set of results - the amount of results reported in the RESOLVE model
is a function of the variables that have been published by the user prior to the run itself.
Refer to step 5 for further details.
It is also possible to view all the results of a simulation for any of the client applications
in the application itself.
The results from the client applications are best viewed when the run has completed or is
paused: this is because RESOLVE is controlling the application and it is possible that viewing
application results will interfere with the control. For this reason, application user interfaces are
disabled while the run is proceeding.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon. These results can also be displayed as the run is proceeding.
The passing of data between GAP and ProII can be verified by plotting the separator
temperature and the ProII inlet temperature, as well as the mass rates.
The plot below shows the oil rate profile and the delivery pressure. It can be observed that the
delivery pressure remains above the target pressure of 1300 psig for the entire duration of the
forecast.
For the first year of the forecast, the delivery pressure remains close to 1300 psig while the
wells are being choked: this means that the process is limiting the production. For the remaining
period of the forecast, the wells are fully opened and the delivery pressure exceeds 1300 psig:
therefore the production system is limiting the production.
This constitutes an optimisation opportunity and RESOLVE can be used to optimise and
increase the production. This is described in detail in Example 7.2.3.
1. Example Introduction
The objective of this section is to demonstrate how to set up a connection between a GAP
production model and an Excel spreadsheet set up to calculate the model economics.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_4-Connection_to_Excel\Example_4_1-GAP_Excel
This folder contains a file "GAP-Excel.rsa" which is a "RESOLVE archive file" that contains the
RESOLVE file, Excel file, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
Go to Step 1
Step 1 Objective:
Start a completely new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g. GAP-
Excel.rsl).
Go to Step 2
Step 2 Objective:
Create an instance of GAP in the RESOLVE model
The next step is to create instances of the various applications that we wish to connect through
RESOLVE. We shall load the instances on the main screen.
Click on the main screen where the GAP icon is to be located, and enter the label ("Production").
For the file name, browse to the file "Production.gap" as shown above.
Also select the option to "Always save forecast snapshots" under snapshot mode section.
This saves a snapshot of each prediction timestep in GAP, allowing the user to reload a copy of
each snapshot and analyse the performance at that date - this can be particularly useful for
troubleshooting purposes.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is pressed, GAP will start and load the required case.
It will then query the case for its sources and sinks (wells) and will display these on the screen as
shown below.
The icons can be moved by selecting the "move" icon on the toolbar ( ) and then dragging
them to the required positions.
Note that we have two producers - Well1 and Well 2. Well 1 ESP is Well 1 converted to an ESP
well and scheduled to come on-stream sometime during the prediction run (i.e. at that point in
time, Well1 will be shut off).
Go to Step 1 or Step 3
Step 3 Objective:
Create an instance of Excel in the RESOLVE model
In this step we are going to add an instance of Excel into the system.
From the main menu, go to Edit System | Add Client program or select the icon. From
the resulting menu, select "Excel".
Label the Excel case created "Economics". Double-click on the icon and browse for the
spreadsheet "Economic_Calculation.xls".
The Excel file, Economic_Calculation.xls, can be found in the same directory as the other data
files. There is only one input and one output.
In this example, the input will be used to receive the oil, gas and water rates from the production
separator. The output is there by default and is not used in this example, as we do not need to
pass information from Excel on to another model.
After this screen has been validated, the Excel icon can be seen on the RESOLVE screen. The
input icon of Excel is to be connected to the separator icon from the GAP production model as
shown below:
When Excel is used in RESOLVE, it is possible to pass any variables from GAP to the Excel
spreadsheet.
In this specific case, production rates from GAP have to be placed in certain cells of the
spreadsheet so that Excel can perform its revenue calculations. A snapshot of the
spreadsheet is shown below. It is important to note that the spreadsheet used in a RESOLVE
model is defined by the user and therefore can achieve any objective / calculation required by
the user.
The spreadsheet used for this example simply provides an illustration of what can be achieved.
Forecast Section
The values passed in this section will NOT be overwritten at each timestep: they will be
stored in the Excel spreadsheet for each timestep and organised either in a vertical or
horizontal order, based on the user specification.
In this specific spreadsheet, the date, oil, water and gas rate from the separator are
passed to this section of the Excel spreadsheet in order to calculate instantaneous and
cumulative revenues.
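The revenue arithmetic described above can be sketched as follows — the oil price, timestep length and rates below are illustrative assumptions, not values from the example spreadsheet:

```python
# Illustrative price, timestep length and rates - not values from the example
OIL_PRICE_USD_PER_STB = 60.0
TIMESTEP_DAYS = 61  # roughly two months

def instantaneous_revenue(oil_rate_stb_d: float) -> float:
    """Revenue earned per day at the current oil rate."""
    return oil_rate_stb_d * OIL_PRICE_USD_PER_STB

# One oil rate per timestep; each timestep's row adds to the running cumulative
cumulative = 0.0
for oil_rate in [18000.0, 17500.0, 16900.0]:
    cumulative += instantaneous_revenue(oil_rate) * TIMESTEP_DAYS
```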
In order to specify which variables of the GAP model have to be passed to which cell in the
Excel spreadsheet, it will be required to map the GAP variables to the Excel spreadsheet cells.
To do so, double-click on the Excel icon and go to the "Input Data" section. This will enable
the user to map the variables from GAP that are imported into the Excel spreadsheet, as illustrated
in the snapshot below.
In the Solver section, map the variables that are imported in the Solver section of the
spreadsheet
In the Forecast section, map the variables that are imported in the Forecast section
of the spreadsheet. Select the way the values are to be ordered when reported in the
spreadsheet (i.e. vertically or horizontally).
Once this has been done, all the variables have been mapped and the forecast can be run.
Go to Step 2 or Step 4
Step 4 Objective:
Setup the RESOLVE Schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2_3" section.
To set up the basic RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
The timestep and schedule duration are also entered here as shown.
Here GAP and Excel will be synchronised every 2 months until the schedule completes on
01/07/2015.
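The resulting synchronisation dates can be sketched as follows. The start date below is an assumption for illustration; in the model it comes from the connected modules:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole months (day-of-month clamped to 28 for simplicity)."""
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, min(d.day, 28))

# Illustrative start date; the end date matches the schedule in this example
start, end, step_months = date(2014, 7, 1), date(2015, 7, 1), 2
timesteps = []
d = start
while d <= end:
    timesteps.append(d)   # GAP and Excel are synchronised at each of these dates
    d = add_months(d, step_months)
```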
Go to Step 3 or Step 5
Step 5 Objective:
Run the prediction forecast
The results of the run for the first timestep can be checked briefly by holding the mouse over the
connection icon between "separator" from GAP and "in1" in Excel. The plot below will be seen
indicating that the liquid rate is 18000 stb/d.
To run the rest of the forecast without stopping, press the icon. Note that the run can be
paused or stopped with other toolbar icons.
Go to Step 4 or Step 6
Step 6 Objective:
Analyse the Results
The objective of this example is to pass the produced oil, gas and water streams from the
production model to a spreadsheet for economic analysis.
GAP performs its network solve and optimisation at each timestep. The phase production rates
are passed to Excel, which calculates an instantaneous revenue. This data is then stored in the
Forecast section and summed at each timestep to obtain the cumulative revenue.
When running the model, this can be seen in the Excel spreadsheet, as illustrated below.
Results of the RESOLVE model can be accessed as described in the "Results" section.
Go to Step 5
1. Example Introduction
This example illustrates how to set up a direct connection between a black oil reservoir model
(i.e. Eclipse) and a compositional process model (i.e. UniSim) and how the PVT consistency
can be kept within the model using the black oil delumping technique.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers for
Eclipse and UniSim are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_5-Advanced_RESOLVE_Examples\Example_5_1-
BlackOil_Delumping
This folder contains a file "Black_Oil_Delumping.rsa" which is a "RESOLVE archive file" that
contains the RESOLVE file, Eclipse and UniSim files required to go through the example. The
archive file needs to be extracted either in the current location or a location of the user's choice.
Go to Step 1
3.7.1.2 Step 1
Step 1 Objective:
Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g.
Black_Oil_Delumping.rsl).
Go to Step 2
3.7.1.3 Step 2
Step 2 Objective:
Create an Eclipse instance in the RESOLVE model
The next step is to create an Eclipse instance in the RESOLVE model and load it in the main
RESOLVE screen.
If PVM is used, it is essential that the PVM instance has been started prior to this
step being performed.
From the main menu, go to Edit System| Add Client Program or select the icon.
From the resulting menu, select "Eclipse" (this is the E100 black-oil Eclipse driver).
The cursor, when held over the main screen, will change to indicate that an instance of the
application can be made. Click on the main screen where the Eclipse icon is to be located.
Double-click on the Eclipse icon - the following screen will appear. Browse to the location of the
Eclipse data file.
Go to Step 1 or Step 3
3.7.1.4 Step 3
Step 3 Objective:
Set up the Eclipse model options
Before the simulation is run, some further changes can be made to the configuration of the
Eclipse link.
In this case, the Eclipse model is directly connected to the process model: there is no surface
network model linked to Eclipse, therefore all the well control and well scheduling is handled by
the Eclipse model itself.
In this case, the IPR generation option selected is not important, as no IPR data is exchanged
with the connected application.
This option can therefore be left to the default setting.
However, two options are available to control the wells that are connected to the process model
(i.e. here the producing wells only):
Also it is possible to plot some Eclipse results to be viewed in RESOLVE. The option to do this
is on the Miscellaneous tab as shown below:
Go to Step 2 or Step 4
3.7.1.5 Step 4
Step 4 Objective:
Create a UniSim instance in the RESOLVE model
The next step is to create instances of the various applications that we wish to connect through
RESOLVE. We shall load the instances on the main screen.
From the main menu, go to Edit System| Add Client Program or select the icon.
From the resulting menu, select "UniSim". The cursor, when held over the main screen, will
change to indicate that an instance of the application can be made.
Click on the main screen where the UniSim icon is to be located, and accept the default label
("UniSim").
Go to Step 3 or Step 5
3.7.1.6 Step 5
Step 5 Objective:
Establish the connections between the Eclipse and the UniSim models
Connect the Prod1 icon in Eclipse to the Feed1 icon in UniSim by clicking into the first icon and
dragging the connection to the second.
Connect the Prod2 icon in Eclipse to the Feed2 icon in UniSim by clicking into the first icon and
dragging the connection to the second.
Go to Step 4 or Step 6
3.7.1.7 Step 6
Step 6 Objective:
Publish the variables to report in the RESOLVE results section
The next step is to select some variables from the UniSim model that will be added to the
RESOLVE reporting system.
To do this for the UniSim model, right click on the UniSim icon in RESOLVE and select
"Output variables". A progress bar will be displayed: this is RESOLVE querying the plant
model for all the output variables that can be exported to RESOLVE.
Once this procedure is ended, a screen similar to the following will be displayed:
The left hand list is a hierarchical list of all the variables that can be queried in UniSim. They are
ordered at the top level by flowsheet: in this example, there are two flowsheets ("Main" and
"COL1") where "Main" is the main flowsheet and "COL1" is the Depropanizer column. Below
this is a list of streams (marked with a coloured arrow, as shown) and equipment items. When
one of these is opened up (as with "SalesGas", above) a list of variables that pertain to that
item will be displayed. These are the variables that can be added to the RESOLVE reporting
system.
To add a variable, highlight the variable on the left and press the "Add" button. Some variables
are arrays, most usually over components (e.g. the component molar fraction array in the screen
capture above). If the user selects this variable, all the array variables will be passed across at
once.
In this example, we have chosen to monitor the MolarFlow and MassFlow variables of the
SalesGas stream.
Go to Step 5 or Step 7
3.7.1.8 Step 7
Step 7 Objective:
Set up the PVT data transfer option between Eclipse and UniSim
In this case, the fluid PVT properties in the Eclipse model are described using a black oil
model, whereas the fluid PVT properties in the UniSim model are described using a detailed
compositional description.
In order to ensure PVT consistency between the two applications, it will be necessary to use the
BLACK OIL DELUMPING technique in RESOLVE.
At each RESOLVE timestep, the Eclipse model passes the black oil PVT properties of
the fluid to RESOLVE.
The equation of state used in UniSim (i.e. the DOWNSTREAM composition) has been
specified in the delumping options of RESOLVE.
RESOLVE uses the Target GOR method to calculate the unique composition that
corresponds to the fluid GOR passed by Eclipse: the Target GOR method essentially
recombines the oil and gas phases from the downstream composition until the final
composition has the same GOR as the Eclipse fluid. This makes it possible to obtain a
detailed composition for the reservoir fluid at the timestep considered.
The components from both the Eclipse fluid composition and the UniSim composition
are mapped, as is usually done for compositional connections in RESOLVE.
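The Target GOR recombination described above can be sketched as a one-dimensional root find on the gas fraction. The linear GOR model below is a deliberate simplification for illustration; RESOLVE evaluates the GOR from the full equation of state:

```python
def recombine(oil_comp: dict, gas_comp: dict, x: float) -> dict:
    """Mole-fraction blend: x parts gas phase, (1 - x) parts oil phase."""
    return {c: (1 - x) * oil_comp[c] + x * gas_comp[c] for c in oil_comp}

def solve_gas_fraction(gor_of, target_gor: float, tol: float = 1e-9) -> float:
    """Bisect for the gas fraction whose recombined fluid matches target_gor.

    gor_of(x) must increase monotonically with x, which holds physically:
    adding more of the gas phase raises the GOR.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gor_of(mid) < target_gor:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy check: a GOR rising linearly from 0 (pure oil) to 10000 scf/stb
# (pure gas) needs a gas fraction of 0.25 to hit a target of 2500 scf/stb.
x = solve_gas_fraction(lambda f: 10000.0 * f, 2500.0)
```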
In order to set up this technique in RESOLVE, the following procedure can be followed:
Double click on the UniSim module icon and set the "Transfer composition to UniSim
feed streams" option to "Transfer upstream composition to UniSim feed stream"
as illustrated below.
The UniSim model already has a fluid equation of state definition specified: it is
possible to import it directly into RESOLVE by clicking the "Import" button.
Select the UniSim flowsheet and stream to import the equation of state parameters and
composition from, here the Flowsheet "Main" and the Stream "Feed 1" for instance.
The following screen will appear, illustrating the fact that 20 components are present in
the UniSim compositional description of the fluid.
Clicking on the "Setup" button will allow the user to view the imported composition, as
illustrated below.
Go to Step 6 or Step 8.
3.7.1.9 Step 8
Step 8 Objective:
Setup the RESOLVE Schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
The advanced scheduling is described in the "Example_2_3" section.
To set up the basic RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates.
The timestep and schedule duration are also entered here as shown.
Here the Eclipse and UniSim models will be synchronised every month until the schedule
completes on 01/01/2002.
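The basic schedule described above amounts to a list of monthly synchronisation points. As a rough sketch (the 01/01/2000 start date below is an assumption for illustration only; in RESOLVE the start date comes from the connected client modules):

```python
# Sketch of the synchronisation points implied by the basic schedule:
# one step per month from the model start date until the schedule end.
# The start date below (01/01/2000) is an assumption for illustration;
# in RESOLVE it is taken from the connected client modules.
from datetime import date

def monthly_steps(start, end):
    steps, y, m = [], start.year, start.month
    while date(y, m, start.day) <= end:
        steps.append(date(y, m, start.day))
        m += 1
        if m > 12:
            m, y = 1, y + 1
    return steps

steps = monthly_steps(date(2000, 1, 1), date(2002, 1, 1))
print(len(steps))  # 25 synchronisation points: the start plus 24 monthly steps
```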
Go to Step 7 or Step 9
3.7.1.10 Step 9
Step 9 Objective:
Run the prediction forecast
To start, force RESOLVE to do a single timestep. Do this by pressing the "single step"
button on the toolbar.
The first action done is the initialisation of both modules. In the case of Eclipse, this means
initialising the reservoir model (i.e. running a history if necessary until the start date of the
forecast is reached). In the case of UniSim, no initialisation is necessary.
Once the single solve run button is selected, RESOLVE will reload the client modules and the
next step is to map/setup the compositions accordingly.
From the previous steps under the lumping/delumping section, the downstream composition
(i.e. the UniSim model composition in this case) has been setup. With this in place, the interface
below comes up where the compositions can then be mapped. Note that the composition
mapping can alternatively be performed by going to Run | Edit Composition Tables to
map the components from the Eclipse-derived composition to the UniSim composition.
The screen above is displayed: as the Eclipse model is a black oil model and a fluid
composition is derived from it at each timestep using the "Target GOR" method, select
the "External Delumping" option at the top of the screen, as illustrated below.
For each one of the connections specified on the left hand side of the screen, it will be
necessary to map the compositions.
As in this case all the components are in the same order, it is possible to do so by
selecting the "Add All" option.
Once the compositional mapping has been performed for each connection, the
following screen will be displayed: each connection will be displayed with a green tick
illustrating the status of the compositional mapping for this connection. Ensure the
compositions for Feed 1 and Feed 2 from Eclipse are mapped across.
This step terminates the PVT setup for this black oil delumping model.
To run the rest of the forecast without stopping, press the run button on the toolbar.
Note that the run can be paused or stopped with the other toolbar icons.
Go to Step 8 or Step 10
3.7.1.11 Step 10
Step 10 Objective:
Analyse the Results
The main objective of this example is to illustrate how to setup a direct connection between a
black oil reservoir model (i.e. Eclipse) and a compositional process model (i.e. UniSim) and
how the PVT consistency can be kept within the model using the black oil delumping technique.
By going to the Results | View Forecast Plots section, it will for instance be possible to
compare the oil produced by the Prod1 well at the Eclipse level and at the UniSim level: both of
them are consistent, as illustrated below.
In addition to this, it will be possible for instance to retrieve the composition of the fluid passed
to every feed of the UniSim model by going to the Results | View Forecast Results (Tables)
section, as illustrated below. Selecting Feed1 will provide the results transferred across at every
timestep.
Once the composition is displayed, it will be possible to select the "Properties" section to
obtain the corresponding black oil PVT properties.
3.7.2.1 Overview
1. Example Introduction
The Rule Based solver has been available in GAP since IPM 9.0. It was introduced
as a method that allows the user to solve the network while honouring the constraints
imposed on the system without performing a full field optimisation. Constraints for the
Rule Based solver are satisfied following a set of rules that are generally used by field
engineers to limit production. For more information on the Rule Based solver, please
refer to the GAP manual, section 2.8.5.2.
The Rule Based solver is particularly useful for models with degeneracy issues, i.e.
cases in which all the wells in the model are identical from the optimisation standpoint,
with similar (or identical) water-cut and GOR values. Such a system has multiple
solutions, as it is possible to choke different wells without affecting the total system
results. This generally results in fluctuations of individual well production profiles, while
total system constraints are honoured. The Rule Based solver avoids such fluctuations,
as it chokes wells without giving preference to particular ones.
The Rule Based solver also improves calculation time in comparison to the full model
optimisation, as it requires fewer iterations.
However, since the Rule Based solver is not an optimisation algorithm, it may result in
non-optimum production; e.g. within the same constraints the system may produce more
water and/or fewer hydrocarbons.
In this case calculations of the Rule Based solver can be improved by applying Well
Optimisation Weighting factors in GAP, which will give preference to the selected wells.
This example considers the creation of an automated workflow to distribute those
weighting factors based on the well water-cut and GOR. The created workflow is generic:
once built, it can be used alongside other GAP models to distribute optimisation
weighting factors for the Rule Based solver.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 1    GAP: 1
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered; this is done by selecting Drivers | Auto-register latest drivers from the
main menu. Note that this operation is not required if it has been done previously.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Location
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_5-Advanced_RESOLVE_Examples\Example_5_2-
GAP_Rule_Based_Solver
This folder contains a file "GAP_Rule_Based_Solver.rsa", which is an archive file that contains
the RESOLVE and GAP files required to go through the example. The archive file needs to be
extracted either in the current location or in a location of the user's choice.
Go to Step 1
3.7.2.2 Step 1 - Start a new RESOLVE file
Step 1 Objective: Start a new RESOLVE project and setup project options.
Start RESOLVE and go to File| Archive| Extract. Navigate to the above mentioned folder (see
Files Location above), select the GAP_Rule_Based_Solver.rsa file and extract its content into a
selected location. When the “Open Master File?” question is prompted, select “No”. This step
ensures that the underlying model, in this case the GAP model, is extracted into the folder.
The project will solve the GAP network without running a forecast. (Once the workflow is built, its
usage can be extended to forecast runs without additional modifications.) It is therefore
required to set up the System properties accordingly. For this go to Options | System Options and
set the Forecast mode to “Single solve/optimisation only”.
The next step is to add instances of the required modules to RESOLVE. In this example these
are the GAP module and the Workflow object, which will be added to the main screen.
From the main menu go to Edit System | Add Client Program | GAP or select the
corresponding toolbar button.
The cursor, when held over the main screen, will change to indicate that an instance of an
application can be created.
Click on the main screen where the GAP icon is to be located and give it a name. The default
name “GAP” will be used in this example.
Double click on the GAP icon and in the displayed dialog window do the following:
Set the File name by browsing to the GAP model extracted earlier;
Set the Predictive mode to “Always non-predictive (GAP performs instantaneous
solves)”;
Check the “Rule based optimisation” box.
When OK is selected, RESOLVE will launch GAP and will load the required case. The GAP model
will be queried for its input and output feeds, which will be displayed on the screen as shown
below.
The icons can be moved by selecting the move button on the toolbar or by holding the “Shift”
key on the keyboard and then dragging them to the required position.
For the purpose of this example the GAP wells and separator icons can be hidden, as they will not
be connected to other modules. To hide the child icons, right click on the GAP module icon and
select the corresponding option.
The GAP module icon will then hide the wells and separator and will display a square in the bottom right corner.
With the GAP model loaded, the next step is to create the Workflow module. This can again be
done by going to Edit System | Add Client Program | Workflow or from the toolbar as shown
below.
Once this is selected, click within RESOLVE window and give Workflow object a name. In this
example default name “Workflow” will be used.
Communication between the workflow and GAP will be established via OpenServer. Therefore
the OpenServer object should also be added into the RESOLVE model. This can be done from
Edit System| Add data| OpenServer or from the toolbar as shown below.
Once selected, click within the RESOLVE window and give the OpenServer object a name. In this
example the default name – “OpenServer” – will be used.
At this point the RESOLVE model should include three modules: GAP, Workflow and OpenServer,
as shown below.
For the purpose of this project the Workflow will distribute the well optimisation weighting factors,
which will be used by the GAP Rule Based Solver. Therefore it is required to instruct RESOLVE to
execute the Workflow before the GAP module. This can be done by linking them using the link
tool as shown below.
Go to Step 3.
Building blocks of the workflow will be obtained from the palette, which can be displayed using
the palette button. The elements can be added to the workflow by first selecting them in the
palette and then clicking in the workflow editor window.
Once the Assignment is added to the workflow, double click on it. In the displayed dialog window
change the assignment name to “Settings” and define the initial parameters.
The following internal workflow variables will be used to hold workflow settings:
GAP_Module_Name – name of the GAP module given on Step 2 above;
MIN_WEIGHTING – minimum optimisation weighting that can be assigned to the well;
MAX_WEIGHTING – maximum weighting factor that can be assigned to the well;
WCT_Mult – multiplier in the equation of the water-cut weighting factor;
WCT_Shift – shift in the equation of the water-cut weighting factor. Controls how the
water production is optimised: -1 - reduce water, 0 - increase water;
WCT_Power – power in the equation of the water-cut weighting factor;
GOR_Mult – multiplier in the equation of the GOR weighting factor;
GOR_Shift – shift in the equation of the GOR weighting factor. Controls how the gas
production is optimised: -1 - reduce gas, 0 - increase gas;
GOR_Power – power in the equation of the GOR weighting factor;
The above variables are not yet defined in the workflow. Hence, when OK is selected a
window will be displayed asking to select the type of the variable.
A screen will be displayed sequentially for each of the variables above. The type selection
for each variable should be done according to the table below:

Variable Name     Initial Value   Variable Type
GAP_Module_Name   "GAP"           string
MIN_WEIGHTING     0.1             double precision
MAX_WEIGHTING     10              double precision
WCT_Mult          5               double precision
WCT_Shift         -1              integer
WCT_Power         2               integer
GOR_Mult          5               double precision
GOR_Shift         0               integer
GOR_Power         0               integer
Once the settings are defined the next step of the workflow is to extract information regarding
the GAP model in question. In particular we need to know the number of wells in the GAP model.
The workflow will go through each of the wells and assign weighting factors based on the well
water-cut and GOR. This will be organised via loops, and the number of wells in the system will
be used as the maximum value for a loop counter variable.
The number of wells in the GAP model can be obtained using an Operation element.
Once it is added to the workflow, double click on the Operation, change its name to “Count
wells” and select Add global function.
In the displayed window set the category of operation to Generic openserver functions
and the operation to Get a variable (using a direct OpenServer connection).
Also, define the name of the return variable as “num_wells” and select OK.
When the “Create this variable?” window is displayed, set the “num_wells” variable type to
integer.
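The OpenServer "get" operation above can be sketched as follows. This is a hedged illustration: real access goes through the OpenServer interface, and the tag string used here is an assumed example for illustration, not a documented GAP variable name.

```python
# Hedged sketch of the "Get a variable (direct OpenServer connection)" step.
# Real use would go through the OpenServer interface; here a mock server
# stands in so the flow of the operation can be shown. The tag string below
# is an assumption for illustration, not a documented GAP variable.

class MockOpenServer:
    """Stand-in for the OpenServer connection used by the workflow."""
    def __init__(self, values):
        self._values = values

    def DoGet(self, tag):
        # return the stored value for the requested tag
        return self._values[tag]

server = MockOpenServer({"GAP.MOD[0].WELL.COUNT": 4})  # pretend GAP has 4 wells
num_wells = int(server.DoGet("GAP.MOD[0].WELL.COUNT"))
print(num_wells)
```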
Now that the number of wells is known, it is required to create a procedure inside the workflow
that will distribute weighting factors for all the wells. This will be achieved using two loops that will
go through the wells. The first loop will be used to identify the minimum and maximum values of
the water-cut and GOR present in the system. The second loop will calculate and distribute the
weighting factors.
To create general structure of the workflow at this stage two Loop elements will be added into
the system. To add a Loop element select the corresponding element from the palette.
When the two loops are added, double click on them one by one and define the parameters as
shown in the figure below; both loops are identical.
The loop counter variable – “well_index” – will vary from 0 to num_wells-1, as well indexing in
GAP is zero based.
Before going into individual loops add a Terminator element at the end.
Then select the connection tool on the toolbar and link the elements together.
When linking elements, make sure that “Continue” connection is displayed after loops.
Otherwise right click on the loop and select Reverse outputs as shown in the figure below.
1. At the start of the workflow the maximum water-cut variable will be set to zero and the
minimum to a high value.
2. Inside the loop the workflow will extract the water-cut from the well in the GAP model (Well_WCT).
3. If the well water-cut is more than the maximum, then a new maximum value will be assigned:
if Well_WCT>Max_WCT is true, then Max_WCT=Well_WCT
4. If the well water-cut is less than the minimum, then a new minimum will be assigned:
if Well_WCT<Min_WCT is true, then Min_WCT=Well_WCT
Doing this for all the wells in the system one by one will return the minimum and maximum water-cut
values at the end of the loop. The same logic is followed for the GOR.
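The min/max scan described in the steps above can be sketched as follows; the well data here is mocked for illustration, and the same pattern applies to the GOR.

```python
# Sketch of Loop-1: scan all wells once to find the minimum and maximum
# water-cut (the same pattern applies to GOR). Note the initialisation:
# the running minimum starts HIGH and the running maximum starts at zero,
# otherwise a zero minimum could never be replaced by a real value.
# The well data below is hypothetical.

well_wct = [12.0, 35.5, 8.2, 60.1]   # hypothetical water-cuts, one per well

MIN_WCT, MAX_WCT = 100.0, 0.0        # high start for min, zero start for max
for well_index in range(len(well_wct)):   # GAP well indexing is zero based
    wct = well_wct[well_index]
    if wct > MAX_WCT:
        MAX_WCT = wct                # new maximum found
    if wct < MIN_WCT:
        MIN_WCT = wct                # new minimum found

print(MIN_WCT, MAX_WCT)  # 8.2 60.1
```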
According to the above logic the first step inside the loop is to extract the well water-cut and
GOR. To do this add an Operation element into the workflow and link it as shown in the figure.
Once added, double click on the Operation and create the two functions described below:
The first function will read the water-cut from the current well and assign its value to the
“Well_WCT” variable; the second will read the GOR and assign it to “Well_GOR”.
When prompted, assign both new variables – “Well_WCT” and “Well_GOR” – the
double precision type.
The returned water-cut and GOR will be compared against minimum and maximum values. This
will be done by 4 sequential If…then logical elements. Each of the logical elements will have a
corresponding Assignment that will be executed if the condition is true.
Double click on the If…then elements and define conditions as shown in figures below.
Once conditions are defined it is required to define the actions that will be taken when those
conditions are true. To do this double click on the Assignment elements and define values as
shown below.
When linking, pay attention to the “Yes” and “No” links coming out of the If…then elements. If linked
incorrectly, double click on the If…then element and then select Decision to swap the links.
3. Calculate the well weighting factor. The factor is calculated as the sum of the minimum well
weighting, the weighting for water and the weighting for gas.
4. The weighting for water is calculated using the minimum and maximum water-cut values. The well
with the maximum water-cut will have 0 weighting for water, while the well with the lowest water-cut
will be assigned the maximum weighting for water defined by the WCT_max_weighting
variable. Wells with intermediate water-cuts will be linearly interpolated between zero and the
maximum weighting value.
5. For gas the same logic will be followed if it is required to maximise gas production; if it is
required to minimise gas, then zero weighting for gas will be assigned to the well with the
highest GOR and the GOR_max_weighting value to the well with the lowest one.
The first step of the loop is again to extract well water-cut and GOR. This is done via Operation
exactly as it was done in the first loop. Therefore the “Get WCT/GOR” Operation can simply be
copied from the first loop.
The next loop step will require a decision to be made whether it is required to maximise or
minimise gas produced by the system; water will always be minimised.
Add three Assignment elements and an If…then element as shown in the figure, by selecting
them in the palette and clicking in the workflow editor window.
The first Assignment element is used to normalise the water-cut and GOR values and calculate
the individual weighting factors.
First, the values will be normalised based on the minimum and maximum retrieved in Loop-1:
Second, the normalised parameters will be used to calculate the individual weighting factors:
Double click on the Assignment and input the above equations as shown below. The power
and absolute value calculations are defined using the corresponding workflow functions.
When asked to create a variable, set all new variables as double precision.
The next element will simply be used to calculate the total well weighting by summing up the
individual ones:
When asked to create a variable, set all new variables as double precision.
Once the well_weighting is estimated, it should be verified against the maximum value
(MAX_WEIGHTING). For weighting factors far away from unity the solver may not be able to
converge, hence the weighting factors should be limited.
Double click on the If...then element and set the condition as shown:
If the above condition is true, then the well_weighting will be reassigned to
MAX_WEIGHTING:
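The arithmetic of this loop can be sketched as follows. The exact form of the weighting equation is not reproduced in this chunk, so the expression `Mult * abs(norm + Shift) ** Power` is an assumption, chosen because it matches the behaviour described above (with Shift = -1 the well at maximum water-cut gets zero water weighting and the well at minimum water-cut gets the full one; with Shift = 0 the trend reverses). The clamp to MAX_WEIGHTING is included as the final check.

```python
# Hedged sketch of Loop-2's arithmetic. The weighting form
# Mult * abs(norm + Shift) ** Power is an ASSUMPTION consistent with the
# described behaviour, not the documented RESOLVE equation. Settings
# values follow the variable table earlier in this example.

MIN_WEIGHTING, MAX_WEIGHTING = 0.1, 10.0
WCT_Mult, WCT_Shift, WCT_Power = 5.0, -1, 2
GOR_Mult, GOR_Shift, GOR_Power = 5.0, 0, 0

def well_weighting(wct, gor, wct_rng, gor_rng):
    # normalise against the min/max found in Loop-1
    wct_norm = (wct - wct_rng[0]) / (wct_rng[1] - wct_rng[0])
    gor_norm = (gor - gor_rng[0]) / (gor_rng[1] - gor_rng[0])
    # individual weightings for water and gas
    w_wct = WCT_Mult * abs(wct_norm + WCT_Shift) ** WCT_Power
    w_gor = GOR_Mult * abs(gor_norm + GOR_Shift) ** GOR_Power
    total = MIN_WEIGHTING + w_wct + w_gor
    return min(total, MAX_WEIGHTING)   # clamp to keep the solver stable

# the lowest-WCT well gets the largest weighting, the highest-WCT well the smallest
low = well_weighting(8.2, 500.0, (8.2, 60.1), (500.0, 900.0))
high = well_weighting(60.1, 900.0, (8.2, 60.1), (500.0, 900.0))
print(low, high)
```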
The last step of this loop is to pass the calculated well_weighting to the well model in GAP. To do
this add an Operation element, double click on it and add a global function.
Link the elements as shown below. Make sure that "Yes" and "No" connections from the If…
then element are pointing to the correct Assignments.
The Workflow is now complete and should look as shown below. If necessary, elements can be
moved by selecting the "move" button on the toolbar or by holding the Shift key and dragging the
elements to the required location.
These variables are compared against the well water-cut (Well_WCT) and GOR (Well_GOR).
This logic would not work if the minimum values were initialised to zero. For example, if the minimum
water-cut in the system is 10% and MIN_WCT is initialised to 0, then the condition
Well_WCT<MIN_WCT will never be true; MIN_WCT will remain 0 and the actual minimum
will not be determined.
Hence, for the logic to work the minimum values should be initialised to high values:
MIN_WCT=100
MIN_GOR=1e6
To define the above, select the variables button on the toolbar. The displayed dialog window
allows the user to create and manage internal workflow variables.
In the window select the MIN_GOR variable and then the Edit button. In the displayed window set
the initialisation to Every call and input 1e6 as the starting value.
Do the same for MIN_WCT, setting the starting value to 100.
Display the variable monitoring window by selecting the corresponding button and choose the
variables for monitoring using the arrows between the frames.
Once the variables are selected, step through the macro elements one by one using the step
button. Monitor the variables and ensure that the values change as expected.
Step twice to progress the workflow up to the Loop-1 element. At that point the variables
from Settings as well as the number of wells should be defined.
Progress through the workflow by repeatedly pressing the step button. Within the first loop monitor
the minimum and maximum water-cut and GOR variables as well as the well water-cut and GOR.
Make sure that at the start of the loop all variables are set to zero except MIN_WCT and
MIN_GOR.
By the end of the first loop make sure that minimum and maximum values for water-cut and
GOR reflect the actual minimum and maximum values present in the GAP network.
Keep stepping and progress to Loop-2. At this stage monitor the weighting factors
and make sure that they are passed to the GAP model.
Verify the values obtained against manual calculations and make sure that they are passed to
GAP and are defined in the well Constraints.
Once the workflow is complete, close the Workflow Editor window and return to the main
RESOLVE screen. Save the RESOLVE model (File | Save) and go to Step 4.
Before running any further calculations, go to the GAP model and remove all the Well Optimisation
Weighting factors from it, if present (some may have been passed when debugging the
workflow). From the GAP interface go to Constraints | Edit Constraints Table. In the displayed
window select Well as the equipment type, scroll to the right and ensure that there are no
optimisation weightings left.
From the main GAP window go to Solve Network | Run Network Solver. In the displayed window
click Next, then set the Mode to “No Optimisation” and click Calculate. This will run the network
solver without any constraints and thus estimate the system potential.
Once solved, select Main and hover the mouse pointer over the Separator to display the results.
Let us now solve the network twice, taking the Maximum Liquid constraint into account, using
different solver options:
Optimise with all constraints – In this case GAP will solve the network and adjust the
wellhead chokes in such a way that production is optimal, i.e. within 18000 STB/day it will
minimise the water content (more oil) and maximise gas.
Rule Based – In this case GAP will solve the network using the Rule Based solver, which will
satisfy the constraints, but the solution may not be optimal.
Keep a record of the results as well as the time taken to solve the network, which is reported in the
Solver Log.
Save the GAP model with the constraint applied by going to File | Save File and return to RESOLVE.
Run the RESOLVE model by going to Run | Start or selecting the run button. RESOLVE will execute
the workflow, distributing the weighting factors, and then solve GAP using the Rule Based solver.
Once the model is solved, verify the Separator results. The time taken to solve the model can be
obtained from the RESOLVE Logs.
The above results show some improvement in comparison to the standard Rule Based
solver: higher gas and lower water rates are obtained within the same liquid constraint.
Go to the workflow Settings block and change the GOR Multiplier to zero, so that the weighting
factors are distributed based on water only:
                                          Oil Rate   Gas Rate    Water Rate  Liquid Rate  Time taken
                                          STB/day    MMscf/day   STB/day     STB/day      sec
Optimise with all constraints             74041.6    55.9        15957.5     89998.8      ~12
Rule Based                                64647.8    48.3        25351.7     89999.2      ~1
Rule Based with gas and water weightings  68261.7    52.5        21738.0     89999.8      ~2
Rule Based with weightings for water only 70033.9    52.5        19963.3     89997.2      ~2
From the above table it can be seen that distributing Optimisation Weighting factors to the wells
improved the results obtained from the network (a lower water rate within the same liquid constraint
and a higher gas rate).
At the same time it substantially reduced the model runtime, which may be important when
complex networks are considered.
The developed workflow is generic and can be used with any other GAP model to distribute
well weighting factors. The workflow settings can be adjusted depending upon the model
constraints; e.g. if the objective is to maximise the gas rate without regard for water, then the
appropriate weighting factors can be changed.
3.7.3 Example 5.3: Connection to transient flow simulator
This example illustrates the use of the LedaFlow transient multiphase flow simulator in combination
with the GAP steady state network solver for a detailed flow assurance study.
3.7.3.1 Overview
1. Example Introduction
As operators start to explore more remote hydrocarbon plays, our production systems are
exposed to more challenging environments and fluid behaviour. In these contexts, any
operational decision can have far reaching implications upon recovery. As such, our decision
making process must be rooted in understanding of the environment with respect to the physics.
This is where (in recent years) the flow assurance discipline has taken a central role in guiding
operations.
In the field, turndown of the production rate is often required so that routine maintenance operations
can be performed on parts of the production system, such as ESP maintenance, wellhead
maintenance and pigging operations. As the
production system is subjected to a lower throughput, the concept of turndown stability becomes
important. Lower throughput, or even shut-in can lead to operationally difficult scenarios such as
wax blockages, hydrate formation, asphaltene deposition, riser slugging etc. all of which are
undesirable. Traditionally, this has been the domain of the flow assurance discipline, where
modelling has been performed exclusively by transient simulators which account for pressure/
temperature changes in seconds/hours.
In the production context, a shut-in time of days/weeks is negligible, since production is usually
considered over much larger time frames (recovery is considered over decades). As such,
using transient simulators for production forecasting is not practical due to the long run times.
Thus in terms of field planning/forecasting, steady
state tools are used, where the entire production system is modelled over decades with
reasonable run times, capturing the full reservoir, well and surface network response. In reality,
the long term is made up of the aggregated short term, and thus the two (steady state and
transient) responses must be considered together.
This example will illustrate how steady state and transient multiphase flow tools can be
integrated to form a single model, which will be used to both run forecasting and evaluate
detailed system transient response when it is necessary.
The field in question is a small offshore gas field, which is currently being produced via 4 wells.
Field production is delivered to the platform via a subsea pipeline and a single riser. The field
production should be maintained at 90 MMscf/day for as long as possible.
The gas reservoir in question has strong aquifer support, and as such large quantities of water
can be expected in the future. The production platform is sensitive to production surges, and as
such any slugging that develops in the riser has consequences that vary from being
(i) undesirable to (ii) damaging to downstream production facilities.
As a part of the field development planning it is required to generate a production profile for 6
years starting from 01/01/2014 (today’s date). During the forecast it is also required to closely
monitor the flow regime in the riser to detect and analyse slugging. No slug mitigation activity is
considered as part of this study; however, the models should allow the introduction of additional
logic for such activity.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers for
GAP and LedaFlow are registered. Note that this operation is not required if it has been done
previously.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Location
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_5-Advanced_RESOLVE_Examples\Example_5_3-
GAP_Ledflow_integration
Go to Step 1
The cursor, when held over the main screen, will change to indicate that an instance of an
application can be created.
Click on the main screen where the GAP icon is to be located and give it a name. The default
name “GAP” will be used in this example.
Double click on the GAP icon and in the displayed dialog window (i) set the File name by
browsing to the GAP model extracted earlier and (ii) check the box “Rule based solver”.
When OK is selected, RESOLVE will launch GAP and will load the required case. The GAP model
will be queried for its input and output feeds, which will be displayed on the screen as shown
below.
The icons can be moved by selecting the move button on the toolbar or by holding the “Shift”
key on the keyboard and then dragging them to the required position.
By default GAP will display all the wells and separators from the network. For the purposes of
this project it is also required to bring in additional nodes for the riser top and bottom. These are
defined in the GAP model as the joints “Riser_Top” and “Riser_Base”.
To bring in additional nodes from the GAP model, again double click on the GAP module icon, go to
the “Sources/Sinks” tab, select the “Add extra nodes” button, add the required nodes from the list
and select OK.
Coming back to the original graphical screen of RESOLVE, two additional nodes will have been added.
Go to Step 3.
Once added, double click on the LedaFlow icon, browse for the extracted “Riser.ldm” file and
click “Load”.
Once the model is loaded, switch to the Execution tab and set up the LedaFlow run options as
shown below. Then click OK.
Coming back to the main RESOLVE screen, the LedaFlow module will display two nodes – “Node1”
and “Node2” – which correspond to the inlet and outlet of the riser. These nodes can be linked to
“Riser_Base” and “Riser_Top” accordingly, using the link element as shown below.
The links will be used to pass data from GAP to LedaFlow at each timestep of the forecast.
Go to Step 4.
The components required are (1) ProsperCalculator, (2) WellSource-Online and (3) Workflow.
The first two can be added from the menu Edit System| Add Data| ProsperCalculator/
WellSource-online or from the toolbar as shown below.
The workflow can be added from the menu Edit System| Add Client Program| Workflow or
from the toolbar.
Once all three components are added, it is required to fill them in with data. Double click on the
“WellSource-Online” object and browse for the “Riser.out” file extracted previously. This will
automatically import the data from the predefined PROSPER file. Data can also be input
manually using the “Edit pipe data” button.
The created links will define the automatic data flow: the “WellSource-Online” object will provide
the pipe details, while “Riser_Base” will provide the fluid PVT data and the rate flowing through the riser.
It only remains to define the missing data for the “ProsperCalculator” – the injection gas rate, which
will be set to zero. Double click on the “ProsperCalculator” and define the injected gas
parameters as shown below; the other parameters should be left blank.
Go to Step 5.
Double-click on the Workflow element to display the editor window. Once displayed, press the "Palette" button and add three Assignment blocks, an If..Then block and a Terminator block as shown below.
Double-click on the first Assignment block and set a variable "n" that will define the row of the pressure profile from which results will be extracted. ProsperCalculator runs the calculation from the bottom of the riser upwards, so reading results from one of the last few rows corresponds to a point close to the top of the riser. Therefore the value of "n" will be set to:
ProsperCalculator.GradientOut.ProfileCount-2
The "If..Then" block will verify whether Slug Flow is observed at the top of the riser. The expression is as follows:
ProsperCalculator.GradientOut.RegimeProfile[n] = 6
The left-hand side corresponds to the flow regime index read from the ProsperCalculator results at row number "n". The right-hand side is the number 6, which corresponds to slug flow.
The remaining two Assignment blocks will be used to activate or deactivate LedaFlow
simulation. This is done via the check box “Perform simulation when module is solved”.
The check box flag can be changed dynamically from the workflow using the following parameter:
LedaFlow.Resolve.SolveOnExecute
When the "If..Then" condition is not true, the workflow is routed to the top Assignment (the 'No' route). This Assignment should turn off LedaFlow execution: double-click on it and set LedaFlow.Resolve.SolveOnExecute to 0 (i.e. disable the simulation).
This completes the workflow. The Workflow Editor window can now be closed and the main RESOLVE graphical screen displayed.
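The conditional logic built in this workflow can be sketched in Python. This is illustrative only: the real logic lives in RESOLVE workflow blocks, and the function name and the list representation of the gradient results below are assumptions.

```python
# Illustrative sketch of the slug-flow check (not RESOLVE workflow syntax).
# The returned flag plays the role of LedaFlow.Resolve.SolveOnExecute.

SLUG_FLOW = 6  # flow regime index that PROSPER reports for slug flow

def ledaflow_should_run(regime_profile):
    """regime_profile: flow-regime indices from ProsperCalculator,
    ordered from riser bottom (first row) to riser top (last row)."""
    n = len(regime_profile) - 2  # ProfileCount - 2: a row near the riser top
    return regime_profile[n] == SLUG_FLOW

# Slug flow near the top of the riser -> enable the LedaFlow simulation
print(ledaflow_should_run([2, 2, 3, 6, 6]))  # True
```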
It is necessary to ensure that the Workflow is executed after the ProsperCalculator, so that it verifies the gradient result from the current timestep. This can be done by linking the two modules using the linking tool as shown below.
Go to Step 6.
Go to Step 7.
When the forecast is started, it will be observed that the LedaFlow simulation is initially not executed. Later in the forecast run, a LedaFlow shell window will start to appear at every timestep, indicating that the LedaFlow simulation is running.
From that moment onwards results will be published in the LedaFlow database and can be
loaded for review.
It is also possible to pause the RESOLVE run at any timestep and view the results from the LedaFlow module. To do this, press the Pause button ( ) on the RESOLVE toolbar and wait until RESOLVE displays the "Simulation is paused" message in the Calculation tab. When the run is paused, double-click on the LedaFlow module and select the Results tab, where the results of the last LedaFlow simulation can be viewed.
Go to Step 8.
Looking at the same graph from the top, it can be observed that the yellow areas, which indicate large liquid volume fractions, propagate over time towards the end of the pipeline, i.e. the slugs are travelling all the way to the riser top.
It is possible to take a slice at a particular point in time and compare it against the results displayed in LedaFlow (animated 2D plots). Below is such a comparison at 200 sec. The spikes on the RESOLVE 3D plot should be compared against the green Liquid Volume Fraction (VF - total liquid) curve on the LedaFlow 2D plot.
Continuing the forecast, it can be observed that the WGR of the production fluid continues to rise. Eventually, all the zones with slug flow will merge together, and a slug flow pattern will then be observed in the riser segments.
Well test data is available for an oil well. During the well test, oil and gas were separated in a
two-stage separation process (field path to surface). The measurements available are the first
stage oil and gas rates.
The PROSPER model of this well uses a Black Oil PVT model, and the BO inputs (solution
GOR, API, gas gravity) correspond to a single-stage separation process (reference path to
surface).
In order to use this well test data in PROSPER, it is required to correct the field measured rates
to the reference separator train (i.e. to calculate the surface rates that would be measured if the
fluid was taken to surface conditions through the reference train).
For more details on the context behind the PVT Transformation, please refer to the PVT
Transformation section.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 1
3. Files Locations
The EOS.prp file required to complete this example is located in the samples installation folder.
...\resolve\Section_5-Advanced_RESOLVE_examples\Example_5_4-PVT_Transformation
Go to Step 1.
3.7.4.2 Step 1: Create a new file
Create a new project using File| New or the icon . Reset the units to Oilfield units by going
to Options | Units and clicking Reset.
Go to Step 3.
3.7.4.4 Step 3: Add the Path to Surface Objects
From the 'Add Data Objects' menu, add two Path to Surface objects using the following naming
conventions:
Field_PTS
Reference_PTS
Double-click on the Field_PTS object. Click on the Create Separator Train icon and enter the
following pressure and temperature data.
Double-click on the Reference_PTS object. Click on the Create Separator Train icon and
enter the following pressure and temperature data. This will create a single-stage separation
process. Click OK and return to RESOLVE.
Go to Step 4.
Go to Step 5.
3.7.4.6 Step 5: Configure the workflow
1. Double-click on the PVT Transformation workflow and run the workflow using the icon.
In the EOS tab, browse for the EOS.prp file, and using the drop-down menus set the following
options:
Field Path to Surface: Field_PTS
Reference Path to Surface: Reference_PTS
2. Enter the Anchor Point tab. This tab is used to define the trusted measurement that will be
used to calculate the total mass rate. In this example, the available measurement is the oil rate
at the outlet of the first stage separator.
3. Enter the Recombination tab. This tab is used to define options used to estimate the
composition of the produced fluid, by recombining the separator oil and gas to a target gas to
oil ratio. In this example, the first stage oil and gas rates are known.
Select Separator-1 for the gas rate and oil rate measurement.
Go to Step 6.
3.7.4.7 Step 6: Run the calculation and analyse the results
Click Calculate. The following results are calculated. This provides the standard condition rates
if the fluid was taken to surface through the reference path to surface, and these are the rates
that should be used in the models.
Note that as the oil rate is corrected, the WCT is also corrected (the water rate is assumed to
be constant through both processes).
Click on Detailed Results: this provides further results and in particular it calculates quantities
which may not have been measured in the field. For example, it can be observed that in the
field, the stock tank oil rate (which was not measured in this example) is 2097 STB/d, and the
stock tank vented gas (which was not measured in this example) is 0.26 MMscf/d. Oil shrinkage
is also calculated between the separator stages.
The stock tank oil rate is 2097 STB/d under the field separation conditions, and 2012 STB/d
under the reference conditions. This shows that separation conditions can have a strong impact
on surface rates (and consequently on the models using these rates).
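The impact of the separation conditions can be quantified directly from the rates quoted above; a quick check:

```python
# Impact of separation conditions on the stock tank oil rate, using the
# figures quoted above (field vs reference separation).

field_sto = 2097.0      # STB/d, field (two-stage) separation
reference_sto = 2012.0  # STB/d, reference (single-stage) separation

relative_difference = (field_sto - reference_sto) / field_sto
print(f"{relative_difference:.1%}")  # about 4% of the oil rate
```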
1. Example Introduction
This is a dexterity example to illustrate the use and functionality of data objects, and how they
can be used to solve problems within the RESOLVE framework. It will serve as a general
introduction to the use of data objects in RESOLVE, so there are no pre-requisites for this
example.
It will also illustrate the use of visual workflows and touch on the scenario management features
of RESOLVE.
The problem will use several EOS-PVT data objects as inputs to a blending object, with logic
subsequently applied to target the properties of the resulting composition.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 1, GAP: 1
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered. Note that this operation is not required if it has been done previously.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_1-Compositional_blending_and_workflow
This folder contains a file "DataObjects.rsa" which is a "RESOLVE archive file" that contains all
the files required to go through the example. The archive file needs to be extracted either in the
current location or a location of the user's choice.
Step 1 Objective: Start a new RESOLVE project and initialise the units
Start RESOLVE, and open a new project using File | New or the icon .
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g.
DataObjects.rsl).
To add a data object, go to Edit System | Add Data. Alternatively, click on the data object
toolbar button, and click on the EOS-PVT object type:
After this, click on the Resolve screen and give the new object a name - 'heavy' in this case.
Repeat this twice more to create two more EOS-PVT objects called 'medium' and 'light', to end
up with:
Three different EOS descriptions will now be imported into these objects. Double-click on the
first object ('heavy'): the data entry screen for EOS data will appear. Click on the 'Import' button
and browse to the EOS file 'heavy.prp' in the samples directory. When the file is loaded, the
data will appear on the data entry screen.
Press 'OK' on this screen, and repeat for the other two objects, importing 'medium.prp' and
'light.prp' respectively.
As expected, these EOS descriptions refer to the relative 'weights' of the fluids, with GORs
ranging from 500 to 5000 scf/STB.
We will now proceed to add a 'blending' object to the model to blend the compositions together.
As before, click on the Data Object toolbar button and create an instance of the 'Comp-Blend'
object. Accept the default name of 'Comp-Blend'.
The next step is to make the EOS objects inputs to the Comp-Blend object. This is done simply
by connecting from the EOS object to the Comp-Blend object.
Click on the first of the EOS objects ('heavy'). You will note that all the other objects are
highlighted with a red border. This is because this EOS object is potentially an input object for
any of the other objects in the system. (A list of potential inputs for all data objects, and their
behaviour as a result of the connection, is provided here). In this case it should be connected to
the Comp-Blend object.
Repeat the procedure with the 'medium' and 'light' objects, to end up with:
Blending model
We need to tell the model the proportions of each of the inputs that will comprise the final blend.
This is done by double-clicking on the 'Comp-Blend' object to invoke its data entry screen. On
this screen, note that there is a row for each of the input compositions. For each input, the rate
type can be adjusted depending on the data available (oil rate/mass rate/etc) and the rate can
be entered. Enter the data as follows (note that the rate type is set to oil rate for all three inputs):
The Comp-Blend object outputs a new EOS object containing the blended composition. This
can be read from the results (as we will see below), so there is no need to add anything else to
run the model. However, we will want to perform some operations on the blended composition
so we need to copy that into a new EOS object.
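Conceptually, the blend is a rate-weighted average of the input compositions. The sketch below is illustrative only: it assumes the entered rates have already been converted to molar rates (RESOLVE performs this conversion internally from the selected rate type), and the two-component fluids are hypothetical.

```python
# Minimal sketch of a compositional blend (illustrative, not RESOLVE's code).

def blend(compositions, molar_rates):
    """Molar-rate-weighted average of the component mole fractions."""
    total = sum(molar_rates)
    n_components = len(compositions[0])
    return [
        sum(z[i] * q for z, q in zip(compositions, molar_rates)) / total
        for i in range(n_components)
    ]

# Hypothetical two-component fluids: [C1 fraction, C7+ fraction]
heavy = [0.30, 0.70]
light = [0.90, 0.10]
print(blend([heavy, light], [1000.0, 1000.0]))  # approximately [0.6, 0.4]
```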
Add a new EOS object as before, and call it 'blended'. Connect the Comp-Blend object to the
new object, to end up with:
The model is now ready to run. Before doing this, switch the model from a forecast (the RESOLVE default) to a single solve (no optimisation) from the Options | System Options menu item. Now run the model as normal.
Blending calculation
to obtain:
Clearly, this represents the result of the blending of the three input compositions.
It is also possible to represent the results of the blend graphically, alongside the three inputs (C1
only displayed):
Note that the data for the three input compositions is reported, along with the 'blended'
composition which we would expect to be the same as that for the output composition of the
'Comp-Blend' object.
In this step a workflow will be added to reduce the GOR of the blended composition to below a
target by reducing the amount of the 'light' composition in the mixture.
The idea is to perform a solve of the system, and after each solve to read the results and
perform the calculation again, with new boundary conditions, if required.
To do this a workflow must be implemented at the 'PostSolve' level. Invoke this by clicking on
the workflow toolbar button, or by going to the Events/Actions | Workflows | PostSolve menu
item.
An empty PostSolve workflow will be displayed. The workflow at this level has a single start element.
This will be done by invoking an EOS calculation directly. Add an Operation element to the workflow, placing it on the worksheet next to the 'Start' element. To add an Operation element, click on the palette icon in the workflow toolbar ( ), click on the Operation item, and then click next to the 'Start' element in the workflow canvas.
Double-click on the new element to enter the data entry screen. Under the workflow name, give
it a label such as 'calculate blended GOR'. (A comment can also be added; this will be
displayed when the mouse is held over the item in the worksheet).
Select "Add operation"; under "category of operation", the GOR calculation will be found within "EOS thermodynamic calculations", which is the flashCalculation DLL.
Drop down the list underneath "select operation" and select "Flash to surface, results stored in
FlashResultsStd". The cell in the grid for the EOS in argument contains a drop down list which
allows an existing variable to be passed into the function. RESOLVE has populated this list with
the data objects that were added to the framework in the previous steps:
Select the 'blended' object from the list, and it will be added as an argument to the flash calculation. The function will then be assigned accordingly; RESOLVE has detected that it takes a single argument, which is an EOS-PVT data object. Select "OK".
We will now add a decision element from the palette ( ) as follows:
Place this to the right of the previous calculation element, and double-click to enter the data
entry screen. As before, give it a name (e.g. 'check GOR'). The elements will be connected into
a workflow later.
We want to check the GOR from the results of the previous calculation with our target GOR. In
the left hand side column of the grid, drop down the list and select the 'blended' object. Click into
the cell and add a dot ('.') to invoke a sub-property of the object; a list of available properties will
be displayed:
'intellisense'
Select the 'FlashResultsStd', as shown. This property contains all the results of the preceding
flash calculation. Add another dot, and select SOLGOR from the final list.
The condition to apply is that SOLGOR > 2500. On completion, the screen should appear as
follows:
This will be conditional on the SOLGOR still being greater than the target. Add an assignment
element below the decision element just added, give it a name (reduce light rate), and enter the
assignment shown:
Note that the index [2] refers to the light stream. The indexing is in the same order as the
streams appear on the data entry screen of the Comp-Blend object.
To recap, the logic that we would like to apply is that if the GOR > 2500, we would like to reduce
the rate of the light oil and redo the solve with this new rate.
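The cut-back logic can be sketched as a loop. This is an illustrative Python sketch, not the workflow itself: `solve_blend` stands in for "re-solve the system and flash the blend", and the linear GOR model is a made-up stand-in chosen so that a single 1000 STB/d cut meets the target.

```python
# Sketch of the fixed-step cut-back logic the PostSolve workflow implements.

TARGET_GOR = 2500.0  # scf/STB
STEP = 1000.0        # STB/d reduction applied on each iteration

def cut_back(light_rate, solve_blend):
    gor = solve_blend(light_rate)
    while gor > TARGET_GOR:           # same test as the decision element
        light_rate -= STEP            # same action as the assignment element
        gor = solve_blend(light_rate)
    return light_rate

# Toy linear GOR model (hypothetical): GOR rises with the light oil rate
toy_gor = lambda rate: 1000.0 + 0.5 * rate
print(cut_back(4000.0, toy_gor))  # 3000.0
```

Note that the loop stops as soon as the GOR falls below the target; it makes no attempt to hit the target exactly, which is the behaviour refined later with the bisection algorithm.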
The final step is to connect the discrete objects that were constructed in the earlier steps to
make a workflow. Connection is made by first invoking the 'link' icon , and then dragging
and releasing between elements.
Note that the 'Yes'/'No' labels on the decision element are applied arbitrarily when the lines are
drawn. If they are not correct when they first appear, they can be adjusted by double-clicking on
the decision element and changing the option in the 'Decision' button.
One of the advantages of visual workflows is that they are easily debugged. The entry of complex logic can be beset with difficulties, so the ability to query the logic of the workflow is very valuable.
On the worksheet, a breakpoint should be set at the point where the GOR calculation is made.
This is done by clicking on the breakpoint element of the toolbar , and then clicking on the
element at which the breakpoint is to be set.
Breakpoint highlighting
The RESOLVE model can now be run as normal. If the breakpoint has been set, the calculation
should be suspended as soon as it enters the workflow. The worksheet will be brought to the
front and the element at which the breakpoint has been set should be highlighted to indicate that
this is the element that is about to be stepped over:
The next stage is to run the elements of the workflow in a step-by-step manner. Before doing
this, we can look at some of the variables that the workflow is manipulating in the 'watch'
window. Invoke this by clicking on the toolbar button:
The watch window consists of two lists. The one on the left is a list of the available variables.
These are selected and moved to the right, where their values are displayed. For the purpose of
this demonstration, select the 'blended' variable (remember this refers to the EOS object which
is the result of the blending calculation). When it appears in the right hand pane, it is indicated
as an object (as opposed to a simple type such as a double-precision value, whose value would
be displayed). To view more details, double-click on the entry in the right hand pane. A new grid
will pop up in which all the properties of the object will be available:
At the bottom of the list of properties is the FlashResultsStd property, which you may recall is
interrogated in the workflow to obtain the SOLGOR from the flash calculation. If this is browsed
to now, it will be noted that all the fields are zero; this is because the calculation has not yet been
carried out.
Clear the popup (the watch window can stay open) and perform a single step of the workflow to
perform the flash calculation. The single step is invoked from the toolbar:
After the step, go back to the properties of the 'blended' property as before, and view the
FlashResultsStd. It will now be seen that the values are populated as a result of the calculation:
Note that the SOLGOR is greater than the threshold of 2500 scf/STB.
Close the popup window and take another step. The worksheet should proceed to the 'reduce
light rate' step as a result of the decision element returning 'Yes'. This element should reduce the
flow from the 'light' composition. The result of the step can again be viewed in the debugger by
selecting the 'Comp-Blend' object in the Watch window:
This logic looks like it is working, so we can remove the breakpoint and allow the calculation to
continue. A breakpoint can be removed by selecting the 'breakpoint' button from the toolbar and
clicking on the element as before. After it is removed, continue the run by clicking the 'Run'
button on the workflow toolbar:
The calculation will proceed, with the calculation log indicating multiple iterations called from this
workflow:
The final result for the required rate of 'light' in order to reduce the GOR of the blend to below
2500 scf/STB can be gleaned from the results:
The results of the run - light oil flow reduced from 4000 STB/d to 3000 STB/d
This step demonstrates the automatic generation of scenarios by variation of an input sensitivity
variable.
In this case the input sensitivity variable is to be the rate of 'heavy' oil entering the blend. For
each rate, the calculation will determine the required rate of 'light' oil to create a blend with a
GOR of 2500 scf/STB.
Step 1
Every scenario that is run will calculate a rate of light oil. It is important that this rate is reset at the start of every calculation (this is a consequence of the simple cut-back algorithm used here rather than a general requirement). In addition, the rate of heavy oil, which is the sensitivity variable, needs to be set at the start of every scenario.
every scenario.
To perform these actions, we first define our sensitivity variable by going to Variables | User
defined variables, and entering the following:
We then define a workflow which will be executed at the start of every scenario run. The 'Start'
workflow can be displayed by invoking the Events\Actions | Workflows | Start menu item, or
clicking on the toolbar button:
A single assignment element needs to be added to set both the light and heavy oil rates in the
blend object appropriately. Enter the data to obtain the following:
50 scenarios will be generated, varying the heavy oil rate from 2000 to 6000 STB/d. To do this,
invoke Scenarios | Sensitise on inputs. Enter the following data:
The contents of this screen should be self-explanatory: the minimum and maximum values of the
range of input values are entered along with the total number of scenarios for this input variable
(note that more than one input variable can be sensitised on, and this will increase the total
number of scenarios to be run multiplicatively).
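Generating the scenarios is equivalent to sampling the sensitivity variable on a uniform grid. A sketch with the values entered above (function and variable names are illustrative):

```python
# Uniform grid of sensitivity values, as produced by 'Generate Scenarios'.

def scenario_values(vmin, vmax, count):
    return [vmin + (vmax - vmin) * i / (count - 1) for i in range(count)]

heavy_oil_rates = scenario_values(2000.0, 6000.0, 50)  # STB/d
print(len(heavy_oil_rates), heavy_oil_rates[0], heavy_oil_rates[-1])
# 50 2000.0 6000.0
```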
Click on 'Generate Scenarios' to proceed. The scenario manager will be displayed, appearing
as follows:
Note that each scenario is distinguished only by its initial state (i.e. the HeavyOil rate), with the
workflows for each scenario being identical. To verify this, double-click on one of the 'Initial
State' entries:
Recall that it is this variable that is used as the heavy oil rate, by virtue of the assignment made in the 'Start' workflow set up above.
The scenarios can be run as normal. They run quite quickly, so there's no need here to resort to
a cluster. Use the Run | Run scenarios menu item, select all the scenarios in the list (default)
and press OK.
The final step is to generate a plot of the sensitivity variable (the heavy oil rate) against the
calculated variable (the light oil rate). This is done simply by using the Scenarios | Sensitivity
Results menu item:
Select the light oil rate (as shown) for the y-axis. The x-axis has only one option, which is the
input sensitivity variable. Select all the scenarios in the left hand list. When finished, select the
plot button in the lower right corner.
As expected, the more heavy oil in the mix, the more light oil is required to reach the GOR target
of 2500 scf/STB. However, the 'stepped' nature of the plot is not satisfactory, and is a result of
the algorithm only taking fixed steps until the GOR falls below the target.
As noted, the problem with the previous run was that the results, although having broadly the
correct trend, were not physically realistic in that several values of the heavy oil rate gave the
same result for the light oil rate.
This was a result of the algorithm simply cutting back the light oil rate until the target was no
longer exceeded; once the GOR was below the target, no attempt was made to try and meet the
target exactly by subsequently increasing the light oil rate.
In this step, a slightly more sophisticated algorithm will be introduced to meet exactly (or at least
within a tolerance) the target GOR.
For this step, we will be working on the PostSolve workflow. Clearly, this needs to be adjusted
for all the scenarios. For this reason, we will work on the workflow for the first scenario in the
scenario manager, and then copy it to all the other scenarios.
Start by opening the scenario manager (Scenarios | Browse/Edit, or use the toolbar) and
double-click on the PostSolve workflow of the first scenario:
We are going to introduce our bisection algorithm after the GOR has been calculated. By
passing in our current GOR, target GOR, and current value of adjustable variable (light oil rate)
we will receive back a new value for our adjustable variable for the next iteration.
Add a new Operation element (as before with the GOR calculator) and place it on the screen
near the GOR calculation element. The bisection algorithm is in the maths library
(PxMathLib.dll); load this, and select the Bisect function of the Bisection object.
Note that the first argument is an instance of a bisection object. We do not yet have one of
these, so we must create one.
The bisection object is needed as the routine is to be called multiple times (iterations) and the
state of the calculation (e.g. the current bounds of the bisection) needs to be stored between
calls. This state is stored within the bisection object, as will be seen.
To create an instance of a bisection object, use the 'variables' toolbar button. This allows local
and global variables to be defined which can be used by the workflow logic:
On the resulting screen, create a variable called 'bisect'. The variable type should be 'user type',
and the user type should be 'bisection'. Finally, this variable should be set to initialise every
timestep (not every call). This is very important, because it means that it keeps its value
between calls to the workflow and is not reset every time we execute the workflow.
While we are on this screen, add an integer variable called 'iter' and a double-precision variable
called 'NewRate'. These can be initialised every call. Their use will be explained below.
We can now populate the bisection calculator element. The arguments for the Bisect call are, in
order:
1. bisect = bisect - the object just created
2. target = 2500 - the value of the GOR target
3. initIncrement = 1000 - the initial step size for the light oil rate
4. curVal = Comp_Blend.OilRate[2] - the current value of the light oil rate (from which the next
value will be derived)
5. curTarget = blended.FlashResultsStd.SOLGOR - the current value of our target variable (the
GOR)
6. tol = 2 - absolute tolerance (2 scf/STB in GOR calculation). Don't exit until this is reached.
7. newVal = newRate - this is the rate that is returned and which should be applied to the light oil
rate if iteration is required
In addition, we set the return value of the function call to the variable we just created called 'iter'. This will be used to indicate whether the algorithm has converged or not.
For the next step, the decision element should be changed to check whether the value of 'iter' is
non-zero. As before, this will determine whether we continue with the run or iterate.
Then, the assignment of the light oil rate should be changed to make the rate = newRate (as
determined from the bisection algorithm).
Finally, the connections have to be remade to call the bisection algorithm immediately after the
GOR calculation. This is done simply with the connection tool that was used before. To remove
a connection, drag over the existing connection and it will vanish.
This now needs to be copied to all the other scenarios. In the scenario manager, right-click on
the workflow on which you were working, and select 'copy this item to all other scenarios'.
The model is now ready to run. It is possible, as before, to debug the workflow by placing a
breakpoint at the start of the workflow and running just the first scenario. This is recommended
to get a feel for the bisection algorithm and to gain debugging experience. Alternatively, the
whole set of scenarios can be run at this point.
Clearly, the results are now almost completely smooth and make more sense.
So far the starting point for the blending calculation has been entered statically, either in the
data entry screen or the initial state of a given scenario. In this step, a GAP model is added to
supply one of the composition objects with dynamic data from a model.
Add the dc.gap GAP model to the system, and connect the separator to the 'medium'
composition object. The composition will be updated at every timestep, and the rate will feed
through automatically to the blend object.
Before proceeding, recall that the last couple of steps have involved running scenarios through the scenario manager. The set of workflows stored in the forecasting part of the model is now 'out of date', as we updated the scenario workflows directly.
Therefore, go to the Scenarios | Browse/Edit, and right-click on the last scenario. We choose
the last scenario as the heavy oil rate of 6000 STB/d is suitable. From the resulting menu, select
'Set this scenario as the current schedule'.
We also want to store the GOR calculated by the workflow (which we hope will be constant at
the target of 2500 scf/STB). To this end, we create a second user variable called SolGOR:
We now adjust the workflow to assign the result of the calculation to this new variable. Return to the PostSolve workflow, and add a new assignment element just before the 'continue run' termination element is hit. At the end, the workflow should look similar to the following:
The final action that is required before running is to turn the model back into a forecast model.
This is done, as usual, from the Options | System options menu item. The forecast data
(Schedule | Forecast data) should be set as follows:
As the run proceeds, it is possible to plot the rate of the medium oil stream (which is supplied by
GAP) with the rate of light oil (calculated from the workflow). We can also plot the resulting GOR:
In this final step, we recast the model to attempt to improve the performance.
While the model developed in the previous step worked well, it did perform a lot of iteration in
which GAP was rerun despite the fact that the GAP boundary conditions were not changing.
Ideally, we would make the workflow switch execution to the 'blend' data object, as this is where
the workflow has made the change. This would mean that GAP was solved only once for each
timestep.
The PostSolve workflow, as implemented in this example, will always re-solve from the start of
the timestep. However, a workflow implemented inside a workflow driver can switch execution
to any module. In this step, we will move the current PostSolve workflow to a new workflow driver
object.
The first step is to create a workflow application object in the Resolve framework:
The resulting object can be placed anywhere in the interface, and the default name ('Workflow')
can be accepted.
We would now like to disable the current PostSolve workflow and export it to the new object.
The workflow can be disabled from Events/Actions | Options: simply uncheck the PostSolve
workflow option.
To export the workflow from the current PostSolve worksheet, enter the workflow and click on
the export toolbar button:
This will pop up a file save screen. Save the .vwk file in an appropriate location.
To import the workflow in the new object, double-click on the workflow object in Resolve and
then click on the import toolbar button:
Browse to the file that was created earlier. A couple of warning messages will be displayed,
indicating that the file has termination points that the workflow object does not recognise; this is
not a problem.
The workflow object does not recognise 'Redo Solve' as a termination point. In the workflow
object context, the termination point needs to be given a module or data object to which
execution can be switched. Add a new termination object from the palette icon ( ):
Place this on the worksheet next to the current Redo Solve element. Decouple the 'Reduce light
rate' element by clicking on the link icon , then dragging and releasing from the 'Reduce light
rate' element to the 'Redo Solve' element. Connect the 'Reduce light rate' element to the new
terminator.
Now double-click on the new terminator and, in the drop-down list, select the 'Comp-Blend' data
object. The final worksheet should appear as follows:
The final step is to force this workflow to be executed at the end of the sequence of modules; by
default, as it is not connected to anything, it will be executed in the first group of modules. This
change is made from the Run | Edit calculation order menu item. The Workflow object should
be made dependent on the 'blend' object (the final one in the sequence):
The model should then run, and the results should be the same as in the previous step.
However, it will be noted in the calculation log that GAP is solved only once per timestep, and
the entire run is therefore much faster.
The objective of this section is to demonstrate how to set up simple calculations and workflows
using Data objects in RESOLVE.
There will be certain instances where modelling objectives require other detailed calculations to
be performed as part of the primary calculations within the models. A typical example: when
optimising a gas production system, it may be necessary to perform flow assurance
calculations (such as hydrate or wax formation) and monitor them. If unfavourable conditions are
encountered at any point, certain actions can be taken to mitigate them. Data objects in
RESOLVE are particularly useful for achieving these kinds of objectives.
This example will illustrate how post process calculations can be performed within a PVT data
object to monitor wax formation based on evolving fluid compositions from an oil field. The
example used below is a cluster of nine oil wells producing from two reservoirs. The wells are
natural flowing wells at the moment but have the ability to be gas-lifted at a later date (not yet in
view). The entire system has a maximum liquid rate constraint of 50,000stb/d set at the
separator.
Wells W1 and W2 are producing from reservoir "T2" while the other wells produce from
reservoir "Res". The fluid from reservoir T2 contains more heavy fractions and is more likely to
produce wax. Presently, a maximum oil rate constraint of 1500 stb/d has been placed at the
manifold joint (w_mani), which represents the wellhead of both wells. This is to prevent high
production rates, which reduce the fluid temperature and can result in wax formation. The
wellhead chokes can be altered to achieve this. With an overall objective to optimise production
from the entire system, a fixed constraint of 1500 stb/d is not suitable. This is because, as the
fluid composition, pressure and temperature change, the wax appearance temperature may be
crossed, resulting in wax formation. In essence, the possibility of wax formation should be
dynamically checked as the calculations proceed and the optimum rate constraints applied.
To achieve this, the fluid compositions at each timestep will be monitored and the wax
appearance temperatures calculated. This will then be compared to the fluid temperature. If at
any point in time the fluid temperature falls below the wax appearance temperature, it is
required to reduce the flow rates from the system to prevent wax dropout and ensure stable flow.
This will be captured in a workflow.
The objective of the study is thus to generate a forecast that maintains production targets as far
as possible while mitigating flow assurance problems due to wax formation. If wax formation is
detected, the maximum oil rate for W1 and W2 is to be reduced by 200 stb/d in a stepwise
fashion.
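The per-timestep logic that the workflow will implement can be sketched as follows. This is an illustration of the decision rule only, not RESOLVE or OpenServer code; `wax_appearance_temp` is a stand-in for the EOS calculation performed by the PVT data object later in the example:

```python
# Sketch of the decision rule described above (illustrative, not RESOLVE code).

STEP_STB_D = 200.0  # stepwise reduction applied to the max oil rate constraint

def wax_appearance_temp(composition):
    # Stand-in for the EOS PVT data object's wax appearance calculation.
    return composition.get("Twax_degF", 0.0)

def check_wax_and_constrain(t_separator_degF, composition, max_oil_rate_stb_d):
    """Return (new_constraint, resolve_needed) for one timestep."""
    t_wax = wax_appearance_temp(composition)
    if t_separator_degF < t_wax:
        # Fluid is colder than the wax appearance temperature: cut the W1/W2
        # manifold constraint by 200 stb/d and flag GAP to be re-solved.
        return max(max_oil_rate_stb_d - STEP_STB_D, 0.0), True
    return max_oil_rate_stb_d, False
```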
2. Licenses Required
Running this example will require the following licenses to be available to the user:
1 1 1
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_2-
Dynamic_wax_appearance_and_workflow
This folder contains a file "GAP_PVT_DataObject.rsa", a "RESOLVE archive file" that
contains the RESOLVE file, the GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
Go to Step 1
Start RESOLVE and go to File | Archive | Extract. Select the GAP_PVT_DataObject.rsa file
and extract its contents into a selected location. When the "Open Master File?" prompt
appears, select "No". This step ensures the underlying models, in this case the GAP model,
are extracted into the folder.
Start RESOLVE, and open a new project using File | New or the icon and ensure that the
models are set to be reloaded when the forecast starts by going to the Options | System
Options section.
This is important in this case as the workflow manager is going to be used to modify
the client application models, so we need to ensure that we are starting from the same
initial state every time we run the forecast.
Go to Step 2
3.8.2.3 Step 2 - Setup modules
Step 2 Objective:
Import the GAP model and setup the EOS PVT Data object
The next step is to create instances of the various applications that we wish to define/connect in
RESOLVE. We shall load the instances on the main screen.
From the main menu, go to Edit System | Add Client Programs or select the icon.
From the resulting menu, select "GAP". The cursor, when held over the main screen, will change
to indicate that an instance of the application can be made.
Click on the main screen where the GAP icon is to be located, and accept the default label
("GAP").
For the file name, browse to the file "GAP4.gap" as shown above.
Note that from this screen it is possible to run GAP on a cluster. Further information on this can
be found in the Setting up a Cluster section.
When OK is selected, GAP will start and load the required case. It will then query the case for its
sources and sinks (input and output feeds) and will display these on the screen as shown
below.
The icons can be moved by selecting the "move" icon on the toolbar ( ) and then dragging
them to the required positions.
With the GAP model loaded, the next step is to add the EOS PVT data object. This can be found
by going to "Edit System | Add Data" and then selecting "EOS PVT". Alternatively, the following
icon can be used
Once this is selected, click within the RESOLVE window and give the data object a name. In
this case, the default name "EOS-PVT" will be used.
The EOS PVT object will pick up the composition of the entire fluid stream as provided by the
separator in the GAP model. All that needs to be done here is to link the separator source to it
as shown below.
If the EOS data object is clicked on, it will open up the familiar Petex fluid composition data
input where the fluid composition from the separator will be defined. Please note that there is no
need to define any composition on the interface at the moment. This will be dynamically
populated by GAP as the calculations proceed.
Go to Step 1 or Step 3
3.8.2.4 Step 3 - Define workflow item
Step 3 Objective:
Define workflow item and publish required application variables.
The next step is to setup the workflow to monitor the wax appearance temperature. This
example shall use an external workflow object to illustrate this approach. Note that the visual
workflows section under "Events/Actions" can also be used to achieve the objective.
From the main menu, go to Edit System | Add Client Programs or select the icon. From
the resulting menu, select "Workflow". The cursor, when held over the main screen, will change
to indicate that an instance of the application can be made. Click on the main screen where the
Workflow icon is to be located, and accept the default label ("Workflow").
Link up the GAP item to the Workflow item. The resulting connections should be as below:
With this in place, the next step is to define the workflow. As defined earlier, the objective is to
monitor the wax appearance temperature. If there is a likelihood of wax developing, the maximum
oil rate constraint for wells W1 and W2 should be reduced by 200 stb/d.
To begin, variables have to be published from the different client modules prior to using these
variables in RESOLVE. This procedure is described in the "Publish Application Variables"
section.
To start using the tool select Variables | Import application variables. This will bring up the
interface below where the variables to be used from the client modules are published.
Separator temperature
Maximum oil rate constraint for joint "w-mani"
Liquid and oil rates for joint "w-mani"
On the Solver (output) variables tab, publish the liquid and oil rates for the manifold joint
(w-mani) and also the separator temperature.
On the constraints tab, click on re-scan to populate the available constraint variables in GAP
and then publish the maximum oil rate constraint for joint w-mani.
Select "OK" and make the variables available for plotting by selecting "Add to plot".
The next step is to publish a user-defined variable which shall be assigned to the wax
appearance temperature used by the workflow. To define this, select "Variables | User
defined variables". Once selected, a generic variable interface shall appear where any
variable can be declared. In this case, we shall call the wax appearance temperature Twax as
shown below.
Go to Step 2 or Step 4
3.8.2.5 Step 4 - Setup workflow
Step 4 Objective:
Setup Workflow and actions.
To set up the workflow, double-click on the Workflow item within the RESOLVE window. A similar
example of how to set up visual workflows can be found in Example 2.3.
The first step is to calculate the wax appearance temperature (Twax). This will be done using an
Operation element. To add an Operation element, click on the palette icon in the workflow
toolbar ( ). Click on the Operation item, and then click next to the 'Start' element in the
workflow canvas.
Once it has been added, double-click on the Operation element and define a label, e.g. Calc
Wax Temp. Click on 'Add global function', select 'EOS thermodynamic calculations' as the
category of Operation, and select the operation 'Return the wax appearance temperature
for a composition':
The input EOS will be available in the EOS_PVT data object as provided by GAP at each time-
step. We would also like to assign the calculated temperature to the user-defined variable Twax.
Both the EOS PVT data object and the Twax variable will be available as part of the RESOLVE
user variables and need to be entered in the appropriate sections as shown above.
The next step is to add a decision element to monitor whether the fluid temperature at the
separator falls below the calculated wax formation temperature. To add a decision element,
click on the palette ( ) and then click on the 'If...then' icon:
Place this below the previous calculation element, and double-click to enter the data entry
screen. As before, give it a name (e.g. 'Any Wax?'). The elements will be connected into a
workflow later.
Double-click on the decision element and input the condition as shown below.
If the condition is true, the action will be to reduce the maximum oil rate constraint on wells W1
and W2 by 200 stb/d. This is executed by inserting an assignment element from the palette (
) as shown below.
Double-click on the assignment element and define the corresponding action. The label is also
changed as specified below.
The next step is to insert Terminator elements from the palette ( ) as shown below.
With these steps completed, the various workflow items can now be linked together using the
link icon . The linking shall be done from the Start element towards the Terminators. Note
the flow diagram progression: if the condition is true, the workflow proceeds to the action; if
false, the run continues.
It is sometimes possible that the progression of the logic from the decision element to the
actions is not correctly represented when the linking is done, e.g. the action being linked as a
"No" instead of a "Yes" or vice versa. This can easily be rectified by double-clicking on the
decision element and setting the status of the action elements correctly as shown below.
The last step is to ensure that if the condition is true and the action is taken, the GAP model is
re-solved. This is similar to Redo solve and is achieved by double-clicking on the
"Terminator - 2" element and setting it to "GAP".
Once this is done, select "OK" and the workflow should be as shown below.
Go to Step 3 or Step 5
3.8.2.6 Step 5 - Enter schedule
Step 5 Objective:
Setup the RESOLVE Schedule
Before the simulation is started, it is necessary to specify the run schedule in RESOLVE.
For the purposes of this example, we will be making use of the basic scheduling only.
To set up the basic RESOLVE schedule, invoke the schedule screen from the main menu using
Schedule | Forecast data.
The start date can be selected from the start dates of the various connected modules by clicking
on the "Select from client modules" button. This will display a screen allowing the user to
select the required start date from a list of the various model start dates.
The timestep and schedule duration are also entered here as shown.
Go to Step 4 or Step 6
To run the forecast, press the icon. Note that the run can be paused or stopped with the
other toolbar icons.
The calculated results can be observed by going to Results | View forecast plots. Shown below
are the plots of separator temperature and the oil rate at the manifold joint. It can be seen that
the oil rates are reduced by 200 stb/d at different points in time until some time in 2011, when the
rate is reduced to zero as there is then always a potential for wax formation, even at low rates. It
can be confirmed from the solver logs that at this point the GAP model is re-solved three times
until the wells are practically closed.
3.8.3.1 Overview
1. Example Introduction
The objective of this section is to demonstrate how to set up a simple tight reservoir model in
RESOLVE. As explained in the User guide, the Tight reservoir workflow is a feature within
RESOLVE which can be applied to tight reservoirs (tight gas, shale gas, tight oil etc.)
where standard transient analytical inflow models do not properly capture the inflow response
of the system.
The following example is a cut-down version of a real field example which has been structured to
show data entry and how the history match is performed. The focus is on developing an inflow
performance relationship for a horizontal well with multiple hydraulic fractures in a tight gas
system (permeability of 1 microdarcy).
Once the PdTd curves have been matched to field data, the next step is to integrate this
reservoir object with a GAP model and perform a prediction. Although the focus here is on a
single well system, this procedure can be readily extended to a large number of wells using a
workflow.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE REVEAL
1 1
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_3-Tight_Reservoir
This folder contains a file "Tight reservoir.rsa" which contains the RESOLVE archive for the
completed example. Extract this RESOLVE archive model to the appropriate destination and
start with a new RESOLVE file. Go to Step 1
Start RESOLVE, and open a new project using File | New or the icon
The next objective is to define the PVT model for the tight reservoir system. The gas PVT model
will be used and will be defined in RESOLVE using the black oil data object.
Once this is selected, click within the graphical interface to insert the data object. Double click
on the BO object to define the fluid PVT. The PVT type will be set as Gas and once this is done,
the dry gas PVT data will be entered. Data entry is similar to Black oil PVT in all the IPM tools.
The data below can be entered for the PVT model.
PVT Model
With the black oil PVT model, the user can either use the Black oil model directly without tuning,
or match it to measured PVT data.
Go to Step 2
3.8.3.3 Step 2 - Setup tight reservoir
Step 2 Objective:
Setup the Tight Reservoir data object
The next step is to set up the Tight reservoir data object in RESOLVE and link it to the Black
oil PVT data object that was created in the previous step.
To do this, click on the data objects selector as shown below. An alternative way is to go via the
main menu and select "Edit System | Add Data | Tight Reservoir".
Once done, insert the Tight Reservoir data object into the model and then link the BO PVT data
object to it using the link button.
By double clicking on the Tight reservoir data object, it is possible to see that the PVT section
has been pre-defined based on the options selected for the BO PVT data object.
Go to Step 3
The next step is to set up the reservoir model. This is structured into four sections: the
dimensions of the well drainage area, petrophysical parameters, reference conditions and the
relative permeability information. The average data set for the reservoir block used in this
example is provided below.
Porosity: 0.01 (fraction)
Horizontal permeability: 0.0001 mD
Vertical anisotropy: 1
Reservoir pressure: 10000 psig
Reservoir temperature: 200 degF
Depth: 20000 feet
Rock compressibility: 1e-5 (1/psi)
To set up this section, double-click on the Tight reservoir module and click on "Reservoir". The
well in question is draining 50 acres with a thickness of 100 ft. A length/width ratio of 1 is
assumed. This creates a square drainage volume, and the orientation can be changed to plan,
horizontal or 3D view.
Next, specify the reference conditions of the system. No-flow and fixed boundary conditions can
be introduced at this point. The reference temperature and pressure should be defined. If
contacts are available, these should be entered to initialise the pressures and saturations within
the blocks.
The relative permeability model implemented is the Stone 1 model. Data to define for the model
are shown below and can be obtained directly from SCAL analysis.
Go to Step 4
3.8.3.5 Step 4 - Define well parameters
Step 4 Objective:
Define the well parameters.
The well configuration, including fractures (if any), is defined within this module. The
horizontal well to be modelled has a 1250 ft horizontal section which is centrally positioned in the
Y and Z directions.
Any fracture information is described within the next tab. The defining parameters include the
fracture half-length and height, the number of fractures and the dimensionless fracture
conductivity. This generates evenly spaced fractures along the well. It is important that the
fracture dimensions do not exceed the reservoir geometry.
Go to Step 5
3.8.3.6 Step 5 - Import well history
Step 5 Objective:
Import well history
The next step is to input the production history. This is found in the associated Excel
spreadsheet in the folder location, called "TightReservoir_History.xls". Data entry can be based
on cumulative volumes or rates. Generally, the production data should be entered in terms of
all the rates produced and the flowing bottom hole pressure (FBHP). If tubing head pressures
(THP) are available instead, it is possible to convert the THPs to FBHPs using the BHP from
WHP calculation in PROSPER.
For the example at hand, we have bottom hole pressures and instantaneous volumes available
in the spreadsheet. It is important to set the units and the rate type before pasting the data into
RESOLVE.
Copy the data from the spreadsheet, click on the left-hand arrow as shown below and select the
"Paste" icon. This should populate the data into the cells. There are also options to edit the
entered data (copy, paste, delete etc.). From this interface it is also possible to export the data
and the model built so far to REVEAL and create a separate REVEAL file. It is also possible
to run the simulation as it is; in this case, however, we need to match the model first.
Go to Step 6
3.8.3.7 Step 6 - Analysis
Step 6 Objective:
History matching and analysis
The next objective is to history match the model. This will be done by creating the PdTd curves,
which are obtained by running the simulation in REVEAL in the background. The simulation
response can then be compared to the history data and a match can be performed manually or
automatically.
Select "Create PdTd". This should create a REVEAL model with all the reservoir and well
parameters entered and run the model to create the PdTd responses. It will be seen that a
mean rate is defined by the model from the production history and the simulation control
specified. It is, however, possible to modify the reference rate. In general, a stable low reference
rate provides good quality PdTd curves. Also note that the process of creating the PdTd curves
takes some time to complete.
Once the calculations are finished, select "Update plot" to show how the simulation and history
curves compare.
Matching the curves can be done manually or automatically. Manual matching involves directly
changing the permeability, porosity or PVT modifiers, or even applying early/late-time weighting.
Each time a manual change is made, select "Update plot" to see how the curves compare.
The Auto Match feature can also be used here; it needs some guidance in the form of a good
initial starting point before it can do the match. Select "Auto Match" and repeat the process a
couple of times to match the simulated response to the history data. This is done by applying
permeability and porosity multipliers. As mentioned in the User guide, the auto match employs a
least-squares regression algorithm. It is important that the simulated results and history data are
reasonably close from the start for reliable results. This means the model should be properly
checked in terms of input data quality, PVT etc. to prevent inconsistent results from the
regression.
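The idea behind the least-squares match can be illustrated with a toy model. The sketch below is not Petex's algorithm: `simulate` is a deliberately simple stand-in for the REVEAL run that generates the PdTd response, and a brute-force grid search stands in for the regression:

```python
# Illustration of least-squares matching with permeability/porosity multipliers.

def simulate(times, k_mult, phi_mult):
    # Toy response: drawdown shrinks as the permeability/porosity product grows.
    return [5000.0 - t / (k_mult * phi_mult) for t in times]

def auto_match(times, history, k_grid, phi_grid):
    """Grid-search the multipliers minimising the sum of squared residuals."""
    best = None
    for k in k_grid:
        for phi in phi_grid:
            sim = simulate(times, k, phi)
            sse = sum((s - h) ** 2 for s, h in zip(sim, history))
            if best is None or sse < best[0]:
                best = (sse, k, phi)
    return best[1], best[2]
```

With history generated at k_mult = 2.0 and phi_mult = 1.0, the search recovers those multipliers exactly; with noisy field data and a poor starting region the answer becomes unreliable, which is why the text stresses having a reasonably close initial match.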
Once a match is obtained, the PdTd curves are ready and can be exported for further use. For
example, one can plot transient IPR curves obtainable from the end of the history period. This
gives an idea of the well inflow response when the object is coupled to other models. Beyond
this, the PdTd curves can also be inspected, plotted etc. under "Edit PdTd".
It is important to note that a transient IPR is different to a typical analytical (steady-state) IPR.
The calculation procedure for a transient IPR is explained here.
A transient IPR requires the entire rate history to be present to calculate the FBHP for future
times. The IPR itself is at the end of the time-step rather than at the start of the time-step. The
following picture illustrates how a transient IPR is calculated for different times after the end of
history:
For example, the transient IPR at time 'time1' above is calculated by assuming a series of
constant rates from the end of history to the time 'time1'. Each constant rate will have an
associated FBHP calculated at the end of the time-step using the full superposition of the
historical rate values. These calculated FBHPs are then plotted against the rate values to give
the transient IPR at time 'time1'. This procedure is repeated to calculate the IPRs for the
different times. The resulting transient IPRs for these different times are shown below:
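The superposition calculation described above can be sketched as follows. This is illustrative only: `dp_unit` is a made-up unit-rate response standing in for the PdTd curves generated by REVEAL, and all function names are hypothetical:

```python
import math

def dp_unit(dt):
    # Made-up unit-rate pressure drop (psi per stb/d) vs elapsed time;
    # stands in for the PdTd response generated by REVEAL.
    return 5.0 * math.log(1.0 + dt)

def fbhp(p_init, rate_history, t_eval):
    """Superpose the unit responses of each rate change up to t_eval.
    rate_history: [(start_time, rate), ...] piecewise-constant rates."""
    dp, prev_rate = 0.0, 0.0
    for t_start, rate in rate_history:
        if t_start >= t_eval:
            break
        dp += (rate - prev_rate) * dp_unit(t_eval - t_start)
        prev_rate = rate
    return p_init - dp

def transient_ipr(p_init, history, t_hist_end, t_ipr, trial_rates):
    """IPR at t_ipr: append each trial constant rate to the full rate history
    and evaluate the FBHP at the end of the step."""
    return [(q, fbhp(p_init, history + [(t_hist_end, q)], t_ipr))
            for q in trial_rates]
```

Because every trial rate is superposed on the complete historical rate schedule, the resulting FBHP vs rate points honour the full transient behaviour, which is why the IPR sits at the end of the time-step rather than the start.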
Once the analysis is complete, press "Finish" and save the RESOLVE file.
Moving forward, the information in the entire Tight Reservoir object can be exported to a .rdo file
and imported into GAP:
Refer to the GAP manual for information on importing this object and setting up the well.
3.8.4.1 Overview
1. Example Introduction
The objective of this section is to demonstrate how to enter data and perform a Multi Well
Allocation (MWA) calculation in RESOLVE. As explained in the user guide, the MWA Data
Object is used to allocate the total production measured in a field to individual wells using an
integrated model. This is a powerful tool which proves valuable in situations where we do not
have direct measurements for all the wells in the field and need to achieve a better
understanding of how the field is performing simply based on total field rates. Access to the
production on a well-by-well basis is important as this allows a variety of tasks to be achieved
(optimisation, history matching etc).
The field in question is an oil field with two reservoirs and six wells in total. Both reservoirs have
reservoir pressures greater than the bubble point pressure, and hence the producing GORs for
the wells in these reservoirs are known (reservoir A = 800 scf/STB and reservoir B = 500 scf/
STB).
The available field data are measurements at the well head (FWHP, FWHT), choke
measurements (choke upstream and downstream conditions) and gauge measurements
(pressure and temperature) for each well. The measured total phase rates (oil, water and gas)
at the separator are also known.
The objective of this example is, given these field measurements, to determine the individual
phase rates for each well using the MWA and Field Data objects in RESOLVE. In particular,
since the GORs for both reservoirs are known, the variables that need to be calculated are the
liquid rates and WCTs for each well. The example will also use a workflow to run the MWA
calculation, demonstrating some of the properties/functions available to run the calculation via a
workflow.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE GAP
1 1
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\samples\Resolve\Section_6-Data_Objects\Example_6_4-Multi_Well_Allocation
This folder contains the completed RESOLVE model for this example. Extract this RESOLVE
archive model to the appropriate destination and start with a new RESOLVE file (go to Step 1).
3.8.4.2 Step 1 - Start a new RESOLVE file
Step 1 Objective: To start a new RESOLVE file and add the data objects/applications.
Start RESOLVE, and open a new project using File | New or the icon
The next step is to add the MWA data object and the Field data object. To do this, browse to
Edit System | Add Data | MWA or go to the option in the menu bar:
Click anywhere on the RESOLVE canvas: this will add the MWA object.
Next, add the Field data object (Edit System | Add Data | Field Data or via the menu bar):
Click anywhere on the RESOLVE canvas: this will add the Field data object.
We will now add the GAP instance in RESOLVE and associate the GAP model to it. From the
main menu, go to Edit System | Add Client program or select the icon on the shortcut bar
and from the resulting menu, select "GAP". Click on the RESOLVE canvas to add the GAP
instance.
Double click on the GAP instance and associate the GAP model provided with the example "Oil
Field.gap". Press OK
At this point it is a good idea to save the model (Go to File | Save).
3.8.4.3 Step 2 - Field data object
Step 2 Objective: To setup the Field data object and add the MWA object.
Double click on the Field data object to open its interface. We will enter the following field
measured totals:
Click on 'Go' to add the separator and all wells from the GAP model as equipment in the Field
data object.
This will create two additional tabs in the Field data object for the equipment. Click on the 'Well'
tab. We will enter the following available measured data for the wells:
Remember to scroll down and enter all the data provided in the table above. Click on the
different well names from the list on the left hand side and enter the data for the remaining wells.
No data will be entered in the separator tab for the time being.
This completes setting up the field data object. Click OK to go back to the main RESOLVE
screen.
3.8.4.4 Step 3 - Workflow
Step 3 Objective: To setup the visual workflow that will be used to run the MWA calculation.
We will now add a workflow element which will be used to run the MWA calculation. Add the
Workflow object (Edit System | Add Client Program | Workflow or via the menu bar):
Double click on the Workflow element to start building the workflow. The steps of the workflow
will be to load the field data into the MWA object in RESOLVE, and then edit the tool
interactively. Note that it is also possible to do this directly in RESOLVE by connecting the Field
data object to the MWA tool directly and running the calculation via the MWA tool.
Click on the palette and add the two operation elements and the terminator as shown.
The first operation element will be used to load the field data into the MWA object. Double click
on 'Operation-1' element. Change the workflow item name to 'Load Field Data' and click 'Add
global function' to load the measured data as shown:
Click on OK and double click on 'Operation-2'. This operation element will have a function that
allows the MWA tool to be edited interactively.
The MWA calculation will be run from the visual workflow defined in the previous step. Click on
the run button in the workflow to perform the calculation:
This will open the MWA object interface since this is called through the workflow. Given the
available data here, the phase rates can be calculated for each well using two methods:
1. VLP
This method uses the entered WHP and gauge pressure and estimates a rate using a VLP
curve:
2. Choke
This method uses the choke measurements and estimates a rate that satisfies the choke
performance:
An additional method of calculating the rate is the IPR method which is not being used here. The
IPR method requires a measurement of the FBHP and using this measurement and the IPR
curve, the rate can be calculated. If a well has an ESP, then the ESP inlet and outlet conditions
can be used to estimate the rate using the pump performance curve.
As explained in the overview to this example, since we know the producing GORs for the wells,
we will be fixing these values in the MWA calculation screen. We will be using the VLP and
choke methods here:
Enter the GOR values as 800 scf/STB and 500 scf/STB for the reservoir A (Wells 1A, 2A and
3A) and reservoir B (Wells 1B, 2B and 3B) wells respectively. The choke and VLP methods are
activated by entering the number 1 in the options as shown above. Enabling the 'VLP/IPR guidance' for all the wells provides an initial estimate of the rate for the non-linear regression (the VLP/IPR intersection), which helps the algorithm find an appropriate solution.
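The idea behind the VLP/IPR guidance can be pictured with a minimal sketch: the intersection of the outflow (VLP) and inflow (IPR) curves gives the natural flow rate, which seeds the regression. Both curve shapes and every coefficient below are invented for illustration and are not values from the example model:

```python
# Sketch of a VLP/IPR-intersection initial estimate. All coefficients are
# hypothetical and chosen only so that the two curves cross once.
def ipr_pressure(q):
    """Flowing BHP the reservoir delivers at liquid rate q (psig, assumed)."""
    p_reservoir, productivity_index = 3500.0, 4.0   # psig, STB/day/psi
    return p_reservoir - q / productivity_index

def vlp_pressure(q):
    """Flowing BHP required to lift rate q to surface (psig, assumed shape)."""
    return 1200.0 + 0.02 * q + 1.0e-5 * q ** 2

def intersection_rate(lo=1.0, hi=14000.0, tol=1e-6):
    """Bisection on the pressure mismatch to locate the crossing point."""
    def mismatch(q):
        return vlp_pressure(q) - ipr_pressure(q)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mismatch(lo) * mismatch(mid) <= 0.0:
            hi = mid          # root lies in the lower half
        else:
            lo = mid          # root lies in the upper half
    return 0.5 * (lo + hi)

q_guess = intersection_rate()
print(round(q_guess, 1))  # the rate where the two assumed curves cross
```

With the assumed curves there is a single crossing point; a rate estimate of this kind is what a regression would then use as its starting guess.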
As can be seen from the results, the WCT for wells in reservoir A is ~8.3% and for wells in
reservoir B is ~8%. The oil, water and gas phase rates are also calculated by the MWA tool.
The calculated values for the gauge pressure, temperature and choke outlet pressure by the
MWA tool agree well with the field measurements.
Clicking on the 'Regression Options' tab shows the overall results and also the Chi2 value; the Chi2 value is an indicator of the 'goodness of fit' of the regression algorithm. The smaller this number, the better the fit, which means that the MWA calculated phase rates will be reliable.
Here, a small value is indeed obtained which means that the MWA calculation has converged to
a satisfactory solution.
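The 'goodness of fit' idea can be pictured with a generic sketch. RESOLVE's exact weighting is not documented here, so the residual form below (squared error scaled by the measured value) and all the numbers are illustrative assumptions:

```python
# Generic chi-squared style misfit between field measurements and model
# values. The weighting (squared error divided by the measured value) and
# the numbers are illustrative assumptions, not RESOLVE's documented form.
measured   = {"WHP": 450.0, "gauge_pressure": 2100.0, "choke_outlet": 180.0}
calculated = {"WHP": 446.0, "gauge_pressure": 2112.0, "choke_outlet": 183.0}

chi2 = sum((calculated[k] - measured[k]) ** 2 / measured[k] for k in measured)
print(round(chi2, 3))  # a small value indicates a good fit
```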
Depending on the amount of data entered and the variables to be calculated, multiple solutions can exist, since the MWA uses mathematical regression to calculate the individual phase rates for the wells. Therefore, we use a combination of different methods (choke and VLP) to independently calculate the rates such that the algorithm converges on a unique solution. Additionally, by fixing known quantities (i.e. the GORs for the wells), we move further towards a unique solution for the well WCTs and liquid rates, removing additional unknowns that would otherwise need to be calculated.
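The effect of fixing a known quantity can be seen in a toy example: with measured liquid and gas totals but three unknowns (oil rate, water rate, GOR), the system is underdetermined; fixing the GOR leaves a well-posed system. All numbers below are invented for illustration only:

```python
import numpy as np

# Toy illustration of why fixing a known GOR helps the regression:
# measured liquid and gas totals give only two equations, so q_oil,
# q_water and GOR together cannot be resolved uniquely. Fixing GOR
# leaves a well-posed 2x2 linear system. All numbers are hypothetical.
q_liquid_meas = 1000.0     # STB/day, measured liquid total
q_gas_meas = 640000.0      # scf/day, measured gas total
gor_fixed = 800.0          # scf/STB, known from well tests

# Equations:  q_oil + q_water = q_liquid ;  gor_fixed * q_oil = q_gas
A = np.array([[1.0, 1.0],
              [gor_fixed, 0.0]])
b = np.array([q_liquid_meas, q_gas_meas])
q_oil, q_water = np.linalg.solve(A, b)
wct = q_water / (q_oil + q_water)
print(q_oil, q_water, wct)
```

With the GOR left free, many oil/water splits would fit the liquid total; fixing it pins down the split, which is exactly why fixing known GORs here helps the regression converge to a unique WCT and liquid rate.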
Press OK and save the RESOLVE model. This completes the MWA example.
3.8.5.1 Introduction
1. Example Introduction
This example is designed to illustrate the usage of the Well Builder data object in RESOLVE.
As explained in the user guide, this data object is used to create exportable REVEAL (.XML)
well descriptions and also to couple these with REVEAL specific RESOLVE data objects such
as the SAGD Data Object.
The objective of this example is to demonstrate how to set up a few detailed well descriptions in RESOLVE. In particular, four well objects will be created that represent a typical SAGD system:
In this example, two wells will be designed: an upper injector well and a lower producer well. There are two stages of production. The first stage is a pre-heating phase in which both wells circulate steam (without any steam entering the reservoir, i.e. the wells do not have reservoir connections), causing the reservoir to heat up. This corresponds to well descriptions 1 and 2 above. The second stage is a production phase in which only the top injector injects steam into the reservoir and the lower well produces oil by the process of gravity drainage. This corresponds to well descriptions 3 and 4 above.
This example provides step by step instructions on how to build the Well Objects only. The next
example explains some of the theory behind SAGD modelling, the purpose of the SAGD Object
in RESOLVE and also some instructions on setting up the SAGD Data Object.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE
1
Before starting with this example, it will be necessary to make sure that the PxWellObject Data Object driver is registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_5-Well_builder
This folder contains a file "Well_builder.rsl" which contains the RESOLVE file for the completed
example.
3.8.5.2 Step 1: Producer pre-heater well descriptions
Start a new RESOLVE instance and create a new file.
From the RESOLVE Edit System Menu select the Add Data sub menu and select the Well
Object menu command (alternatively select the Well Object entry from DataObject toolbar drop
down menu):
Producer Preheater
Click anywhere on the RESOLVE canvas to add the Well Object and enter the name
Producer_ph when prompted. Double click on the Well Object icon to open the Well Builder
User Interface.
General Description
Note the User Interface contains a row of Tab items from left to right. Initially the main
Completion Designer Tab is disabled.
Select the Well name field and change the name from Well_1 to producer_ph. This field
represents the unique identifier for this Well Object.
The remaining attributes on this screen are optional and we will leave them empty for the
purpose of this tutorial.
Click on the 'Next' button or select the 'Reference location' tab to enter information describing the location and situation of the well.
Reference Location
This screen defines the situation and reference datum for the current well. Leave the Situation setting as the default (Land) and change the reference datum (ZMD) to Kelly Bushing.
The table allows us to define the elevation offset of our selected reference datum (KB) above/
below an absolute reference and similarly the earth reference datum Mud Line (Ground Level)
above/below the absolute reference. A separate absolute reference is not a requirement so we
can enter the following information.
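The datum bookkeeping on this screen amounts to simple offset arithmetic. A sketch with invented offsets (not the tutorial's values), treating MD as vertical depth for simplicity (i.e. a vertical hole):

```python
# Datum bookkeeping sketch: a depth measured from the Kelly Bushing (KB)
# restated relative to the Mud Line (ground level). The elevation offsets
# below are invented example values, not the tutorial's.
kb_above_absolute = 25.0       # ft, KB elevation above the absolute reference
mudline_above_absolute = 0.0   # ft, ground level at the absolute reference

md_from_kb = 5500.0            # ft, a depth measured down from KB
depth_below_mudline = md_from_kb - (kb_above_absolute - mudline_above_absolute)
print(depth_below_mudline)     # depth restated below ground level
```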
Deviation Survey
A range of survey type data-sets are supported and it is possible to convert between these by
changing the survey type selection.
Select 'Survey type' and choose 'Relative' to change the grid headings to MD, Inclination and Azimuth. Enter the reference (XYZ) as (0, 0, 0). Ensure the MD unit setting is set to feet and enter the Producer survey from the left hand side of the table below.
Producer                         Injector
MD (ft)  Inclination  Azimuth    MD (ft)  Inclination  Azimuth
0        0            90         0        0            90
100      0            90         100      0            90
At this stage the graphic should be updated to display an indication of the trajectory path and
the Completion Designer Tab should now be enabled. Select the Completion designer Tab
heading.
Completion Designer
For this tutorial we are going to build the well schematic description using the grid only but it
should be noted it is possible to switch to a point and click approach (using equipment menu
and/or the Equipment Browser) at any time.
Click on the Add command on the Data Grid and select DrillRegion from the SubType drop down list as shown below.
The DrillRegion attributes screen will appear. Here, we are going to drill a 10 inch hole from
the Mud Line down to the bottom of our survey. Enter 10 inches for the diameter and leave the
Length and Top (MD) attributes unchanged.
The value of the heat transfer coefficient in the 'Heat Transfer' tab will be left at its default value (8 BTU/hr/ft2/F). Click OK.
To review the Drill Region or any other equipment item attributes at any time, select the corresponding row of the equipment in the Data Grid and select the Edit command.
Next select the Data Grid Add command (as before) and select equipment SubType Casing.
At this stage we could directly enter a Casing OD/ID to fit the hole and click OK; here, however, click on the Tubular Goods Lookup and select the record corresponding to 9 5/8 in. 72 lb/ft. Click OK.
Enter the 'Top MD' as 100 ft and part length as 5400 ft. Click on OK to close the Casing
Attributes screen. At this stage we have extended Casing along the length of the drill region.
Now select the Data Grid Add command as before and select equipment SubType Tubing. Again click on the Tubular Goods Lookup and select the record corresponding to 3 1/2 in. 12.7 lb/ft. Click on OK to return to the tubing attributes screen. Enter a Part Length of 5400 feet and set the Top (MD) to 100 feet. Click OK.
Next select the Data Grid Add command (as before) and select equipment SubType Screen. Leave the Bottom (MD) as 5500 feet and set the Part Length to 3826 feet.
Click on the icon in the Menu Bar commands area. A warning message will appear to
indicate our well is not completed. This is OK as we are defining our Pre-heating phase well
description.
Close the main Well Builder window using the File | Exit command and choose Yes when
prompted to save the changes. We have now successfully built our first well description.
Additional notes
1. The addition of the Mule Shoe, WEG, Ball Plug or Screen equipment types as the deepest
base pipe equipment type (base of tubing) ensures the base pipe in the exported REVEAL
simulation model is not isolated. This is because the base pipe by default terminates closed, and in order to allow flow into the base pipe, any one of these equipment types needs to be added.
2. Here, we have effectively run our 3 ½ in. tubing through the Screen equipment at 5500 ft. In reality, the tubing and screen do not overlap; however, for the Well Builder object this highlights the extended tubing concept, which encapsulates the ID/OD, roughness and heat transfer attributes of the base pipe or second tubing string. All completion equipment 'jewellery' items (e.g. Pump, SC-SSSV, Mandrel and Packer) added to the well schematic will inherit the ID/OD and roughness attribute values of the underlying tubing equipment item.
3. The equipment needs to be added in the order of decreasing pipe diameters as is done
here, i.e., starting from a drill region and working inwards towards the screen.
4. The data grid automatically rearranges the equipment in order of increasing Top MD. Where two equipment items have the same Top MD, there is no particular order in which the table is arranged.
Click anywhere on the RESOLVE canvas to add the Well Object and enter the name
Injector_ph when prompted.
Double click on the Well Object icon to open the Well Builder user interface. As before, in the General description tab change the Well name from Well_1 to injector_ph and click on Next.
As before change the ZMD setting to Kelly Bushing and enter the following.
Click on Next (or select the Deviation Survey) Tab heading. Change the Survey type to
'Relative', ensure the MD unit is set to feet and enter the Injector survey from the Well Survey
Table in Step 1. The Start X, Start Y and Start Z fields will be left to 0, 0, 0.
Click on the Completion Designer Tab. The equipment for the 'injector_ph' well will be identical to the 'producer_ph' well. We will use the 'Copy'/'Paste' functionality of the data grid to copy the details from the 'producer_ph' well and paste them into the 'injector_ph' well.
Firstly, ensure the ID/OD unit selection in the Data Grid is set to inches and Top, Bottom, Length
units are in feet. Then close the 'injector_ph' well description using the cross at the top right
corner and when prompted, save the changes.
Double click on the 'producer_ph' well created earlier and in the Completion Designer tab,
select the rows of data and press 'Copy':
Close the 'producer_ph' well description and when prompted, do not save the changes.
Proceed to the Completion Designer tab in the 'injector_ph' well and click on the 'Paste' button.
The data grid will finally appear as follows:
Note that the Total MDs of our two pre-heating wells are identical, so no adjustment is required; however, the two wells have different paths, i.e. the deviation surveys are different.
Additionally, the Paste command can be used to transfer sub-assemblies or entire well
descriptions between separate Well Objects (with different underlying trajectories) and between
Resolve Models. Any equipment item not present in the paste command target is automatically
cloned and added to the model.
Close the 'injector_ph' well description and when prompted, save the changes. This completes
the injector pre-heater well.
Click on the Completion Designer Tab. Ensure the ID/OD unit selection in the Data Grid is set to inches and the Top, Bottom, Length units are in feet.
Enter the following equipment in the data grid in the order given below (largest to smallest
diameters):
Sub type  StringID    Description           ID     OD     Top  Bottom  Length
Casing    Casing      9 5/8 in. 72 lb/ft    8.125  9.625  150  5500    5350
Tubing    tubingBase  3 1/2 in. 12.7 lb/ft  2.75   3.5    150  5500    5350
Adding the DrillRegion, Casing, Tubing and Screen types has been explained in Step 1. The
IsolationPacker and Perforation types are explained here.
In the IsolationPacker screen, enter the bottom MD and part length as shown below:
For the perforations, enter the bottom MD and part length as shown:
Click on the icon in the Menu Bar commands area. An error message will appear to
indicate the flowing radius of the Completed section of the well is not defined.
Select the Perforations row from the Data Grid and select 'Edit' (alternatively double click on the
completion/Perforation icon in the well schematic). Navigate to the Completion/Perfs screen
and enter 5 inches as the Flowing Radius (leave the remaining default settings and ensure
Completed selection = Yes).
Close and save the Producer well description when prompted. This completes the setup of the 'Producer' well.
Right Click on the 'Injector_ph' Well Object and select copy object to clipboard:
Next right click on the 'Injector' Well Object icon and choose 'Paste object from clipboard'. This
time we have cloned the entire well including the underlying trajectory.
Double click on the Injector Well Object. In the 'Completion Designer' tab enter the data below to
add the missing Packer and completion icon.
Sub type         StringID  Description     ID    OD   Top   Bottom  Length
IsolationPacker  Casing                    2.75  3.5  835   836     1
Perforations     Inflow    Perforations#1  2.75       4050  5500    3826
Select the Perforations row (record) from the Data Grid and select edit (alternatively double
click on the completion/Perforation icon in the well schematic). Navigate to the Completion/
Perfs screen and enter 5 inches as the Flowing Radius (leave the remaining default settings
and ensure Completed selection = Yes) as before.
Click on the General Description tab and change the Well name to 'Injector'. Note that any of the
Well descriptions created so far can be exported individually in REVEAL format by going to File
| Save As from the Well Object menu bar.
Close and Save the Injector Well description and save the RESOLVE file. This completes the
Injector well and concludes the example.
3.8.6.1 Introduction
1. Example Introduction
An introduction to SAGD systems, the objectives of the modelling and capabilities of the SAGD
Data Object in RESOLVE have been provided in the user guide. It is suggested that the user
goes through this introduction in the user guide before proceeding with the example here to
understand the context behind the SAGD object in RESOLVE.
SAGD injection and production well pairs have been designed for production in a heavy oil
reservoir. Detailed well descriptions are available for the deviation and equipment types along
the well. Additional data available include the fluid PVT description and the reservoir
description.
The previous example described the procedure for creating well descriptions using the Well
Builder Object in RESOLVE, and therefore the steps to create well descriptions are not
discussed here. The objective of this example is to demonstrate how to setup the SAGD Data
Object.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE REVEAL
1 1
Before starting with this example, it will be necessary to make sure that the following RESOLVE Data Object drivers are registered: PxWellObject and PxSAGD.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_6-SAGD
This folder contains a file "SAGD_final.rsl" which contains the RESOLVE file for the completed example. To begin the example, a "SAGD_start.rsl" file is provided: open this file to begin this tutorial. Go to Step 1.
3.8.6.2 Step 1: Add the SAGD data object
From the Resolve Edit System Menu select the Add Data sub menu and select the SAGD
menu command (alternatively select the SAGD Object entry from Data Object toolbar drop
down menu):
Click anywhere on the Resolve canvas to add the SAGD Object and click on OK.
Next select the Resolve | Edit System | Link command or select the link icon:
Click on each of the 4 Well Builder object icons in turn and drag (with mouse depressed) to
establish a connection with the SAGD system object icon as shown.
Double click on the SAGD Data Object icon to enter its properties.
The workflow for the SAGD data object runs top down through the different sections. It is possible to jump to a different section by clicking on the image (e.g. click on 'Reservoir' to go to the reservoir section).
Go to Step 2.
PVT Data
Enter the following PVT Data
GOR (scf/STB) 1
Number of Steps 20
Click the 'Viscosity vs Temperature' tab and enter the following viscosity vs temperature data:
Temperature   Viscosity
50 180000
68 22000
86 5500
104 2100
122 1000
158 350
194 170
230 100
266 65
302 45
338 34
374 27
410 22
446 18
482 16
518 14
554 12
572 11.5
626 10
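A useful hand-check on a table like this is to interpolate log(viscosity) linearly in temperature, a common choice for heavy oil (REVEAL's internal treatment may differ). A sketch assuming the two columns are temperature against viscosity in the units shown on screen:

```python
import math

# Log-linear interpolation of the viscosity table at an intermediate
# temperature. Assumes the columns are temperature vs viscosity as
# entered on screen; the interpolation scheme itself is an illustrative
# choice, not REVEAL's documented method.
temps = [50, 68, 86, 104, 122, 158, 194, 230, 266, 302,
         338, 374, 410, 446, 482, 518, 554, 572, 626]
viscs = [180000, 22000, 5500, 2100, 1000, 350, 170, 100, 65, 45,
         34, 27, 22, 18, 16, 14, 12, 11.5, 10]

def viscosity_at(t):
    """Interpolate log(viscosity) linearly between bracketing points."""
    for i in range(len(temps) - 1):
        if temps[i] <= t <= temps[i + 1]:
            frac = (t - temps[i]) / (temps[i + 1] - temps[i])
            log_mu = (1 - frac) * math.log(viscs[i]) + frac * math.log(viscs[i + 1])
            return math.exp(log_mu)
    raise ValueError("temperature outside table range")

print(round(viscosity_at(140), 1))  # between the 122 and 158 entries
```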
Reservoir data
Select 'Next' or Press the 'Reservoir' button.
Enter the following reservoir data:
Click on the 'RelPerms' tab and enter the following rel perm data:
Krow 0.2 1 3
Krog 0.2 1 3
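The three numbers per curve are consistent with a Corey-type description. Assuming (this is only an interpretation, not confirmed by the manual) that they are residual saturation, endpoint and Corey exponent, the curve shape can be sketched as:

```python
# Corey-type relative permeability sketch. The table gives three values
# per curve; here they are ASSUMED to be residual saturation, endpoint
# and Corey exponent (0.2, 1, 3) -- an interpretation, not confirmed by
# the manual.
def corey_kr(s, s_residual=0.2, endpoint=1.0, exponent=3.0,
             other_residual=0.2):
    """Relative permeability at phase saturation s (all fractions)."""
    s_norm = (s - s_residual) / (1.0 - s_residual - other_residual)
    s_norm = min(max(s_norm, 0.0), 1.0)   # clamp to the mobile range
    return endpoint * s_norm ** exponent

# kr rises from 0 at residual saturation to the endpoint value
print(corey_kr(0.2), corey_kr(0.5), corey_kr(0.8))
```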
Wells data
Click 'Next' or press the 'Wells' Button.
General tab
Enter Surface Temperature = 40 F
Next proceed to map the internal SAGD wells to individual Well Objects. This mapping allows
us to associate the fixed Well List which the SAGD object expects to the available wells that
have been linked to the SAGD object in RESOLVE. The expected Well List consists of four
wells: two wells for the pre-heating phase (producer_preheater and injector_preheater) and two
wells for the production phase (producer and injector). We have already associated four wells to
the SAGD object in the previous step and here we will associate them via the mapping. More
information on this mapping step is available in the user guide.
The individual Well Mappings are established through the General Tab of the SAGD Data
Object Wizard. Select the internal SAGD Data Object Well to be mapped (e.g.
producer_preheat) in the well (left hand) list and then select the target Well Data Object well
description from the well mapping (right hand) list e.g. Producer_preheat.
SAGD object well      Well mapping (Well Object)
producer_preheat      Producer_preheat
Producer              Producer
Injector              injector
0 1000
1500 950
3000 870
4000 800
5000 700
6000 500
6500 300
6750 0
Click on calculate and the SAGD object will compute the estimated minimum chamber rise time
and nominal oil rate.
The pre-heating phase duration is based on an analytical method for the chamber rise and minimum nominal rate computations, taken from Butler, R.M., Horizontal Wells for the Recovery of Oil, Gas and Bitumen (Chapter 11), Petroleum Society Monograph No. 2.
The given wells do not have any associated production history since these are recently drilled wells. We are now ready to export the model to REVEAL. A full REVEAL numerical simulation model can be created directly from the data object; this includes a detailed description of the wells, and a reservoir grid is automatically created to match the well flow path. This gridding and the subsequent calculations ensure tight coupling between the complex well and the reservoir and also capture the near-wellbore effects.
The REVEAL model enables investigation of the near-wellbore characteristics and provides 3D
visualization of steam chamber development, cross flow, heel-toe effects etc. Both 2D and 3D
models can be exported to REVEAL. The 2D model is intended to be used for surface network
analysis: this means that the rates/pressures calculated from the REVEAL model at the
wellhead can be passed to a surface network for further surface network studies. The 3D model
is intended to be used for further detailed modelling in the reservoir which can include aspects
like well design, well control, steam chamber growth, sensitivities on injection/production cycles
etc.
Click on the Calculate and Export command. A warning will appear stating the pre-heated wells
are not completed (this is OK). Choose Yes.
The REVEAL simulation model auto-generated by the SAGD Data object includes:
Schedule based on the estimated chamber rise, nominal rates and the enthalpy of steam at
the reference conditions.
Pre-configured gridding refinement based on the 2D or 3D model selection
Pre-configured SAGD specific plots including Sub-cool and inter-well annulus pressure
PID controller based control script
The producer is controlled to maintain optimum Sub-cool between minimum and
maximum rate constraints
The injector rate is controlled to maintain reservoir pressure and balance inter-well
thermal connectivity
The REVEAL model can then be run to perform these studies. Close the SAGD Data Object
and save the RESOLVE file. This completes the tutorial.
An introduction to ICD Analysis design considerations and the objectives of using the ICD
Analysis Data Object in RESOLVE have been provided in the user guide. It is suggested that the
user goes through this introduction before proceeding with the example here to understand the
context behind the ICD Analysis object in RESOLVE.
The previous examples described the procedure for creating well descriptions using the Well
Builder Object in RESOLVE, and therefore the steps to create well descriptions are not
discussed here. Detailed well descriptions are available which include the deviation survey and
equipment types for the well. Additional data available include the fluid PVT description and the
reservoir description.
The objective of this example is to demonstrate how to use the ICD Analysis data object to
design the well and maximise oil production.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE REVEAL
1 1 or more
The number of REVEAL licenses required depends on the number of concurrent scenarios run.
It is possible to restrict the number of concurrent REVEAL instances/licenses used (by default
this is set to 4): refer to the main user guide for instructions on this.
Before starting with this example, it is necessary to make sure that the following RESOLVE data object drivers are registered: PxWellObject and ICD Analysis.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_7-ICD Analysis
This folder contains a file "ICD Analysis_Final.rsl" which contains the RESOLVE file for the completed example. To begin the example, an "ICD Analysis_Start.rsl" file is provided: open this file to begin this tutorial. Go to Introduction: Objectives.
3.8.7.2 Introduction: Objectives
The case study guide that follows describes how to use the ICD Analysis object in order to
design a well with inflow control devices/valves. The aim is to come up with a design that
maximises our objective of production/NPV by comparing various well configurations. Various
scenarios will be run with different methods to optimise production, and these will be compared
to determine the best method for the given well.
We will look at the following situations in order to come up with the optimal design:
Consider the design benefits of a range of ICD strengths
Investigate the use of ICVs and their dynamic control to maximise production
Look at the impact of running the simulation for a finite time versus running to an abandonment
constraint
Demonstrate the use of human knowledge to take best device configuration from existing
results and improve NPV with better device selection
Consider if our optimum design changes due to reservoir uncertainty
Once the model is built and the scenarios created, the example follows these stages for the design:
Phase 1 Perform comparisons for different well configurations by running the model to an
abandonment constraint of WaterCut = 95%
Phase 2 Can we create additional scenarios by inspection of the results so far to optimise the
well?
Phase 3 Run the same well model with an abandonment constraint of WaterCut = 95%; here, however, we want to understand the impact of different reservoir conditions (reservoir uncertainty)
Go to Step 1
Black oil PVT data has been entered as shown in the screenshot above. Tables have also been
generated that cover the entire range of pressures and temperatures expected for this system.
Connect the Black oil PVT data object to the ICD analysis data object using the icon.
Well description
A detailed well description has been provided in the Well Builder data object. This includes the
reference location, deviation survey and equipment information. The well has been modelled up to the surface; the completed (perforated) section of the well consists of alternating 120-foot sections, each containing one ICD, followed by a packer section (5 feet long).
Lithology information has also been entered in the data object: this information will be used to
generate layers when connecting to the ICD analysis data object.
Connect the well builder object to the ICD analysis data object:
Go to Step 2.
3.8.7.4 Step 2: Create the reservoir description
When the well object is connected to the ICD analysis object, RESOLVE will automatically
generate a reservoir table wherein permeability and porosity profile may be entered by MD or
by grid block length for regions beyond the deviation survey.
Our example model "ICD Analysis_Start.rsl" contains a well builder description which includes
a lithology survey and hence the permeability and porosity profile is automatically imported on
linking the Well Builder and ICD Analysis data objects. The system automatically adds reservoir region layers at the beginning and end, each equal in length to half the completed length of the well. The data in the layer grid table can be edited if necessary.
If we had chosen to use a well builder data object where lithology data has not been entered
then the system will divide the completed section of the well into distinct completed regions
(layers). We would then need to enter the permeability and porosity profile directly through the
ICD Analysis Reservoir Data screen by adding and/or inserting layers as necessary to match
our permeability/porosity log survey.
Note that the layer grid is auto-generated on well builder data object connection ONLY if no
existing reservoir layer grid exists in the ICD analysis object.
Next we enter permeability/porosity values for the region of the reservoir outside of the deviation survey extent:

MD (feet)   Gridblock length (feet)   Zone   Permeability (mD)   Porosity (fraction)   Anisotropy (fraction)
11005       700                       1      80.99655            0.167711              0.1
11010.71    5.7039                    1      67.31756            0.160957              0.1
12000       639                       1      80.99655            0.167711              0.1
Dip (degrees) 0
Reference depth (feet) 9300
Net thickness (feet) 250
Max gridblock length (feet) 100
An anisotropy value of 0.1 is entered for all layers, and the zone is set to 1. Here we are going to use a single zone and aquifer where:
The "View" tab shows us the permeability/porosity profile in the reservoir along with the isolated
regions/ICD locations in the well:
Go to Step 3.
Next choose the "Add Equipment" command, label the new equipment as "EQ-0.4" and enter
the following details:
Type = Equalizer TM, Equalizer TM = EQ Helix : 40 : 6 5/8 : 0.4
Add additional equipment, repeating the process above, so that we have a range of equipment with increasing strength from 0.2 to 3.2, labeled EQ-0.2, EQ-0.4 etc.
Next we add a device to our equipment list to represent a closed device:
Type = ICV, ICV = ICV1, Discharge coefficient = 1, flow area = 0 in2
Finally, add a screen equipment item to our equipment list and label the equipment "Screen".
The equipment list will appear as follows:
Creating scenarios
When the well builder object is linked to the ICD analysis object, a single scenario will be
created based on ICD equipment and position layout found in the well description. We can now
add new scenarios for the cases we wish to run.
For the first scenario, select "ICV Open/Closed" and set the device positions as group 1, 2 etc. This represents a case where we have ICVs in the well, and the ICVs can be controlled independently (fully open/closed). Add another scenario and select the "ICV Gradient" method. This is similar to the first scenario; however, instead of fully open/closed settings, the ICVs can now have fractional openings. Add a third scenario and select the "GA" method. This represents a case with ICDs, and the data object will use a genetic algorithm to find the best configuration. Our new scenarios should be as follows:
Next, we will add a set of scenarios representing fixed ICD locations, with one ICD type for each scenario. For the fourth case, set the ICD Method to "ICD" and each device position to EQ-0.2. Add another scenario with position 1, position 2 etc. set to EQ-0.8. Repeat this procedure to add a scenario representing each of our ICDs at all device positions, so that we have a range of scenarios representing ICDs of increasing strength. We can add another scenario with the Screen equipment at all device positions for comparison.
With the simulation time unset, the simulation will run until the abandonment constraint is reached; this is WaterCut = 1/(1 + alpha). For our case, where alpha = 0.05, the well will be abandoned when the WaterCut reaches a value of ~95%.
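The abandonment water cut follows directly from the formula in the text:

```python
# Abandonment water cut from the economic weighting parameter alpha,
# as given in the text: WaterCut_abandon = 1 / (1 + alpha).
alpha = 0.05
wct_abandon = 1.0 / (1.0 + alpha)
print(round(wct_abandon, 4))  # ~0.95, i.e. the 95% quoted in the text
```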
We will first run some comparison cases with ICDs alone, where the ICDs are the same for
each case. Select scenarios 4 to 8 and choose the "Run Selected" command (select all rows by
clicking the button and dragging the mouse). Note that in doing so, 5 simultaneous instances of
REVEAL will be run (corresponding to the 5 scenarios), and hence 5 REVEAL licenses will be
used. The number of scenarios selected can be reduced according to the licenses available, and
the scenarios run in turn.
When our simulations are complete we can (with scenarios 4 to 8 selected) click on "Show
Results" to compare the results.
The plot compares EQ-0.2, EQ-0.8, EQ-1.6 and EQ-3.2 versus the screen (at all device positions, at early times).
The comparison graphs above can be used to inspect a variety of results, such as the rate/
pressure drop across the ICD, production rates, fluid saturations in the reservoir etc. The
scrollbar at the top of the screen will show these results with time.
Click on the NPV icon at the top of the screen to view the final results. The NPV is calculated
as follows:
From the results above we can see that as the ICD strength increases, the cumulative water-cut
reduces. The cumulative oil production and final NPV go through a maximum with increasing
ICD strength: this suggests that there is an optimum ICD size for maximising NPV. For the
screen, the result is closest to the ICD equipment with lowest strength (here EQ-0.2).
These results are reasonable, since with higher ICD strength, higher WCT/high perm regions
are choked, giving us more oil. If the ICD pressure drop is excessive, then eventually we lose
production due to the high choking.
If we observe the cumulative oil production, we can see that as ICD strength increases, the well
is able to produce for longer and we also get more oil production (scenarios 4, 5 and 6).
However, because this extra oil is obtained later in time, a higher ICD strength may not be
economically favourable. Therefore, design decisions made by comparing only the cumulative
oil production will not be the best economically.
The advantage of working with NPV (i.e. discounted to present terms) is that we account for the
fact that different simulations run for different times in our comparisons. Note that the definition
of NPV here does not cover aspects like capital expenditure, operational costs etc.; however,
using the production results for the different scenarios, a user-defined NPV can be readily
calculated outside of the data object.
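As a minimal sketch of such an outside calculation, the discounted value of a production profile can be computed as follows. The oil price, discount rate and yearly rates below are hypothetical placeholders, not values from the example:

```python
def npv(yearly_oil_stb, oil_price_per_stb, discount_rate):
    """Discount each year's revenue back to present terms and sum.

    Capital and operating costs are omitted, mirroring the simple
    definition used by the data object; a user-defined NPV could
    subtract them per year before discounting.
    """
    return sum(
        rate * oil_price_per_stb / (1.0 + discount_rate) ** year
        for year, rate in enumerate(yearly_oil_stb, start=1)
    )

# Hypothetical 3-year oil profile (STB/year) at 60 $/STB, 10% discount rate
value = npv([1.0e6, 0.8e6, 0.5e6], 60.0, 0.10)
```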
Comparing scenarios 4, 5 and 6 using the graph above, we can see that the NPV curves cross
each other at early times. This shows that with higher ICD strength we reduce water-cut and
increase oil production, but the benefit of this extra oil is only realised in longer-term
simulations. However, if the ICD pressure drop is excessive (scenario 7), then this case becomes
the worst amongst the ICDs. Having screens gives the least NPV, which shows that for this
example the ICDs are beneficial. The plot in the region of points 3-7 above shows a decreasing
NPV with increasing strength, followed by NPV = 0 (this represents the closed device at all
device positions) and the screen (near identical to the lowest-strength equipment EQ-0.2, as
seen before).
The remaining plot shows the GA cases, based on the various device combinations
determined by the GA algorithm.
The plot above shows the Open/Closed vs ICV Gradient versus GA methods at ~2000 days.
We may observe from the simulation results that we have "dead zones" where devices are
closed or open (ICV Open/Closed approach). Where the devices are substituted for a range of
controllable ICV devices (ICV Gradient), the solution also tends towards a fully open or fully
closed state at each position, corresponding to the high peak regions of the permeability profile.
ICV devices at positions 2, 3, 5, 6 and 7 are fully closed in both ICV methods at this time-
step (2000 days).
To investigate the results of the GA method in more detail, we can select the GA method from
the Scenario table and select the "Scenario Info" command. Here we see the objective function
result for all combinations of equipment at each device position used by the GA algorithm. The
equipment layouts are listed by index, where the index number (starting from one) follows the
order in which the equipment were added. For example, layout 1-1-1-1-1-1-1-1 represents
EQ-0.2 at each position.
The best GA realization is represented by the following devices at positions 1-8:
7--6--5--4--5--4--4--7
Screen--Closed--EQ3.2--EQ1.6--EQ3.2--EQ1.6--EQ1.6--Screen
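The index-to-equipment mapping can be sketched as follows, assuming the equipment order used in this example (the five ICDs in order of increasing strength, then the closed ICV, then the screen):

```python
# Equipment in the order added to the equipment list (indices start at 1)
EQUIPMENT = ["EQ-0.2", "EQ-0.4", "EQ-0.8", "EQ-1.6", "EQ-3.2",
             "Closed", "Screen"]

def decode_layout(layout: str) -> list:
    """Translate a GA layout string such as '7--6--5--4--5--4--4--7'
    into the equipment name at each device position."""
    return [EQUIPMENT[int(i) - 1] for i in layout.split("--")]

best = decode_layout("7--6--5--4--5--4--4--7")
# ['Screen', 'Closed', 'EQ-3.2', 'EQ-1.6', 'EQ-3.2', 'EQ-1.6', 'EQ-1.6', 'Screen']
```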
The best GA realization does not necessarily represent the optimum device configuration.
Given that there is an element of "randomness" in genetic algorithm calculations, the best
layout found does not necessarily represent the absolute maximum. Nevertheless, comparing the
GA with the ICV cases shows that the best result here is the ICV_OPENCLOSED scenario.
We can conclude that the theoretical oil return with ICVs, where the device fractional open status
is fully controlled to maximize the objective function, is improved for this example compared to
utilising the same fixed device at each position (ICD/GA methods). Of course, it is necessary to
consider the feasibility and logistics of controlling our well over its entire life and, if this is
possible, whether the increased NPV return is enough to justify the increased investment.
The GA method however does account for the impact of each selected device applied over the
life of the well, and hence gives us a better indication of the best configuration than a design
based on one time-step.
Scenario 9 yields a better NPV than our GA case, but the ICV Open/closed case is still better.
Finally, let us create another scenario starting from Scenario#9, however instead of the ICD we
consider a screen in the middle of the well:
Screen--Closed--Closed--Screen--Closed--Closed--Closed--Screen
Create and run this new scenario as before and compare the results to Scenarios 1, 3 and 9:
This new configuration (Scenario#10) produces a higher NPV than all the ICD/GA cases so far:
this configuration does not use any ICDs but simply has screens and closed devices for zonal
isolation. It is important to note however that if the reservoir petrophysical profile is uncertain,
then such a configuration will not give the best results, and ICDs or ICVs will be required.
Therefore we can see that by comparing the GA cases to the ICV Open/Closed case and using
our understanding of the reservoir properties and device positions we can further optimise the
well design. However the new ICD cases created do not give us a higher NPV than the ICV
Open/Closed case. This is expected as the ICV Open/Closed scenario is reactive (explained
above). Having said this, obtaining dynamic control throughout the life of the well may not be
economically feasible as compared to having a well with just screens here: this study needs to
be done outside of the data object.
It must be stressed that these results are only valid for this particular example case, and cannot
be generalised. Each reservoir-well is different, and to obtain the optimum configuration a
number of scenarios can be run and compared as done here. Furthermore, reservoir uncertainty
also needs to be considered in the analysis, as this can change our "best case" scenario. This
is discussed further in the next section.
The same scenarios that were created before have been run for this new permeability profile
and the results are stored in the data object named "ICD Analysis-2". Creating and running
scenarios has been discussed previously, hence we will be looking at the results directly here.
A brief summary of the results of the main methods are presented below, along with a
comparison to the case with the original permeability profile.
GA (Scenario 3)
The best case GA configuration for the original permeability profile was:
Screen--Closed--EQ3.2--EQ1.6--EQ3.2--EQ1.6--EQ1.6--Screen
With the new permeability profile, the following configuration is obtained:
Screen--EQ-0.4--EQ3.2--Closed--Closed--Closed--EQ1.6--Screen
The difference in the configuration lies partly in the fact that we have a new permeability profile,
and partly in the element of "randomness" present in genetic algorithms. However overall the
configurations are similar, where we have screens at the heel and toe of the well, followed by
high strength ICDs and "Closed" equipment in the middle of the well.
Summary
We can see that the scenario results change when we consider reservoir uncertainty. Given that
our permeability profiles are not very different, overall the trends are similar, with the ICV Open/
Closed being the best scenario. The configurations for the GA and best ICD cases, however, are
different. Reservoir uncertainty, therefore, is another aspect that needs to be considered in the
overall well design.
If the user does not have access to LedaFlow, the example can be completed up to the
generation of the performance curve, which constitutes a good hands-on exercise in the use
of the Case Manager.
3.8.8.1 Overview
1. Example Introduction
The field is an offshore oil field consisting of two reservoirs. There are currently three wells
producing from each reservoir, which all meet at a common manifold (either Manifold A or B).
The production from Manifold A commingles with the production from Manifold B at Manifold B.
The commingled fluid is then transported through a long 12 km pipeline, across undulating
terrain, to a platform.
The GAP model of the field is shown below. The long delivery flowline is not modelled in GAP,
and will be modelled in LedaFlow. The objective of the study is to use a transient flow model to
assess whether or not the flow in the flowline is steady-state, and whether gas lift, which can be
installed at the riser base, is required.
The production system has a pressure vs rate response, and this should be taken into account
when modelling the pipeline in the transient simulator. The Case Manager will be used to build
the overall performance curve for the system, and this will be passed to the transient simulator.
As a result of the transient simulation, it will be possible to assess the operating point of the
system and if the flow is steady-state.
To build the performance curve for the production system, the CaseManager will be used to
generate a number of cases in which the separator pressure is input, GAP is solved, and the
mass flow rate retrieved.
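In outline, the case generation amounts to the following loop. The `solve_network` function here is a hypothetical stand-in for the CaseManager workflow that sets the separator pressure, solves GAP and reads back the mass rate; the pressures and the response shape are illustrative only:

```python
def solve_network(sep_pressure_psig: float) -> float:
    """Hypothetical stand-in for the GAP solve: returns the total mass
    rate (lbm/s) for a given separator pressure. A real run would drive
    GAP through the CaseManager workflow via OpenServer instead."""
    return max(0.0, 150.0 - 0.23 * sep_pressure_psig)  # illustrative PQ response

# Each case inputs a separator pressure and retrieves the mass rate,
# building the performance (PQ) curve point by point
sep_pressures = [100.0, 200.0, 300.0, 400.0, 500.0]
pq_curve = [(p, solve_network(p)) for p in sep_pressures]
```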
One of the advantages of building a RESOLVE model to perform this study is that if changes
are made to the GAP model to reflect new conditions in the field (changing WCT, new wells,
declining PQ curve…), the new PQ curve will be automatically propagated to the transient model
by running the model again.
3. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered. Note that this operation is not required if it has been done previously.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Make sure that the Case Manager Data Object is registered: this is the case if the Case
Manager appears in the list of Data Objects. This is automatically performed by RESOLVE and
can be verified by selecting Drivers | Register Data Object from the main menu. If the
LedaFlow Data Object is not installed use the Register command to browse to the installation
directory of IPM and register LedaFlow.dll.
4. Files Location
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_8-Case_Manager
This folder contains a file ‘Case Manager Example.rsa’ which is an archive file that contains
the RESOLVE and GAP files required to go through the example. The archive file needs to be
extracted either in the current location or a location of the user’s choice.
Double-click on the CaseManager, this will display the CaseManager input window. For a
general description of the CaseManager and the several tabs it consists of, please refer to
section 2.9.4.17 of this user guide.
To define a variable, enter its name, the variable type and a default value, and click ‘Plus’. The
default value will be used during debugging of the workflow, or if the variable is not set when the
cases are defined. Please add the following variables.
Associate the GAP model by selecting GAP, and browsing for ‘Oil Field.gap’.
Click on ‘Open models’: this will open the GAP model, which will be helpful to write and debug
the CaseManager workflow. In the GAP model, make sure that the unit system selected is
Oilfield, and save the file.
Type ‘OpenServer.GAP[0].SOLVENETWORK’, click ‘Load’, select the second entry, then OK.
Fill in the following enumeration: this will perform a solve network with optimisation on the
production system. Exit the ‘Solve GAP’ operation.
In Retrieve Results, we retrieve the separator temperature, total and hydrocarbon mass rates,
and we compute the mass fractions of oil, gas and water as follows.
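The mass-fraction computation reduces to dividing each phase mass rate by the total. A minimal sketch; the rates and variable names below are illustrative, not the workflow's actual identifiers:

```python
def mass_fractions(oil_rate, gas_rate, water_rate):
    """Return (oil, gas, water) mass fractions from phase mass rates
    given in consistent units (e.g. lbm/s)."""
    total = oil_rate + gas_rate + water_rate
    return oil_rate / total, gas_rate / total, water_rate / total

# Illustrative phase mass rates retrieved from a GAP solve
fo, fg, fw = mass_fractions(70.0, 10.0, 20.0)  # fractions sum to 1
```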
This concludes the CaseManager workflow. The user can step through the workflow and watch
the variables to verify that it is performing as intended.
At this point we have created a workflow that instructs the CaseManager how to solve a case in
general: how to input the separator pressure, how to solve the network and how to compute the
results. We will now create a workflow outside of the CaseManager that:
Double click on the LedaFlow icon and associate the dump file flowline.ldm, and click Load.
Then go to the Execution tab and untick ‘Purge results on completion of simulation’. This
ensures that the results of the simulation are kept in the LedaFlow database and can be
accessed from within LedaFlow.
Enter the following separator pressures. These will be used to generate the performance curve
of the production system.
The workflow consists of two sub-flowsheets, shown below. Create a first sub-flowsheet called
‘Get GAP PC’ and double click on it to enter the sub-flowsheet.
First, we get the number of cases to be run from the number of entries in the ‘Results’ DataSet.
Reset all the cases in the CaseManager: select ‘Add global function’, select ‘Workflow and
case management’ and then ‘Remove all cases from this object’.
Create a new case for each ‘i’, and set the input separator pressure. Both operations are under
the ‘Workflow and case management’ category. Each case is named as “P = xx”, where xx is
the separator pressure of the case considered, and the associated workflow is “Workflow1”
which we created in the CaseManager.
The ‘Extract Results’ loop loops over the cases, and it is identical to the ‘Create cases’ loop.
Retrieve the results: the output variables should be retrieved and returned in the appropriate
column in the ‘Results’ DataSet.
If the user does not have access to LedaFlow, the workflow just created can be run to extract the
performance curve of the entire production system as shown below. This would then mark the
end of the example.
Exit this sub-flowsheet, and create a second sub-flowsheet called ‘Run LedaFlow’. The workflow
to be built is the following.
First, create two arrays of doubles, called PQPressure and PQRate. These arrays will contain
the PQ response of the production system, and are going to be passed to LedaFlow as the
‘IPR’ of the well element.
The ‘Get PQ’ loop loops over the cases, and is identical to the previous loops created. Next, we
copy the pressure and mass rate data from the ‘Results’ DataSet to the two arrays that we just
created. The commands shown below reverse the order of the table (pressures are now in
descending order in the arrays).
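The reversal step can be sketched in Python; the pressures and rates below are placeholders for the ‘Results’ DataSet columns, not values from the example:

```python
# Performance-curve points as generated (separator pressure ascending)
pressures = [200.0, 300.0, 400.0, 500.0]   # psig
rates = [120.0, 98.0, 70.0, 35.0]          # lbm/s

# Reverse both columns together, so that pressures end up in
# descending order as expected by the LedaFlow 'IPR' input
pq_pressure = pressures[::-1]
pq_rate = rates[::-1]
```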
Set options in LedaFlow: this sets the phase split calculation method to ‘Mass Fractions’; these will be
set to the values that we have calculated from the CaseManager. The inlet temperature is also
set. LedaFlow does not allow a different mass fraction or temperature for each IPR point, and
hence the average is used.
In this example, we are only passing the mass fractions and the inlet temperature dynamically to
LedaFlow. Note that if desired we could be passing more data dynamically such as the PVT
(which has been pre-set in the LedaFlow model in this example).
Finally, send the command to run the LedaFlow model. Here the simulation is run for 6000s.
The following plot shows the mass flow rate at the delivery point, as well as the pressure at the
flowline inlet.
It can be observed that the system reaches steady-state after around 2500s. The operating
point corresponds to an inlet pressure of 242 psig, and a mass flow rate of 98 lbm/s. To check
the consistency of the results, it can be verified that this point effectively lies on the IPR supplied
to LedaFlow.
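The consistency check can be performed by interpolating the supplied IPR at the observed rate. The PQ points below are hypothetical stand-ins for the generated curve, chosen so that the quoted operating point lies on it:

```python
def interpolate(x, xs, ys):
    """Piecewise-linear interpolation of y at x, with xs ascending."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside IPR range")

# Hypothetical IPR: mass rate (ascending) vs flowline inlet pressure
rates = [35.0, 70.0, 98.0, 120.0]         # lbm/s
pressures = [500.0, 400.0, 242.0, 200.0]  # psig

# The steady-state operating point (98 lbm/s) should sit on the curve
p_at_op = interpolate(98.0, rates, pressures)  # 242.0 psig
```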
3.8.9.1 Overview
1. Example Introduction
This example builds on Example 2.1, in which the reservoir model (REVEAL) was
dynamically linked to the surface network model (GAP) through RESOLVE, making it possible
to perform a production forecast of the integrated model that takes into account both the surface
network and the reservoir models.
The objective of this example is to perform a sensitivity analysis on these reservoir parameters,
and to study their effect on the entire coupled RESOLVE model. The Sensitivity Tool data object
will be used to achieve this objective.
The field in question is an oil field, and it is currently being produced by four wells. The field
also includes four gas injectors. The production network and the injection network are both
dynamically linked to the reservoir model.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 2
GAP: 2
REVEAL: 1
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers for
GAP and REVEAL are registered. Note that this operation is not required if it has been done
previously.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Make sure that the Sensitivity Tool Data Object is registered: this is the case if the Sensitivity
Tool appears in the list of Data Objects. This is automatically performed by RESOLVE and can
be verified by selecting Drivers | Register Data Object from the main menu. If the Sensitivity
Tool Data Object is not installed use the Register command to browse to the installation
directory of IPM and register SensitivityTool.dll.
3. Files Location
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_9-Sensitivity_Tool
This folder contains a file ‘Sensitivity Tool Example.rsa’ which is an archive file that contains
the RESOLVE, REVEAL and GAP files required to go through the example. The archive file
needs to be extracted either in the current location or a location of the user’s choice.
3.8.9.2 Step 1 - Start a new RESOLVE file
Start RESOLVE and go to File| Archive| Extract. Navigate to the above-mentioned folder (see
Files Location above), select the 'Sensitivity Tool Example.rsa' file and extract its content into
a selected location. When the “Open Master File?” question is prompted, select “No”. This step
ensures that the underlying model is extracted into the folder.
Create a new project using File| New or the icon .
Double-click on the IPM-OS icon, select IPM Application | Resolve, and associate GAP-
REVEAL.rsl which was extracted previously. On clicking OK, this will start a second RESOLVE
window and open the specified file: this consists of a reservoir model, integrated with a
production system and a gas injection system.
The Sensitivity Tool contains four tabs. The first two tabs, ‘Physical model’ and ‘Sensitivity
variables’ are used to define the workflow that will be followed by the Sensitivity Tool to perform
the cases and define the input and output variables of the workflow. The tables ‘Dependent’ and
‘Independent’ are used to run the cases and analyse the results.
From REVEAL, we can obtain the OpenServer strings of these variables. For instance, the
string for the pore volume multiplier is: “Reveal.Script.Reservoir.Data[15][0].Value”. The
REVEAL model is contained within the RESOLVE model, under an instance called ‘Reservoir’.
Therefore this value can be accessed by using “Resolve.Module[{Reservoir}].
Reveal.Script.Reservoir.Data[15][0].Value”.
To add this variable, enter the variable name under ‘Description’, the OpenServer string under
‘OpenServer tag’ and click ‘Add’.
Add the following variables with their OpenServer strings. In the field named ‘Values’, enter the
sensitivity values to be tested for each variable.
Define the output variables CumOil, CumGas, CumWater and CumInjGas with the following
OpenServer strings and units. Note that these strings use [_] which is used to obtain the last
value in the table of results (effectively the cumulative variables at the end of the run).
The variables that we have just defined are populated in the CaseManager variables list. The
‘OpenServer variables’ are populated in the ‘inputVariables’ and ‘resultVariables’ DataStore. A
variable would be created for each ‘non-model’ variable (this is not the case here).
In the Workflows tab, we can see the workflow template that we have read in previously. This
workflow is designed to automatically set the input variables which have been defined using a
direct OpenServer string, run the RESOLVE model, and retrieve the output variables defined
with an OpenServer string.
It is possible that a desired input or output variable does not have an associated OpenServer
string, for instance if the input variable is a multiplier. In this case, the workflow will need to be
amended so as to execute any task required to set the input to the underlying models. This is
however not the case here, as all input and output variables have been defined using an
OpenServer string.
Enter the ‘Set inputs’ sub-flowsheet and add in the ‘Write REVEAL script’ block, which is
required when modifying REVEAL variables. Note that the workflow is set up to automatically
set ‘OpenServer’ variables that have been defined.
Go back to the main workflow and enter ‘Retrieve Results’ sub-flowsheet. This contains the
workflow designed to retrieve the variables defined using an OpenServer string. Go back to the
main workflow.
The workflow can be tested if desired to ensure that it is performing as expected. Exit the
CaseManager by clicking OK on the bottom-right corner.
3.8.9.5 Step 4 - Run the model
The model is now ready to be run. In this example, an ‘Independent’ analysis will be run: here the
variables are changed one by one with respect to a reference case. This makes it possible to build
a tornado plot and analyse the impact of each variable on the results, independently from one
another. The total number of cases run in an independent analysis is the sum of the number of
sensitivity values entered for each variable.
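This case count can be illustrated with a quick calculation. The variable names and per-variable value lists below are illustrative (only the pore volume multiplier appears in this example's setup):

```python
# Illustrative sensitivity values entered for each input variable
values_per_variable = {
    "PoreVolumeMultiplier": [0.8, 1.0, 1.2],
    "PermMultiplier": [0.5, 1.0, 2.0],   # hypothetical variable
    "Skin": [0.0, 5.0],                  # hypothetical variable
}

# In an independent analysis each variable is perturbed on its own,
# so the total number of cases is the sum (not the product) of the counts
total_cases = sum(len(v) for v in values_per_variable.values())
print(total_cases)  # 8
```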
PxCluster should be started before running the cases. This can be done from IPM Utilities.
In the displayed window select the big folder button to start the local cluster (not required if a
remote cluster is already setup). Further information on clustering can be found in the Setting up
PXCluster section.
In case a limited number of RESOLVE licenses is available, it may be required to limit the
number of jobs running in parallel. This can be done by selecting the ‘Cluster options’ button.
To launch the cases, click on ‘Run on cluster’. When the run is completed, a tornado plot will be
displayed, which shows the impact of each variable with respect to the base case. The results
can be exported using the ‘Copy to clipboard’ button for further analysis.
Using the Line Plots functionality, outputs can be plotted as a function of inputs.
Production from oil or gas fields is associated with various events that are carried out
throughout the field life, aimed at repairing and maintaining equipment, increasing production
and improving recovery. These events and activities sometimes have an adverse effect on the
instantaneous field production, when parts of the system need to be shut down.
These shortfalls in production should be carefully planned and accounted for. This is particularly
important when making decisions with regards to the future contractual commitments of the
company.
Nowadays, planning of field activities is performed based on models, which can be used to
generate a field production profile using assumptions made by engineers with regards to those
field events. Implementing those assumptions in the model is fairly easy and can be
achieved through the various instruments the software may provide: schedules, DCQ tables,
workflows, macros, etc.
However, the uncertainty in the assumptions may significantly affect results of the forecast and
subsequently the decisions that are made based on those results. This example approaches
the uncertainty of the planned events in a statistical manner. The statistical data used is
obtained from the previous experience of the field production.
2. Field Description
The production system in question consists of 2 reservoirs. The system includes 6 production
wells and 2 main separators (LP and HP). The high pressure separator in the system is
constrained by a maximum gas and liquid rate.
It is possible to route wells to either separator if one of them is shut in for maintenance or broken.
The system layout in GAP is shown in the figure below.
The objective of the project is to generate a production profile and estimate cumulative
production for the field over the next year taking into account various field events.
The events listed above can be easily accounted for in GAP using a schedule for wells,
pipelines and separators, as well as downtime or production deferment.
However, in the past, short-term planning based on the model forecasts showed some
discrepancies from the actual field behaviour over the covered period of time, due to incorrect
assumptions made with regards to planned and unplanned field events; this may lead to the
violation of the company's contractual obligations.
It is therefore required to develop a more holistic approach towards the short term forecasting
and planning using previous company experience and taking into account the uncertainties
inherent to all planned events. This will be done using a statistical approach, where each
uncertain parameter is assigned a probability distribution.
For example, a planned well stimulation job can be set for a particular date, and it is planned to
achieve a certain resulting PI. Based on previous field experience, planned acid jobs can be
brought forward or postponed by up to 2 weeks depending on other events in the field. The
resulting PI of the well in question may also vary, within a range of 1.5 STB/day/psi.
Once the assumptions for all parameters have been defined, multiple realisations of the model
will be run to allow for a statistical analysis of the model results.
5. Model Architecture
To define assumptions and generate a set of input values, a statistical analysis tool such as
Crystal Ball can be used. It is then required to pass the data to the GAP model for calculations
and run it multiple times to obtain distributions of cumulative production and average rates. The
latter can then be analysed and used for decision making.
The integration between the field model and Crystal Ball is performed in RESOLVE by means
of the Crystal Ball data object.
6. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered. Note that this operation is not required if it has been done previously.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Make sure that the Crystal Ball Data Object is registered: this is the case if the Crystal Ball Data
Object appears in the list of Data Objects. This is automatically performed by RESOLVE and
can be verified by selecting Drivers | Register Data Object from the main menu. If the
CrystalBall drivers are not installed use the Register command to browse to the installed
program directory and register Probabilistic.dll.
7. Files Location
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_10-Crystall_Ball.
This folder contains a file ‘CrystalBall GAP Example.rsa’ which is an archive file that contains
the RESOLVE and GAP files required to go through the example. The archive file needs to be
extracted either in the current location or a location of the user’s choice.
Double click on GAP and browse for the GAP model extracted from the RESOLVE archive on
Step 1. Then select OK.
RESOLVE will open GAP and display wells and separator nodes.
Go to Step 3.
3.8.10.4 Step 3 - Add Crystal Ball data object
From the ‘Add DataObject’ menu browse for ‘Probabilistic’ and select ‘Crystal Ball’.
It is required to activate the Crystal Ball utility such that it is always loaded when the Excel
application is launched. This way, RESOLVE will be able to load Crystal Ball in Excel for setup
and calculations. The startup of Crystal Ball is controlled by the Application Manager, which is
available in the Crystal Ball folder under the Windows Start menu:
Double click on the ‘Crystal Ball’ icon to set it up, and go to Step 4.
3.8.10.5 Step 4 - Setup the 'Physical Model' tab
The first tab of the Crystal Ball object requires the user to select a model and choose/build a
workflow.
The RESOLVE model in question has only the GAP module added, therefore this module will be
automatically selected in the ‘Analysis on model’ drop-down list.
As a starting point it is possible to select a workflow from the list of templates. Click ‘Select
model template’ button and choose ‘GAP forecast’ template. This template is designed to set
and retrieve OpenServer variables, and to run the GAP forecast. It can also be edited, and this
will be discussed later on.
Go to Step 5.
3.8.10.6 Step 5 - Setup the Crystal Ball spreadsheet
The ‘Crystal Ball model’ tab allows defining Crystal Ball variables and mapping them to
variables within RESOLVE.
This will open Excel with the standard Crystal Ball template generated by RESOLVE. The
template is divided into 2 tables: one for the assumptions variables (inputs) and one for the
forecasts variables (outputs).
To define an input parameter, enter the variable name in the column named ‘Assumption/input
variable names’. Move the cursor to the column to the right, named ‘Crystal ball assumption cells’,
and select ‘Define Assumption’ in the Crystal Ball tab of the Excel ribbon.
A triangular distribution will be chosen for all the variables, using the parameters shown below.
When a distribution has been defined the assumption cell will be highlighted in green.
For instance, for the well downtime, the distribution is the following.
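Conceptually, each assumption cell behaves like a random draw from its triangular distribution. The sketch below, with hypothetical downtime parameters (the real minimum/likeliest/maximum values are those entered in the ‘Define Assumption’ dialog), shows what such a draw looks like in Python:

```python
import random

# Hypothetical triangular assumption for well downtime, in percent.
# The real minimum/likeliest/maximum are entered in Crystal Ball's
# 'Define Assumption' dialog; these numbers are placeholders.
DOWNTIME_MIN, DOWNTIME_MODE, DOWNTIME_MAX = 2.0, 5.0, 10.0

def sample_downtime(rng=random):
    """Draw one realisation of the downtime assumption.
    Note that random.triangular takes (low, high, mode)."""
    return rng.triangular(DOWNTIME_MIN, DOWNTIME_MAX, DOWNTIME_MODE)

samples = [sample_downtime() for _ in range(1000)]
```

Every draw stays within the [minimum, maximum] range, with values near the likeliest occurring most often.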
Note that to be able to input a distribution of dates, the cell format should be set to 'Date' before
defining the distribution.
The next step is to define the output parameters. Output parameters will be an output of the GAP model, therefore it is only required to initialise the cells with a name and a value of zero, and click ‘Define Forecast’.
Once the cell is defined as a ‘Forecast’ it will be highlighted in blue. The final Crystal Ball table should now look as follows.
The last step in Excel is to define the number of trials, which is done on the ribbon.
The workflow template that was selected previously is designed such that all parameters
mapped to OpenServer tags will be automatically passed to the GAP model. For non-
OpenServer variables some modifications of the workflow will be required, and this will be
performed further below.
Select the ‘Get variables from spreadsheet’ button in the Input variables frame. This will read the assumption variables defined in Excel.
To map the variables select the ‘Variable tag’ button in the table, e.g. ‘Well1A_Downtime’. In the displayed window choose the second radio button, enter the variable name and copy the OpenServer string from GAP. The ‘Get Unit’ function is optional, and allows verifying that RESOLVE is able to access the string.
The above mapping should be done for all variables. For non-OpenServer variables type ’na’
instead of the OpenServer tag. Once the mapping is complete, enter the test values shown
below, which will be used when testing the workflow. The table should now look like the
following.
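The mapping table can be thought of as a dictionary from variable names to OpenServer tag strings, with ’na’ flagging variables that the workflow must handle itself. A minimal sketch, with invented variable names and an illustrative tag string (real tags are copied from the GAP user interface):

```python
# Illustrative variable-to-tag mapping. The tag string below only
# mimics the OpenServer format; real strings are copied from GAP.
# 'na' marks non-OpenServer variables handled by the workflow.
VARIABLE_TAGS = {
    "Well1A_Downtime": "GAP.MOD[{PROD}].WELL[{Well1A}].DownTime",
    "AcidJob_Date": "na",
    "AcidJob_Duration": "na",
    "LPSep_MaintDate": "na",
}

def split_variables(tags):
    """Separate variables set directly via OpenServer from those
    that need explicit handling in the workflow."""
    direct = {k: v for k, v in tags.items() if v != "na"}
    manual = [k for k, v in tags.items() if v == "na"]
    return direct, manual

direct, manual = split_variables(VARIABLE_TAGS)
```

With this split, only the `direct` entries are passed through the template's generic set-variables loop; the `manual` list is what the workflow modifications in Step 7 deal with.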
We will now modify the workflow to account for the non-OpenServer variables. Go to Step 7.
3.8.10.8 Step 7 - Modify the workflow
Some Crystal Ball variables are mapped to non-OpenServer variables, namely
Date, duration and resulting PI of acid job;
Date and duration of LP separator maintenance.
These parameters will still be passed to the GAP model using OpenServer. However, they will not be transferred to the model directly, like the downtime factors, but will be set up in the equipment schedule, as they correspond to date driven events.
The above elements require some changes to the workflow controlling the model. To modify the
workflow select the ‘Debug model workflow with test values’ button.
This will display the underlying Case Manager. It will be seen that the CaseManager has been
populated with the variables previously defined. The OpenServer variables are stored inside the
DataStore ‘inputVariables’, and an additional variable is created for each non OpenServer
variable.
We are going to create one additional variable, a DataStore. When the LP separator is shut
down, the LP wells will be routed towards the HP separator, and the DataStore will be used to
hold the required pipeline routings. To do this, enter the following variable name, set its type as
User -> DataStore and press Add.
Once added select ‘Edit’ in the table and define 2 columns for the DataStore.
Select OK and in the displayed table type in the names of the pipelines that will be rerouted during the LP separator workover, and the corresponding mask flags.
The next step is to modify the workflow. Switch to the ‘Workflows’ tab where the workflow editor
is displayed. The workflow is split into 3 blocks:
Set inputs – a sub-flowsheet that passes data to the GAP model;
Run prediction – an operation that initialises and runs the GAP forecast;
Retrieve results – a sub-flowsheet that reads results from GAP.
Enter the ‘Set inputs’ sub-flowsheet. The original workflow is shown below, and its role is to
automatically set the OpenServer variables into GAP.
This should be modified by adding a few extra blocks, used to input the non OpenServer
variables.
The loop will be used to schedule the pipeline re-routing, and the last two blocks will be used to schedule the LP separator masking and to perform the acid job for ‘WellB3’.
If a pipeline’s ‘MaskFlag’ is 0, then the pipeline schedule is defined in reverse, i.e. the pipeline is unmasked initially and then masked later.
The ‘LP Bypass’ block is defined such that it masks the LP separator and opens the
‘bypassLP’ pipeline. Once maintenance is finished they are returned to their initial conditions.
Finally, the ‘B3 Acid Job’ schedules 3 events for ‘WellB3’ by turning the well off on the acid job
date, and turning it on after a number of days with a different PI.
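The scheduling logic of these two blocks reduces to plain date arithmetic. The sketch below uses hypothetical event labels, dates and PI value; in the actual workflow the events are written into the GAP equipment schedules via OpenServer:

```python
from datetime import date, timedelta

def lp_bypass_events(start, duration_days):
    """Sketch of the 'LP Bypass' logic: mask the LP separator and
    open the bypass pipeline for the maintenance period, then
    restore the initial state. Event labels are illustrative."""
    end = start + timedelta(days=duration_days)
    return [
        (start, "MASK LPSeparator"), (start, "UNMASK bypassLP"),
        (end, "UNMASK LPSeparator"), (end, "MASK bypassLP"),
    ]

def acid_job_events(job_date, duration_days, new_pi):
    """Sketch of the 'B3 Acid Job' logic: shut the well in on the
    job date and bring it back on with the post-job PI."""
    back_on = job_date + timedelta(days=duration_days)
    return [
        (job_date, "MASK WellB3"),
        (back_on, "UNMASK WellB3"),
        (back_on, f"SET WellB3 PI = {new_pi}"),
    ]

# Hypothetical dates, durations and PI, standing in for the sampled values
events = lp_bypass_events(date(2021, 9, 1), 14) + \
         acid_job_events(date(2021, 10, 15), 5, 3.5)
```

Because the sampled dates and durations change on every realisation, computing the event dates in the workflow (rather than hard-coding them in GAP) is what makes the schedule follow the Monte Carlo inputs.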
This workflow can be tested for debugging, and if it has been executed successfully, it should result in the following schedules in the GAP model.
LP Separator:
Well 3B:
Pipe ‘1BtoLP’:
Pipe ‘1BtoHP’:
Pipe ‘bypassLP’:
Once the ‘Set inputs’ sub-flowsheet is modified, move up to a higher level by selecting the button on the tool bar. It is now required to modify the ‘Retrieve results’ sub-flowsheet to retrieve the cumulative production at the end of the run.
Enter the ‘Retrieve results’ sub-flowsheet. The original workflow looks as follows, and is
designed to automatically retrieve OpenServer output variables (there are none in this
example).
The first block retrieves the number of records in the forecast table in GAP.
The resulting value is used as an index to extract the cumulative gas and oil production from the ‘Gas Sales’ and ‘Oil Sales’ separators, and to assign them to the CumGas and CumOil variables respectively.
The last block is used to calculate average rates for the year.
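The logic of the ‘Retrieve results’ sub-flowsheet amounts to indexing the last record of the forecast table and dividing the cumulatives by the forecast length. A sketch with invented numbers standing in for the GAP results:

```python
# Minimal sketch of the 'Retrieve results' logic. In the real
# workflow these values come from the GAP prediction results via
# OpenServer; a small invented forecast table stands in for them.
forecast_cumgas = [10.0, 55.0, 120.0, 210.0]   # illustrative cumulatives
forecast_cumoil = [1.0, 4.5, 9.0, 14.6]

num_records = len(forecast_cumgas)     # first block: record count
last = num_records - 1                 # index of the final record

CumGas = forecast_cumgas[last]         # cumulative gas at end of run
CumOil = forecast_cumoil[last]         # cumulative oil at end of run

# last block: average rates over the one-year forecast
AvgGasRate = CumGas / 365.0
AvgOilRate = CumOil / 365.0
```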
This finishes the modifications of the workflow. For debugging purposes step through the
workflow to verify whether it performs as expected. Please ensure that all input variables are
initialised with values before doing so. This can be verified in the ‘Variables’ tab.
Then click ‘OK’. This will close the CaseManager and return to the Crystal Ball object. Go to Step 8.
PxCluster should be started before running Crystal Ball. This can be done from IPM Utilities:
In the displayed window select the big folder button to start the local cluster. The example may also be run on a distributed cluster if one is available. Further information on clustering can be found in the Setting up PXCluster section.
In case a limited number of RESOLVE licenses is available, it may be required to limit the number of jobs running in parallel. This can be done by selecting the ‘Cluster options’ button.
Once all options are defined, select the ‘Run on cluster’ button, which will populate the cases and submit them to the PxCluster for calculation.
Results will then be displayed in the table as they are calculated. Once the run is finished the Crystal Ball results will also be populated and diagrams will be built. The results can then be analysed using the Crystal Ball tools from the Excel spreadsheet, or reports can be generated from Crystal Ball.
Production from oil or gas fields is associated with various events that are carried out throughout the field life, aimed at repairing and maintaining equipment, increasing production and improving recovery. These events and activities sometimes have an adverse effect on the instantaneous field production when parts of the system need to be shut down.
These dips in production should be carefully planned and accounted for. This is particularly important when making decisions with regard to the future contractual commitments of the company.
Nowadays, planning of field activities is based on models which can be used to generate a field production profile using assumptions made by engineers with regard to those field events. Implementing those assumptions in the model is fairly easy and can be achieved through the various instruments the software may provide – schedules, DCQ tables, workflows, macros, etc.
However, the uncertainty in the assumptions may significantly affect the results of the forecast and subsequently the decisions made based on those results. This example approaches the uncertainty of the planned events in a statistical manner. The statistical data used is obtained from the previous production experience of the field.
2. Field Description
The production system in question consists of 2 reservoirs. The system includes 6 production
wells and 2 main separators (LP and HP). The high pressure separator in the system is
constrained by a maximum gas and liquid rate.
It is possible to route wells to either separator if one of them is shut for maintenance or broken.
The system layout in GAP is shown in the figure below.
The objective of the project is to generate a production profile and estimate cumulative
production for the field over the next year taking into account various field events.
One of the scheduled events is the maintenance of the low pressure (LP) separator in the system. For this period production will be performed through the high pressure separator, subject to its rate constraints.
The events listed above can be easily accounted for in GAP using a schedule for wells, pipelines and separators, as well as downtime or production deferment.
However, in the past the short term planning based on the model forecasts showed some discrepancies from the actual field behaviour over the covered period of time, due to incorrect assumptions made with regard to planned and unplanned field events, which may lead to the violation of the company’s contractual obligations.
It is therefore required to develop a more holistic approach towards the short term forecasting
and planning using previous company experience and taking into account the uncertainties
inherent to all planned events. This will be done using a statistical approach, where each
uncertain parameter is assigned a probability distribution.
For example, the planned well stimulation job can be set for a particular date and it is planned to
have a certain resulting PI. Based on the previous field experience, planned acid jobs can be
shifted earlier or postponed for 2 weeks depending on other events in the field. The resulting PI
of the well in question may also vary in the range of 1.5 STB/day/psi.
Once the assumptions for all parameters have been defined, multiple realisations of the model
will be run to allow for a statistical analysis of the model results.
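The statistical post-processing of those realisations can be sketched as below. The response function is invented purely for illustration; in the example the value of each realisation comes from a full GAP run:

```python
import random

def run_realisation(rng):
    """Stand-in for one model run: sample the uncertain inputs and
    return a cumulative production figure. The response function is
    invented purely to illustrate the statistical post-processing."""
    downtime = rng.triangular(2.0, 10.0, 5.0)     # percent, hypothetical
    job_shift = rng.triangular(-14.0, 14.0, 0.0)  # days, hypothetical
    return 15.0 * (1.0 - downtime / 100.0) - 0.01 * job_shift

rng = random.Random(42)
results = sorted(run_realisation(rng) for _ in range(500))

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already sorted sample."""
    idx = min(len(sorted_vals) - 1, int(p / 100.0 * len(sorted_vals)))
    return sorted_vals[idx]

p10, p50, p90 = (percentile(results, p) for p in (10, 50, 90))
```

The resulting P10/P50/P90 figures are the kind of summary statistics used for decision making once all realisations have completed.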
5. Model Architecture
To define assumptions and generate a set of input values, a statistical analysis tool such as @Risk can be used. It is then required to pass the data to the GAP model for calculations and run it multiple times to obtain distributions of cumulative production and average rates. The latter can then be analysed and used for decision making.
The integration between the field model and @Risk is performed in RESOLVE by means of the
@Risk data object.
6. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered. Note that this operation is not required if it has been done previously.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Make sure that the @Risk Data Object is registered: this is the case if @Risk appears in the list of Data Objects. This is automatically performed by RESOLVE and can be verified by selecting Drivers | Register Data Object from the main menu. If the @Risk driver is not installed, use the Register command to browse to the installed program directory and register Probabilistic.dll.
7. Files Location
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_11-@Risk
This folder contains the file ‘@Risk GAP Example.rsa’, an archive containing the RESOLVE and GAP files required to go through the example. The archive needs to be extracted either in the current location or a location of the user’s choice.
Double click on GAP and browse for the GAP model extracted from the RESOLVE archive in Step 1. Then select OK.
RESOLVE will open GAP and display wells and separator nodes.
Go to Step 3.
3.8.11.4 Step 3 - Add @Risk data object
From the ‘Add DataObject’ menu browse for ‘Probabilistic’ and select ‘@Risk’.
The RESOLVE model in question has only the GAP module added, therefore this module will be automatically selected in the ‘Analysis on model’ drop down list.
As a starting point it is possible to select a workflow from the list of templates. Click the ‘Select model template’ button and choose the ‘GAP forecast’ template. This template is designed to set and retrieve OpenServer variables, and to run the GAP forecast. It can also be edited, and this will be discussed later on.
Go to Step 5.
3.8.11.6 Step 5 - Setup the @Risk spreadsheet
The ‘@Risk model’ tab allows defining @Risk variables and mapping them to variables within
RESOLVE.
This will open Excel with the standard @Risk template generated by RESOLVE. The template is divided into 2 tables: one for the assumption variables (inputs) and one for the forecast variables (outputs).
To define an input parameter, enter the variable name in the column named ‘Assumption/input variable names’. Move the cursor to the column to the right, named ‘@Risk assumption cells’, and select ‘Define Distribution’ in the @Risk tab of the Excel ribbon.
A triangular distribution will be chosen for all the variables, using the parameters shown below.
When the distribution has been entered, the mean is displayed in the cell.
For instance, for the well downtime, the distribution is the following.
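For a triangular distribution, the mean that @Risk displays in the cell is simply the average of the three defining parameters. A one-line sketch (the downtime parameters are hypothetical):

```python
def triangular_mean(low, mode, high):
    """Mean of a triangular distribution: (low + mode + high) / 3."""
    return (low + mode + high) / 3.0

# Hypothetical well-downtime assumption, in percent
cell_value = triangular_mean(2.0, 5.0, 10.0)
```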
In order to enter a distribution of dates, please set the 'Date Formatting' to Enabled as shown below.
The next step is to define the output parameters. Output parameters will be an output of the GAP model, therefore it is only required to initialise the cells and select 'Add Output'.
The last step in Excel is to define the number of iterations (or realisations), which is done on the ribbon.
The workflow template that was selected previously is designed such that all parameters
mapped to OpenServer tags will be automatically passed to the GAP model. For non-
OpenServer variables some modifications of the workflow will be required, and this will be
performed further below.
Select the ‘Get variables from spreadsheet’ button in the Input variables frame. This will read the assumption variables defined in Excel.
To map the variables select the ‘Variable tag’ button in the table, e.g. ‘Well1A_Downtime’. In the displayed window choose the second radio button, enter the variable name and copy the OpenServer string from GAP. The ‘Get Unit’ function is optional, and allows verifying that RESOLVE is able to access the string.
The above mapping should be done for all variables. For non-OpenServer variables type ’na’
instead of the OpenServer tag. Once the mapping is complete, enter the test values shown
below, which will be used when testing the workflow. The table should now look like the
following.
We will now modify the workflow to account for the non-OpenServer variables. Go to Step 7.
3.8.11.8 Step 7 - Modify the workflow
Some @Risk variables are mapped to non-OpenServer variables, namely
Date, duration and resulting PI of acid job;
Date and duration of LP separator maintenance.
These parameters will still be passed to the GAP model using OpenServer. However, they will not be transferred to the model directly, like the downtime factors, but will be set up in the equipment schedule, as they correspond to date driven events.
The above elements require some changes to the workflow controlling the model. To modify the
workflow select the ‘Debug model workflow with test values’ button.
This will display the underlying Case Manager. It will be seen that the CaseManager has been
populated with the variables previously defined. The OpenServer variables are stored inside the
DataStore ‘inputVariables’, and an additional variable is created for each non OpenServer
variable.
We are going to create one additional variable, a DataStore. When the LP separator is shut
down, the LP wells will be routed towards the HP separator, and the DataStore will be used to
hold the required pipeline routings. To do this, enter the following variable name, set its type as
User -> DataStore and press Add.
Once added select ‘Edit’ in the table and define 2 columns for the DataStore.
Select OK and in the displayed table type in the names of the pipelines that will be rerouted during the LP separator workover, and the corresponding mask flags.
The next step is to modify the workflow. Switch to the ‘Workflows’ tab where the workflow editor
is displayed. The workflow is split into 3 blocks:
Set inputs – a sub-flowsheet that passes data to the GAP model;
Run prediction – an operation that initialises and runs the GAP forecast;
Retrieve results – a sub-flowsheet that reads results from GAP.
Enter the ‘Set inputs’ sub-flowsheet. The original workflow is shown below, and its role is to
automatically set the OpenServer variables into GAP.
This should be modified by adding a few extra blocks, used to input the non OpenServer
variables.
The loop will be used to schedule the pipeline re-routing, and the last two blocks will be used to schedule the LP separator masking and to perform the acid job for ‘WellB3’.
If a pipeline’s ‘MaskFlag’ is 0, then the pipeline schedule is defined in reverse, i.e. the pipeline is unmasked initially and then masked later.
The ‘LP Bypass’ block is defined such that it masks the LP separator and opens the
‘bypassLP’ pipeline. Once maintenance is finished they are returned to their initial conditions.
Finally, the ‘B3 Acid Job’ schedules 3 events for ‘WellB3’ by turning the well off on the acid job
date, and turning it on after a number of days with a different PI.
This workflow can be tested for debugging, and if it has been executed successfully, it should result in the following schedules in the GAP model.
LP Separator:
Well 3B:
Pipe ‘1BtoLP’:
Pipe ‘1BtoHP’:
Pipe ‘bypassLP’:
Once the ‘Set inputs’ sub-flowsheet is modified, move up to a higher level by selecting the button on the tool bar. It is now required to modify the ‘Retrieve results’ sub-flowsheet to retrieve the cumulative production at the end of the run.
Enter the ‘Retrieve results’ sub-flowsheet. The original workflow looks as follows, and is
designed to automatically retrieve OpenServer output variables (there are none in this
example).
The first block retrieves the number of records in the forecast table in GAP.
The resulting value is used as an index to extract the cumulative gas and oil production from the ‘Gas Sales’ and ‘Oil Sales’ separators, and to assign them to the CumGas and CumOil variables respectively.
The last block is used to calculate average rates for the year.
This finishes the modifications of the workflow. For debugging purposes step through the
workflow to verify whether it performs as expected. Please ensure that all input variables are
initialised with values before doing so. This can be verified in the ‘Variables’ tab.
Then click ‘OK’. This will close the CaseManager and return to the @Risk object. Go to Step 8.
PxCluster should be started before running @Risk. This can be done from IPM Utilities:
In the displayed window select the big folder button to start the local cluster. The example may also be run on a distributed cluster if one is available. Further information on clustering can be found in the Setting up PXCluster section.
In case a limited number of RESOLVE licenses is available, it may be required to limit the number of jobs running in parallel. This can be done by selecting the ‘Cluster options’ button.
Once all options are defined, select the ‘Run on cluster’ button, which will populate the cases and submit them to the PxCluster for calculation.
Results will then be displayed in the table as they are calculated. Once the run is finished the @Risk results will also be populated and diagrams will be built. The results can then be analysed using the @Risk tools from the Excel spreadsheet, or reports can be generated from @Risk.
Production from oil or gas fields is associated with various events that are carried out throughout the field life, aimed at repairing and maintaining equipment, increasing production and improving recovery. These events and activities sometimes have an adverse effect on the instantaneous field production when parts of the system need to be shut down.
These dips in production should be carefully planned and accounted for. This is particularly important when making decisions with regard to the future contractual commitments of the company.
Nowadays, planning of field activities is based on models which can be used to generate a field production profile using assumptions made by engineers with regard to those field events. Implementing those assumptions in the model is fairly easy and can be achieved through the various instruments the software may provide – schedules, DCQ tables, workflows, macros, etc.
However, the uncertainty in the assumptions may significantly affect the results of the forecast and subsequently the decisions made based on those results. This example approaches the uncertainty of the planned events in a statistical manner. The statistical data used is obtained from the previous production experience of the field.
2. Field Description
The production system in question consists of 2 reservoirs. The system includes 6 production
wells and 2 main separators (LP and HP). The high pressure separator in the system is
constrained by a maximum gas and liquid rate.
It is possible to route wells to either separator if one of them is shut for maintenance or broken.
The system layout in GAP is shown in the figure below.
The objective of the project is to generate a production profile and estimate cumulative
production for the field over the next year taking into account various field events.
The events listed above can be easily accounted for in GAP using a schedule for wells, pipelines and separators, as well as downtime or production deferment.
However, in the past the short term planning based on the model forecasts showed some discrepancies from the actual field behaviour over the covered period of time, due to incorrect assumptions made with regard to planned and unplanned field events, which may lead to the violation of the company’s contractual obligations.
It is therefore required to develop a more holistic approach towards the short term forecasting
and planning using previous company experience and taking into account the uncertainties
inherent to all planned events. This will be done using a statistical approach, where each
uncertain parameter is assigned a probability distribution.
For example, the planned well stimulation job can be set for a particular date and it is planned to
have a certain resulting PI. Based on the previous field experience, planned acid jobs can be
shifted earlier or postponed for 2 weeks depending on other events in the field. The resulting PI
of the well in question may also vary in the range of 1.5 STB/day/psi.
Once the assumptions for all parameters have been defined, multiple realisations of the model
will be run to allow for a statistical analysis of the model results.
5. Model Architecture
To define assumptions and generate a set of input values, a statistical analysis tool such as Sibyl can be used. It is then required to pass the data to the GAP model for calculations and run it multiple times to obtain distributions of cumulative production and average rates. The latter can then be analysed and used for decision making.
The integration between the field model and Sibyl is performed in RESOLVE by means of the
Sibyl data object.
6. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered. Note that this operation is not required if it has been done previously.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Make sure that the Sibyl Data Object is registered: this is the case if Sibyl appears in the list of Data Objects. This is automatically performed by RESOLVE and can be verified by selecting Drivers | Register Data Object from the main menu. If the Sibyl driver is not installed, use the Register command to browse to the installed program directory and register Probabilistic.dll.
7. Files Location
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_12-Sibyl
This folder contains the file ‘Sibyl GAP Example.rsa’, an archive containing the RESOLVE and GAP files required to go through the example. The archive needs to be extracted either in the current location or a location of the user’s choice.
Double click on GAP and browse for the GAP model extracted from the RESOLVE archive in Step 1. Then select OK.
RESOLVE will open GAP and display wells and separator nodes.
Go to Step 3.
3.8.12.4 Step 3 - Add Sibyl data object
From the ‘Add DataObject’ menu browse for ‘Probabilistic’ and select ‘Sibyl’.
The RESOLVE model in question has only the GAP module added, therefore this module will be automatically selected in the ‘Analysis on model’ drop down list.
As a starting point it is possible to select a workflow from the list of templates. Click the ‘Select model template’ button and choose the ‘GAP forecast’ template. This template is designed to set and retrieve OpenServer variables, and to run the GAP forecast. It can also be edited, and this will be discussed later on.
Go to Step 5.
3.8.12.6 Step 5 - Setup the Sibyl model
The ‘Sibyl model’ tab allows defining Sibyl distributions to describe the range of values a
variable can have. A triangular distribution will be chosen for all the variables, using the
parameters shown below.
For instance, for the well downtime, the distribution is the following.
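One standard way for a sampling engine to draw values from a triangular distribution is inverse-transform sampling, i.e. mapping a uniform random number through the inverse CDF. The sketch below, with hypothetical downtime parameters, illustrates the general technique (not Sibyl’s internal implementation):

```python
import math
import random

def triangular_ppf(u, low, mode, high):
    """Inverse CDF of a triangular distribution: maps a uniform
    sample u in [0, 1] to a value in [low, high]."""
    fc = (mode - low) / (high - low)   # CDF value at the mode
    if u < fc:
        return low + math.sqrt(u * (high - low) * (mode - low))
    return high - math.sqrt((1.0 - u) * (high - low) * (high - mode))

rng = random.Random(7)
# Hypothetical well-downtime distribution: min 2, mode 5, max 10 percent
draws = [triangular_ppf(rng.random(), 2.0, 5.0, 10.0) for _ in range(1000)]
```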
The next step is to define the output parameters. Output parameters will be defined in the controlling workflow, therefore here the only requirement is to give the variable a name.
The final Sibyl output variable table should now look as follows.
Go to Step 6.
3.8.12.7 Step 6 - Map the Sibyl variables
Now that the Sibyl distributions are defined it is required to map them to the relevant variables. A distribution can be mapped to an OpenServer tag or a non-OpenServer variable.
The workflow template that was selected previously is designed such that all parameters mapped to OpenServer tags will be automatically passed to the GAP model. For non-OpenServer variables some modifications of the workflow will be required, and this will be performed further below.
To map the variables select the 'Add' button in the table and enter the description of the variable, e.g. ‘Well1A_Downtime’. In the displayed window choose the first radio button and enter the variable’s OpenServer string from GAP. The ‘Get Unit’ function is optional, and allows verifying that RESOLVE is able to access the string (however, for the variables mentioned here there are no units).
After creating the variable, one of the previously defined distributions must be associated with it and a test value defined.
The above mapping should be done for all variables. For non-OpenServer variables type ’na’
instead of the OpenServer tag. Once the mapping is complete, enter the test values shown
below, which will be used when testing the workflow. The table should now look like the
following.
We will now modify the workflow to account for the non-OpenServer variables. Go to Step 7.
3.8.12.8 Step 7 - Modify the workflow
Some Sibyl variables are mapped to non-OpenServer variables, namely
Date, duration and resulting PI of acid job;
Date and duration of LP separator maintenance.
These parameters will still be passed to the GAP model using OpenServer. However, they will not be transferred to the model directly, like the downtime factors, but will be set up in the equipment schedule, as they correspond to date driven events.
The above elements require some changes to the workflow controlling the model. To modify the
workflow select the ‘Debug model workflow with test values’ button.
This will display the underlying Case Manager. It will be seen that the CaseManager has been
populated with the variables previously defined. The OpenServer variables are stored inside the
DataStore ‘inputVariables’, and an additional variable is created for each non OpenServer
variable.
We are going to create one additional variable, a DataStore. When the LP separator is shut
down, the LP wells will be routed towards the HP separator, and the DataStore will be used to
hold the required pipeline routings. To do this, enter the following variable name, set its type as
User -> DataStore and press Add.
Once added select ‘Edit’ in the table and define 2 columns for the DataStore.
Select OK and in the displayed table type in the names of the pipelines that will be rerouted during the LP separator workover, and the corresponding mask flags.
The next step is to modify the workflow. Switch to the ‘Workflows’ tab where the workflow editor
is displayed. The workflow is split into 3 blocks:
Set inputs – a sub-flowsheet that passes data to the GAP model;
Run prediction – an operation that initialises and runs the GAP forecast;
Retrieve results – a sub-flowsheet that reads results from GAP.
Enter the ‘Set inputs’ sub-flowsheet. The original workflow is shown below, and its role is to
automatically set the OpenServer variables into GAP.
This should be modified by adding a few extra blocks, used to input the non OpenServer
variables.
The loop will be used to schedule the pipeline re-routing, and the last two blocks will be used to schedule the LP separator masking and to perform the acid job for ‘WellB3’.
If a pipeline’s ‘MaskFlag’ is 0, then the pipeline schedule is defined in reverse, i.e. the pipeline is unmasked initially and then masked later.
The ‘LP Bypass’ block is defined such that it masks the LP separator and opens the
‘bypassLP’ pipeline. Once maintenance is finished they are returned to their initial conditions.
Finally, the ‘B3 Acid Job’ schedules 3 events for ‘WellB3’ by turning the well off on the acid job
date, and turning it on after a number of days with a different PI.
This workflow can be tested for debugging, and if it has been executed successfully, it should result in the following schedules in the GAP model.
LP Separator:
Well 3B:
Pipe ‘1BtoLP’:
Pipe ‘1BtoHP’:
Pipe ‘bypassLP’:
Once the ‘Set inputs’ sub-flowsheet is modified, move up to a higher level by selecting the
button on the tool bar. It is now required to modify the ‘Retrieve results’ sub-flowsheet, to retrieve cumulative
production at the end of the run.
Enter the ‘Retrieve results’ sub-flowsheet. The original workflow looks as follows, and is
designed to automatically retrieve OpenServer output variables (there are none in this
example).
The first block retrieves the number of records in the forecast table in GAP.
The resulting value is used as an index to extract the cumulative gas and oil production from the ‘Gas
Sales’ and ‘Oil Sales’ separators, and assign them to the CumGas and CumOil variables
respectively.
The last block is used to calculate average rates for the year.
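The average-rate calculation in that last block amounts to dividing the retrieved cumulatives by the forecast length. The sketch below assumes a one-year forecast; the cumulative values are hypothetical, for illustration only (the real ones come from the ‘Oil Sales’ and ‘Gas Sales’ separators via OpenServer).

```python
# Hypothetical end-of-run cumulatives (illustrative values only).
CumOil = 3.65e6   # STB
CumGas = 1.825e9  # scf

DAYS = 365.0      # assumed one-year forecast length
AvgOilRate = CumOil / DAYS  # STB/day
AvgGasRate = CumGas / DAYS  # scf/day
```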
This finishes the modifications of the workflow. For debugging purposes step through the
workflow to verify whether it performs as expected. Please ensure that all input variables are
initialised with values before doing so. This can be verified in the ‘Variables’ tab.
Then click ‘OK’. This will close the Case Manager and return to Sibyl. Go to Step 8.
PxCluster should be started before running Sibyl. This can be done from IPM Utilities:
In the displayed window, select the big folder button to start the local cluster. The example may
also be run on a distributed cluster if one is available. Further information on clustering can be
found in the Setting up PxCluster section.
If only a limited number of RESOLVE licenses is available, it may be necessary to limit the
number of jobs running in parallel. This can be done by selecting the ‘Cluster options’ button.
Once all options are defined, select the ‘Run on cluster’ button, which will populate the cases
and submit them to the PxCluster for calculation.
Results will then be displayed in the table as they are calculated. Once the run is finished,
the Sibyl results will also be populated and diagrams will be built. The results can then be
analysed and visualised from Sibyl.
The well is a horizontal well with transverse vertical fractures and a REVEAL model of the well is
available. The REVEAL model is controlled using the historical rates. The objective of this
example is to use the Particle Swarm Data Object to history match the bottomhole pressure
and the fractional flow of the well.
The Particle Swarm Data Object contains a stochastic optimiser which is used to maximise or
minimise an objective function by manipulating a set of user-defined optimisation variables. For
history matching, the objective function is an error function which is to be minimised. Further
details on the working principles of the particle swarm can be found in the Particle Swarm Data
Object section of this manual.
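A minimal sketch of the particle-swarm idea is shown below. This is not Petex's implementation: the update rule, coefficients and stopping criterion are the textbook form, shown only to illustrate how particles explore the variable ranges while being pulled towards their own best point and the swarm's best point.

```python
import random

# Basic particle swarm (textbook form, illustrative only): minimise
# `objective` over the box `bounds`, a list of (min, max) pairs.
def particle_swarm_minimise(objective, bounds, n_particles=20, n_iter=100,
                            inertia=0.7, c1=1.5, c2=1.5, seed=42):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best point
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda k: pbest_val[k])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best point
    for _ in range(n_iter):
        for k in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[k][d] = (inertia * vel[k][d]
                             + c1 * r1 * (pbest[k][d] - pos[k][d])
                             + c2 * r2 * (gbest[d] - pos[k][d]))
                lo, hi = bounds[d]
                pos[k][d] = min(max(pos[k][d] + vel[k][d], lo), hi)
            val = objective(pos[k])
            if val < pbest_val[k]:
                pbest_val[k], pbest[k] = val, pos[k][:]
                if val < gbest_val:
                    gbest_val, gbest = val, pos[k][:]
    return gbest, gbest_val
```

For history matching, `objective` would be the error function evaluated after a model run, and `bounds` the ranges of the optimisation variables.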
In this example, the variables that are used for history matching are:
Permeability multiplier
Porosity multiplier
Oil/Gas relative permeability end point
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 1
REVEAL: 2 or more
Before starting with this example, it is necessary to make sure that the PSwarm Data Object of
RESOLVE is registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_13-Particle_Swarm
This folder contains a file "Particle Swarm Example.rsa", which is an archive file that contains
the REVEAL file required to go through the example. The archive file needs to be extracted
either in the current location or a location of the user’s choice. Go to Step 1.
3.8.13.2 Step 1 - Start a new RESOLVE file
Start RESOLVE and go to File| Archive| Extract. Navigate to the above-mentioned folder (see
Files Locations above), select the .rsa file and extract its content into a selected location. When
the “Open Master File?” question is prompted, select “No”. This step ensures that the underlying
model is extracted into the folder.
Double-click on the REVEAL icon, and in the 'File name' field browse for Tight Reservoir.rvl
which was extracted in Step 1. Click 'Start'.
RESOLVE will then launch REVEAL and open the file. The well will be displayed with a green
icon.
Go to Step 3.
3.8.13.4 Step 3 - Add the Particle Swarm object
From the 'Add Data Object' icon, browse for 'Optimisation' and add a 'PSwarm' object to the
model.
Optimisation variables: this tab is used to define the optimisation variables and their range
(minimum and maximum value of the variable)
Run & Results: this tab is used to launch the calculations and analyse the results.
The 'Reveal' model is automatically selected in the 'Analysis on model' screen, as it is the only
instance currently in the RESOLVE model.
A number of template workflows are provided. Click 'Select model template' and load the
'REVEAL Run' template. This workflow contains the basic commands to write to the script of
REVEAL and run the model. The workflow will then need to be edited in order to input the
optimisation variables and calculate the error function after the REVEAL run.
Go to Step 5.
3.8.13.6 Step 5 - Define the optimisation variables
Enter the 'Optimisation variables' tab. This is where we define the variables that the object can
control in order to maximise or minimise the objective function. The range over which the
variables can vary is also specified here.
In this example, the variables that are used for history matching are:
Permeability multiplier
Porosity multiplier
Oil/Gas relative permeability end point
Gas relative permeability end point
Multiplier on the transverse dimension of the grid, i.e. the well drainage area
To add an optimisation control variable, click on the 'Variable tag' buttons. A variable can either
be chosen from a list of variables (if it exists) or added using a description and corresponding
OpenServer string.
If the variable to be added has a known OpenServer string, this should be entered here. In this
case, a permeability multiplier is to be added: this does not have an OpenServer string, and 'na'
should be specified. Select 'OK'.
Click on 'Debug model workflow with test values' to initialise the optimisation variables and edit
the workflow. Go to Step 6.
3.8.13.7 Step 6 - Edit the workflow
On clicking 'Debug model workflow with test values', the underlying Case Manager is displayed,
allowing the workflow to be edited and tested. In the 'Variables' tab, the optimisation variables
previously defined have been added and initialised with a value which is the average of the
minimum and maximum entered.
Add a RevealResults and a History variable of type DataSet as shown below. These
variables will be used to hold the results and the history data in the workflow.
For both DataSets, click on 'Edit' and set up the following columns.
Enter the 'Workflows' tab. The template previously loaded is shown; this now needs to be
edited.
Delete all the elements apart from the 'write script' operation, and add the following
operation.
Within the 'Set Inputs' operation, add the following 9 commands. Commands can be added
using the 'Add global function' button. All the operations are 'Generic OpenServer functions'.
Note: on clicking OK, the variables Perm and Poro should be added as 'double precision'.
Count Results
Start by counting the number of rows in the results table of REVEAL, for the history and the
simulation results.
On clicking OK, the variables nRevRes and nHisRes should be created as integers.
Loop Results
Loops through the simulation results. On clicking OK, the variable i should be created as an
integer.
GetRevealResults
Get the REVEAL results of the current row.
Loop History
Loops through the production history, which is stored in the REVEAL model. On clicking OK, the
variable j should be created as an integer.
GetHistoryResults
Get the production history results of the current row.
All seven commands are 'Generic OpenServer function' operations of type 'Get an OpenServer variable' on the 'Reveal' server:
1. "Reveal.WellRes[{History}][{Well1}]["+j+"].Date" -> History.Column[0].Value[i]
2. "Reveal.WellRes[{History}][{Well1}]["+j+"].OilProduced" -> History.Column[1].Value[i]
3. "Reveal.WellRes[{History}][{Well1}]["+j+"].WaterProduced" -> History.Column[2].Value[i]
4. "Reveal.WellRes[{History}][{Well1}]["+j+"].GasProduced" -> History.Column[3].Value[i]
5. "Reveal.WellRes[{History}][{Well1}]["+j+"].WaterCut" -> History.Column[4].Value[i]
6. "Reveal.WellRes[{History}][{Well1}]["+j+"].GOR" -> History.Column[5].Value[i]
7. "Reveal.WellRes[{History}][{Well1}]["+j+"].BottomHolePressure" -> History.Column[6].Value[i]
Set Variables:
Define a number of variables:
nPoints: counts the number of terms which make up the error function, to be used when
calculating the chi-squared error. Should be defined as an integer.
GORweighting: the error function will include differences in pressures (in psig) and GOR (in
scf/STB). This is the weighting applied to the GOR. Should be defined as double precision.
Error: Resets the value of the error function to 0
LoopHistory2
The purpose of this loop is to go through the simulation and production history and to calculate
the Error by successively adding the differences between the history and the simulation. This
loop starts from 1 because the simulation results in REVEAL do not have a point corresponding
to the first history record.
Add Error
If the well is flowing (No branch), add the following contribution to the error. BHP1, BHP2, GOR1
and GOR2 should be defined as double precision. The error is calculated by adding the square
of the difference between the simulation and history, including the GOR weighting.
If the well is shut in (Yes branch), add only the BHP contribution to the error.
Calc Error
Calculate the final error by dividing by the number of terms, and taking the square root.
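The whole error calculation can be sketched as below. The data layout (lists of (BHP, GOR, flowing) tuples for simulation and history) is an assumption for illustration; in the workflow these values arrive row by row from REVEAL via OpenServer.

```python
from math import sqrt

# Hedged sketch of the error function described above. The (bhp, gor, flowing)
# tuple layout is an illustrative assumption, not the workflow's actual storage.
def history_match_error(sim, hist, gor_weighting=0.1):
    total, n_points = 0.0, 0
    for (bhp1, gor1, flowing), (bhp2, gor2, _) in zip(sim, hist):
        total += (bhp1 - bhp2) ** 2            # BHP term is always included
        n_points += 1
        if flowing:                            # GOR term only while flowing
            total += (gor_weighting * (gor1 - gor2)) ** 2
            n_points += 1
    # Divide by the number of terms and take the square root.
    return sqrt(total / n_points)
```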
The workflow is now ready to be tested. Click on the icon to go back to the main workflow,
and go to Step 7.
3.8.13.8 Step 7 - Test the workflow
Test the workflow using the run icon or the step-through icon. While stepping through, verify that the workflow performs as expected.
In the displayed window, select the big folder button to start the local cluster. The example may
also be run on a distributed cluster if one is available. Further information on clustering can be
found in the Setting up PxCluster section.
If only a limited number of RESOLVE licenses is available, it may be necessary to limit the
number of jobs running in parallel. This can be done by selecting the ‘Cluster options’ button.
Under 'Options', increase the initial population to 30. Note that this is case dependent: it is
increased in this example to reflect the complexity of the problem and the number of
optimisation variables.
Run the model by clicking on 'Run on cluster'; this may take several hours depending on the
computing power available and the number of models that can be run simultaneously. The
following is obtained. It can be observed that the simulations have stopped after around 190
iterations because the search step size has dropped below the convergence tolerance.
The model can be run with the optimum parameters by going into the 'Optimisation variables'
tab and clicking on 'Debug model workflow with test values', in the same way as Step 7.
The values of the optimisation variables should be assigned individually by clicking on the 'Edit'
button in the 'Variables' tab.
Run the model by running the workflow from the 'Workflows' tab. The results can be obtained
and viewed from the REVEAL model. The following match is obtained for the BHP as well as
the GOR.
BHP:
GOR:
An oil field with an initial gas cap and strong aquifer support has been on production for a few
years and production data is available. Currently the reservoir model does not reproduce the
fractional flow history (WCTs and GORs). In order to accurately forecast the production, it is
required to obtain a representative reservoir model of the field, via History Matching.
The reservoir model contains five layers. Layers 1, 3 and 5 contain fluvial channels with
permeabilities around 1 Darcy, encased in clay and mudstone with permeabilities in the mD range.
There are six production wells, located around the initial gas cap.
The objective of the example is to determine the permeability in the channels that would be
required in order to match the fractional flow history of the wells. This will be achieved by
applying permeability multipliers to layers 1, 3 and 5 of the model. The model has been defined
with 5 Fluid In Place regions (one for each layer), and therefore the multipliers will be applied to
FIP regions 1, 3 and 5.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 1
REVEAL: 2 or more
Before starting with this example, it is necessary to make sure that the History Matching Data
Object of RESOLVE is registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_14a-History_Matching
This folder contains a file "History Matching.rsa", which is an archive file that contains the
REVEAL file required to go through the example. The archive file needs to be extracted either in
the current location or a location of the user’s choice. Go to Step 1.
3.8.14.2 Step 1 - Start a new RESOLVE file
Start RESOLVE and go to File| Archive| Extract. Navigate to the above-mentioned folder (see
Files Locations above), select the .rsa file and extract its content into a selected location. When
the “Open Master File?” question is prompted, select “No”. This step ensures that underlying
model is extracted into the folder.
Create a new project using File| New or the icon .
Double-click on the REVEAL icon, and in the 'File name' field browse for Tight Reservoir.rvl
which was extracted in Step 1. Click 'Start'.
RESOLVE will then launch REVEAL and open the file. The well will be displayed with a green
icon.
Go to Step 3.
3.8.14.4 Step 3 - Add the History Matching Tool
From the 'Add Data Object' icon, select the History Matching Tool and add it to the RESOLVE
model.
Go to the 'History data' tab and select 'Import History'. In this example, the production history will be
imported from a text file. Note that this text file can be generated from an Excel table. More
details and examples of supported formats can be found in this section.
Click OK and browse for 'History.txt' which should have been extracted along with the REVEAL
model. The production data should now have been imported into the model. The next step is to
perform a mapping in order to indicate to the tool the correspondence between the imported
columns of data and the actual model variables, as well as the units of the imported data.
Select 'Map columns to model', and then 'Auto-map'. This attempts to perform the mapping
automatically, for example when equipment names are identical between the REVEAL model
and the imported data, or if it recognises the history variable or the unit. It is always possible to
perform this mapping manually: this will also be aided by the auto-complete functionality.
On clicking 'Auto-map', the following window appears. This indicates that all columns have been
mapped apart from the Status columns. This is not an issue, and the Status of the wells will be
interpreted in the next step.
The columns of the imported data are now mapped with the wells and variables of the model.
Select 'Interpret well statuses'. The objective of this step is to map the status of the well in the
imported data to the status of the well in the model. The values in the columns are listed and
should be mapped to the corresponding well status in the REVEAL model. Select 'Producing'
and then 'Shut (no crossflow)', and click OK.
The 'Plot history' button may be used to quality check the data import.
Go to the 'Variable weights' tab. This tab is used to define weightings that are used in the
calculation of the error function between history and simulated data. These weightings can be
used to reflect the relative confidence in the data, or to define priorities for the different variables
to be matched. In this example, these will be left as default.
The report variables table is used to define additional variables that are to be extracted by the
tool from the REVEAL model for plotting and analysis. Note that the tool automatically extracts
the calculated variables corresponding to the production history. In this example, no additional
reported variables will be added. Go to Step 6.
Click on 'Set method'. This window is used to define the method used for combining the
differences between historical and simulated values: leave this to the default using the RMS
error. Enter the following minimum normalisation values. As shown below, the difference
between simulated and historical values (for well i, variable j and time n) is normalised with
respect to the average. The minimum normalisation is used to ensure that if small rates are
calculated, this does not affect the error function disproportionately (for example the water rate if
water breakthrough has not happened yet). The value entered should be of the same order of
magnitude as the individual well's production. More details on the calculation of the error
function can be found in this section. Select 'OK'.
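The role of the minimum normalisation can be sketched as below. The exact form of the manual's normalisation is not reproduced here; the floor-at-`min_norm` behaviour is the point being illustrated.

```python
# Sketch of the normalisation idea described above (the precise formula used
# by the tool may differ): the simulated-minus-historical difference is
# divided by the average of the two values, floored at `min_norm` so that
# small rates (e.g. water before breakthrough) cannot dominate the error.
def normalised_difference(sim, hist, min_norm):
    denom = max(abs(sim + hist) / 2.0, min_norm)
    return (sim - hist) / denom
```

With a floor of 100, a tiny pre-breakthrough water rate of 0.5 against a history of 0 contributes only 0.005 instead of an unbounded value.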
The next step is to define the model inputs that the tool will adjust in order to improve the history
match or to run sensitivities: to do this select 'Add variable'. As explained in the example
introduction, the XY permeability of FIP regions 1, 3 and 5 will be used as control variables.
Select 'FIPRegion', then 'XY_permeability' and 'Region 1'.
Repeat this step for Regions 3 and 5, and enter the following bounds on the multipliers. This
defines the minimum and maximum multipliers used during the optimisation.
Click 'Start Run' and, in the window which appears on the screen, click 'Run'.
If the model is to be run on a cluster, then PxCluster has to be launched prior to running the
history matching tool.
The well is a horizontal well with transverse vertical fractures and a REVEAL model of the well is
available. The REVEAL model is controlled using the historical rates. The objective of this
example is to use the History Matching Data Object to history match the bottomhole pressure
and the fractional flow of the well.
The History Matching Data Object can use different stochastic optimisers to maximise or
minimise an objective function by manipulating a set of user-defined optimisation variables. For
history matching, the objective function is an error function which is to be minimised. Further
details on the working principles of these can be found in their respective manual sections.
In this example, the variables that are used for history matching are:
Permeability multiplier
Porosity multiplier
Oil/Gas relative permeability end point
Gas relative permeability end point
Size of the transverse dimension of the grid, i.e. the well drainage area
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 1
REVEAL: 2 or more
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_14b-History_Match
This folder contains a file "History Matching Example.rsa", which is an archive file that contains
the REVEAL file required to go through the example. The archive file needs to be extracted
either in the current location or a location of the user’s choice. Go to Step 1.
3.8.15.2 Step 1 - Start a new RESOLVE file
Start RESOLVE and go to File| Archive| Extract. Navigate to the above-mentioned folder (see
Files Locations above), select the .rsa file and extract its content into a selected location. When
the “Open Master File?” question is prompted, select “No”. This step ensures that underlying
model is extracted into the folder.
Create a new project using File| New or the icon .
Double-click on the REVEAL icon, and in the 'File name' field browse for Tight Reservoir.rvl
which was extracted in Step 1. Click 'Start'.
RESOLVE will then launch REVEAL and open the file. The well will be displayed with a green
icon.
Go to Step 3.
3.8.15.4 Step 3 - Add the History Matching Tool
From the 'Add Data Object' icon, select the History Matching Tool and add it to the RESOLVE
model.
Go to the 'History data' tab and select 'Import History'. In this example, the production history will be
imported from the REVEAL file itself. More details and examples of supported formats can be
found in this section.
Click OK and the production data should now have been imported into the model. The next step
is to perform a mapping in order to indicate to the tool the correspondence between the
imported columns of data and the actual model variables, as well as the units of the imported
data.
Select 'Map columns to model', and then 'Auto-map'. This attempts to perform the mapping
automatically, for example when equipment names are identical between the REVEAL model
and the imported data, or if it recognises the history variable or the unit. It is always possible to
perform this mapping manually: this will also be aided by the auto-complete functionality.
The columns of the imported data are now mapped with the wells and variables of the model.
Select 'Interpret well statuses'. The objective of this step is to map the status of the well in the
imported data to the status of the well in the model. The values in the columns are listed and
should be mapped to the corresponding well status in the REVEAL model. Select 'Producing'
and then 'Shut (no crossflow)', and click OK.
The 'Plot history' button may be used to quality check the data import.
Go to the 'Variable weights' tab. This tab is used to define weightings that are used in the
calculation of the error function between history and simulated data. These weightings can be
used to reflect the relative confidence in the data, or to define priorities for the different variables
to be matched. In this example, these will be left as default.
The report variables table is used to define additional variables that are to be extracted by the
tool from the REVEAL model for plotting and analysis. Note that the tool automatically extracts
the calculated variables corresponding to the production history. In this example, no additional
reported variables will be added. Go to Step 6.
3.8.15.7 Step 6 - Create a run
Go to the 'Run history match' tab. This tab is used to create runs and analyse results. Click on
the 'Create new run' button, and select the 'PSwarm' engine: this will use a stochastic optimiser
using the Particle Swarm methodology to minimise the error function and reduce the error
between historical and calculated data.
Click on 'Set method'. This window is used to define the method used for combining the
differences between historical and simulated values: leave this to the default using the RMS
error. Enter the following minimum normalisation values. As shown below, the difference
between simulated and historical values (for well i, variable j and time n) is normalised with
respect to the average. The minimum normalisation is used to ensure that if small rates are
calculated, this does not affect the error function disproportionately (for example the water rate if
water breakthrough has not happened yet). The value entered should be of the same order of
magnitude as the individual well's production. More details on the calculation of the error
function can be found in this section. Select 'OK'.
The next step is to define the model inputs that the tool will adjust in order to improve the history
match or to run sensitivities: to do this select 'Add variable'. As explained in the example
introduction, the XY permeability, porosity and REVEAL tokens representing end-point gas and
oil relative permeabilities and grid spacing in the Y-direction will be used.
Previous runs can then be saved as 'Matches', with the best error reported and the final values
of the sensitivity variables.
A well with two tubing strings has both strings gas-lifted from a common gas injection source
through a shared casing. The produced fluid properties of the layers each string is completed in
are known, in addition to the injection point and size on each string. The total gas lift available
is known and metered, but the casing head pressure required to inject this, the split of injected
gas and the subsequent well performance are not.
This example will use the Dual String Gas Lift data object to determine this, based on
PROSPER models constructed for each string.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 1
PROSPER: 2
Before starting with this example, it is necessary to make sure that the Dual String Gas Lift
Data Object of RESOLVE is registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_15-Dual_String_Gas_Lift
This folder contains a file "DualStringGasLift.rsa", which is an archive file that contains the
RESOLVE file required to go through the example. The archive file needs to be extracted either
in the current location or a location of the user’s choice. Go to Step 1.
Double-click on the IPM-OS icon labelled 'String1', and in the 'File name' field browse for
String1.out which was extracted in Step 1, then do the same for String 2.
Go to Step 3.
3.8.16.4 Step 3 - Add the Dual String Gas Lift object
From the 'Add Data Object' icon, select Dual String Gas Lift and add it to the RESOLVE model.
After doing this, link each PROSPER IPM-OS object to the Dual String Gas Lift object.
Then enter the measured known data into the object, which is detailed below:
Go to Step 5.
3.8.16.6 Step 5 - Set the method and calculate
In order to balance the two strings and the casing-side injection rate, we must begin with the
measurements we know. In this case, we know that 5.4 MMscf/d of gas lift gas is available. This
will be set as the fixed input, as shown below. Then press 'Calculate'.
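Conceptually, fixing the total lift gas and solving for the casing head pressure is a one-dimensional balance, which can be sketched as below. This is not the Petex implementation: the valve-response function and its coefficients are invented for illustration, and real injection curves would come from the PROSPER string models.

```python
# Conceptual sketch only: with the total lift gas fixed, find the casing head
# pressure (CHP) at which the two strings together inject exactly that much.
# Assumes injection increases monotonically with CHP; the linear valve
# response and the coefficients below are invented for illustration.
def injected_gas(chp, valve_coeff):
    return max(0.0, valve_coeff * (chp - 1000.0))  # MMscf/d above 1000 psig

def balance_chp(total_gas, coeffs, lo=1000.0, hi=3000.0, tol=1e-6):
    mid = 0.5 * (lo + hi)
    for _ in range(100):  # bisection on CHP
        mid = 0.5 * (lo + hi)
        q = sum(injected_gas(mid, c) for c in coeffs)
        if abs(q - total_gas) < tol:
            break
        if q < total_gas:
            lo = mid
        else:
            hi = mid
    return mid, [injected_gas(mid, c) for c in coeffs]

# 5.4 MMscf/d total lift gas split between two hypothetical strings.
chp, split = balance_chp(5.4, [0.004, 0.002])
```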
Go to Step 6.
3.8.16.7 Step 6 - Review the results
There are engineering problems which require finding the frequencies in a stationary signal with
noise. To find these frequencies, the signal can be converted from the time domain into the
frequency domain by applying a Fast Fourier Transformation (FFT). The FFT decomposes the
signal into a series of sinusoidal functions of different frequencies by direct Fourier
Transformation, as shown below. In this manner, it reduces the noise and extracts the
frequencies of the original signal.
In this example we have built a workflow by first generating a periodic signal composed of
multiple sinusoidal functions and random noise. Then, by applying a Fast Fourier
Transformation (FFT) data object, we decompose the signal from the time domain to the
frequency domain to find the main frequencies in the signal. The obtained frequencies are then
compared with the frequencies used in the original signal, which demonstrates the method.
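The idea can be sketched with NumPy before building it as a RESOLVE workflow. The drift frequency, amplitudes and noise level below are assumptions chosen to mirror the signal constructed later in this example (sine waves at 0.1 and 0.333 cycles per sample, a baseline drift and random noise).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8192
i = np.arange(n)

# Assumed test signal: two sine waves, a slow baseline drift and random noise.
signal = (np.sin(2 * np.pi * 0.1 * i)
          + 0.5 * np.sin(2 * np.pi * 0.333 * i)
          + 0.2 * np.sin(2 * np.pi * 0.001 * i)   # baseline drift (assumed)
          + rng.normal(0.0, 0.3, n))              # random noise (assumed sigma)

# Amplitude spectrum over the positive frequencies (in cycles per sample).
spectrum = np.abs(np.fft.rfft(signal)) / n
freqs = np.fft.rfftfreq(n, d=1.0)

# The dominant peak (skipping the near-zero drift bins) recovers f = 0.1.
skip = 20
peak = freqs[skip + np.argmax(spectrum[skip:])]
```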
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE
1
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_16-Spectral_Analysis
This folder contains a file "SpectralAnalysisExample.rsl", which is the RESOLVE file required to
go through the example.
3.8.17.2 Step 1- Start a new RESOLVE file
Start RESOLVE, and create a new project using File| New or the icon .
Place a workflow element on the canvas. Right-click on the element and change the name of the
workflow to "SetSource".
Place a DataSet on the canvas and name it "Source". The values of the signal will be collected
in the DataSet.
Access the "Source" data set and create a column named Input (units are not required).
Before the data set is populated with new data, it has to be cleared of all the previous data,
excluding the headings. This should be done with the use of "Math library functions" in the workflow.
Access the "SetSource" workflow element and place an Operation element on the canvas.
Name it "Clear data" and connect it to the Start block.
Access the Clear data element. From the Math library functions category, add a global function
"Clear the data (leave the column headings) and the columns from the dataset" available under
DataSet. In the value column of the function select Source.
To generate random fluctuations (noise) in the signal, use a Probabilistic modeling function.
This first requires creating a variable with the type Distribution.
Navigate to the "Add variables used in workflow" window and type "random" in the Variable
name section. Then select "user type" within the Variable type and choose Distribution under
the Probabilistic modeling variable type. Select "Start of run only" in the Initialisation option.
Click on Add variable to create the variable.
The probabilistic modeling function can be implemented within the workflow via the use of an
operation element. Place it on the canvas and name it "Set random". Connect it to the "Clear
data" element.
Access the operation element and select "Set as a normal distribution" under the Probabilistic
modeling category.
Multiple data points are needed to generate a signal. The desired number of points can be
stored in an assignment element which will then be passed to a Loop element generating the
desired signal at each iteration of the loop.
Place the assignment element on the canvas and name it "iCount". Connect it to the "Set
random" element.
Create the variable "iCount", which will store the number of data points for the modeled signal.
To do so, navigate to the "Add variables used in workflow" window and type "iCount" in the
"Variable name" section.
Select "Every call" in the Initialisation option. Set the starting value to be 0 and click on Add
variable.
Access the iCount assignment element and enter iCount variable in the "Variable" column.
Set iCount to 8192 which will determine the number of signal points.
The complete signal can be generated with the use of the Loop element. In this respect, the
signal will be made from the following functions:
1) Two sine waves of different frequencies and amplitudes.
2) Baseline drift (another very low frequency sine wave in this case).
3) Random noise.
Each of the functions will be used sequentially at each iteration to generate a data point.
Within the Loop element, enter i as a variable to increment and provide the following Loop
details:
Starting value: 0
End value: iCount-1
Loop increment: 1
Please note, since the variable i does not exist, RESOLVE will ask the user whether a new variable
is to be added. In this instance, select Integer as the variable type and click "Yes".
The goal now is to create multiple sin functions which will constitute the signal.
Connect the Loop element to an assignment element which will generate the first sin function of
the form Sin(2·π·i·0.1) where 0.1 is the frequency of the function. Name this assignment
element "Set sin".
Enter the sin function within the Commands tab of the assignment element. Set the return
assignment to be in the first column of the Source data set - Source.Column[0].Value[i].
Place another Assignment element on the canvas and name it "Add sin 2". This will add the
other sin function to the signal. After placing the assignment element on the canvas, connect it to
"Set sin".
Access "Add sin 2" and enter a sin function of the form 0.5*Sin(2*π·i·0.33), where 0.33 is the
frequency of the function and A = 0.5 is the amplitude.
Place another assignment element on the canvas. Name it "Add base" and connect it to the
"Add sin 2" element.
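The signal assembled by the "Set sin", "Add sin 2", "Add base" and "Set random" steps can be sketched in plain Python. The drift amplitude/frequency and the noise standard deviation below are illustrative assumptions, since the example leaves those values to the accompanying file:

```python
import math
import random

random.seed(0)
N = 8192  # iCount: the number of signal points

signal = []
for i in range(N):
    s = math.sin(2 * math.pi * i * 0.1)           # "Set sin": first sine, f = 0.1
    s += 0.5 * math.sin(2 * math.pi * i * 0.33)   # "Add sin 2": A = 0.5, f = 0.33
    s += 0.2 * math.sin(2 * math.pi * i * 0.001)  # "Add base": slow drift (assumed A and f)
    s += random.gauss(0.0, 0.1)                   # "Set random": normal noise (assumed sigma)
    signal.append(s)
```

Each pass of the loop plays the role of one iteration of the Loop element, producing one value of the first column of the Source data set.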
To build the workflow, place a workflow element on the main canvas and name it "RunFFTs".
In the workflow, use the Spectral Analysis data object to calculate frequencies of the signal. This
data object can be found under the Data analysis category of the Data object tab.
Access "RunFFTs", place an operation element on the canvas and name it "Do First FFT". Link
the element with the Start block using a link item.
Add a function called "Perform spectral analysis of an input column" under the Data Analysis
category.
Click Ok. RESOLVE will ask whether a new variable iRet is to be created. Click Yes.
Connect the "Do First FFT" operation to the End block to complete the workflow.
Access "Spectral Analysis" on the main canvas. Our analysis focuses on the first 512 samples
and assumes that the sampling rate or sampling frequency (i.e. the number of signal samples
per second) is 1 Hz.
Enter these values for the corresponding parameters in the Spectral Analysis object.
To enter the Sample rate directly, select "Sample rate" as the Analysis period definition and
then click Ok.
The generated spectrogram might suffer from background noise. This can be suppressed by
taking the average of multiple smaller periods from the signal.
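The averaging idea can be sketched with a naive DFT in plain Python (RESOLVE's Spectral Analysis object performs this internally; the test signal and its noise level here are assumptions chosen to mirror the example):

```python
import cmath
import math
import random

def dft_mag(x):
    """Magnitude spectrum of a real signal via a naive DFT (for illustration only)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))) / n
            for k in range(n // 2)]

random.seed(1)
fs = 1.0  # assumed sampling rate, Hz, matching the example
# Assumed test signal: the two sines from the example plus strong white noise.
signal = [math.sin(2 * math.pi * 0.1 * t) + 0.5 * math.sin(2 * math.pi * 0.33 * t)
          + random.gauss(0.0, 1.0) for t in range(2048)]

# Spectra of four successive 512-sample periods.
segs = [dft_mag(signal[s:s + 512]) for s in range(0, 2048, 512)]

single = segs[0]                                    # single-period analysis
avg = [sum(col) / len(segs) for col in zip(*segs)]  # multiple-period average

# Bin k corresponds to frequency k * fs / 512, so the sine at 0.1 Hz appears
# near bin 51 and the one at 0.33 Hz near bin 169.
```

Averaging the four period spectra keeps the peaks near bins 51 and 169 intact while lowering the variance of the noise floor between them.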
Implement the averaging of the signal via another Spectral analysis object. Place it on the
canvas and name it "MultipleFFT".
Enter "MultipleFFT" and select analysis of the first 512 samples with a sampling frequency
of 1 Hz. In addition, select the "Analyse multiple successive periods" option within
the "Perform single or multiple period analysis?" section. To supply the signal to the object,
extend the "RunFFTs" workflow by adding another operation element.
In order to do this, access "RunFFTs" workflow and place another operation element on the
canvas. Name it "Do Multi FFT" and connect to the End block.
Provide the following values for the parameters in "Do Multi FFT":
spectralAnalysis MultipleFFT
column Source.Column[0]
startindex 0
Assign return iRet
From the spectrogram we can see that there is significant background noise which has to be
removed. This is achieved with the multiple-period Spectral Analysis object: the noise is
suppressed because the spectra of multiple periods are averaged. The spectrogram of the
object is shown below.
From this graph we can clearly see three peaks, at 0.1, 0.33 and 0.45 Hz. These peaks
correspond to the frequencies used in the original sin waves.
3.8.18 Example 6.17: Wavelet Analysis
3.8.18.1 Overview
1. Example Introduction
From Example 6.16 we have seen that a Fourier Transformation decomposes a signal in time f
(t) into a number of sine waves. However, this method of transforming the signal does not identify
at what time a specific frequency occurred, as a Fourier Transformation is only applicable to
stationary signals. When we are analysing a non-stationary signal, which is comprised of
frequencies changing with time, a Fourier Transformation is not applicable. Instead, a Wavelet
Transformation can be used for the analysis. A Wavelet Transformation converts a signal f(t) into
a series of wavelets of different scales and positions and gives an engineer an idea of the
distribution of different frequencies of a signal in time.
In this example, we first generate an artificial signal which includes a step function affecting the
signal at some point in time. This step function can be an example of a short event which
influenced the signal during its recording. Then we apply the Wavelet Transformation to the
signal in order to find when the step function affected the signal. We do this because the
problem cannot be solved with a Fourier Transform as in the current example we are dealing
with a non-stationary signal.
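A minimal sketch of the idea in plain Python, using a Haar-style detail coefficient in place of RESOLVE's Wavelet data object (the signal length, frequencies and noise level are illustrative assumptions):

```python
import math
import random

random.seed(2)
N = 1024
# Assumed signal: a low-frequency sine plus mild noise, with a small step
# function switched on at the middle of the signal for 1/6 of its duration.
signal = [math.sin(2 * math.pi * 0.015 * i) + random.gauss(0.0, 0.1) for i in range(N)]
for i in range(N // 2, N // 2 + N // 6):
    signal[i] += 1.0

# Haar-style detail coefficient at one scale: the difference between the means
# of two adjacent windows. It is large wherever the signal level jumps.
scale = 128
detail = []
for t in range(N - 2 * scale):
    left = sum(signal[t:t + scale]) / scale
    right = sum(signal[t + scale:t + 2 * scale]) / scale
    detail.append(abs(right - left))

# The largest coefficients occur where the window boundary crosses the step's
# onset (index 512) or offset (index 682).
edge = max(range(len(detail)), key=detail.__getitem__) + scale
```

Unlike the Fourier spectrum, the detail coefficients are indexed by time, so their maximum localises when the step switches on (and off).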
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE
1
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_17-Wavelet_Analysis
This folder contains a file "WaveletAnalysisExample.rsl", which is the RESOLVE file required to
go through the example.
3.8.18.2 Step 1-Start a new RESOLVE file
Start RESOLVE, and create a new project using File | New or the icon.
Right click on the element and change the name of the workflow to "SetSource".
Place a DataSet on the canvas and name it "Source". The values of the signal will be collected
in the DataSet.
Access the "Source" data set and create two columns, Input and Target (units are not required).
When the data set is populated with data, it has to be cleared of all the previous data, excluding
the headings. This is done with the use of "Math library functions" in the workflow.
Access the "SetSource" workflow element and place an Operation element on the canvas.
Name it "Clear data" and connect it to the Start block.
Access the Clear data element. From the Math library functions category, add a global function
"Clear the data (leave the column headings) and the columns from the dataset" available under
DataSet. In the value column of the function select Source.
The analysed signal will include noise as well. To generate random fluctuations (noise) in the
signal, use a Probabilistic modeling function. To do this, create a variable with the type
Distribution first.
Then navigate to the "Add variables used in workflow" window and type "random" in the
Variable name section; select "user type" within the Variable type and choose Distribution
under the Probabilistic modeling variable type; select "Start of run only" in the Initialisation
options; click on Add variable to create the variable.
The probabilistic modeling function can be implemented within the workflow via the use of an
operation element. Place it on the canvas and name it "Set random". Connect it to the "Clear
data" element.
Access the operation element and select "Set as a normal distribution" under the Probabilistic
modeling category.
Multiple data points are needed to generate a signal. The desired number of points can be
stored in an assignment element which will then be passed to a Loop element generating the
desired signal at each iteration of the loop.
Place the assignment element on the canvas and name it "iCount". Connect it to the "Set
random" element.
Create the variable "iCount" which will store the number of data points for the modeled signal.
For that, navigate to the "Add variables used in workflow" window and type "iCount" in the
"Variable" name section.
Select "Every call" in the Initialisation option. Set the starting value to be 0 and click on Add
variable.
Access the iCount assignment element and enter iCount variable in the "Variable" column.
Set iCount to WaveletAnalysis.Settings.NumberOfDataSamples. This expression refers to
the number of signal points which will be entered within the Wavelet data object placed on the
canvas later.
The complete signal can be generated with the use of the Loop element. As in the previous
example, the signal will be built from several functions.
Each of the functions will be used sequentially at each iteration to generate a data point.
Within the Loop element, enter i as a variable to increment and provide the following Loop
details:
Starting value: 0
End value: iCount-1
Loop increment: 1
Please note, since the variable i does not exist, RESOLVE will ask the user whether a new variable
is to be added. In this instance, select Integer as the variable type and click "Yes".
The goal now is to create multiple sin functions which will constitute the signal.
Connect the Loop element to an assignment element which will generate the first sin function of
the form Sin(2·π·i·0.015) where 0.015 is the frequency of the function. Name this assignment
element "Set sin".
Enter the sin function within the Commands tab of the assignment element. Set the return
assignment to be in the first column of the Source data set - Source.Column[0].Value[i].
In the same assignment element, add a second row assigning 0 to Source.Column[1].Value[i].
This will populate the second column of the Source data set with zeros, which will be needed
later to extract a step function from the signal.
Place another Assignment element on the canvas and name it "Add sin 2". This will add the
other sin function to the signal. After placing the assignment element on the canvas, connect it to
"Set sin".
Access "Add sin 2" and add a sin function to the values of the first column of the Source data
store. This sin function will be 0.5*Sin(2*π·i·0.33), where 0.33 is the frequency of the function
and A = 0.5 is the amplitude.
Place another assignment element on the canvas. Name it "Add base" and connect it to the
"Add sin 2" element.
Now add random noise to the signal. Within "Add rnd", enter Source.Column[0].Value[i]
+random.NextRandomSample in the "set equal to" section.
Add a large spike artefact to the signal by increasing the 350th row of the time series by 3:
Source.Column[0].Value[350] + 3. This is added in an assignment element "Add spike"
linked to the Loop element.
Finally, include a small step function in the signal, which we will later want to find. This step
function will be generated with a loop element called "Step" connected to an assignment
element called "Add step".
Access the Loop element called "Step" and use icount/2 as the starting point for looping. This
indicates that the step function will start from the middle of the signal.
The duration of the step will be 1/6th of the duration of the signal. Therefore, the end value will
be icount/2 + icount/6.
At each iteration of the loop element, a value of one will be added to the signal via the "Add
step" assignment element.
The example file provided has workflows and objects annotated within to demonstrate the
functionality of the data object.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE
1
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_6-Data_Objects\Example_6_18-Neural_Networks
This folder contains a file "NeuralNetworks.rsa", which is an archive file that contains the
RESOLVE file required to go through the example. The archive file needs to be extracted either
in the current location or a location of the user’s choice.
3.8.20 Example 6.19: Python
3.8.20.1 Overview
Objective of this example
The objective of this example is to show how Python libraries can be applied in RESOLVE for
solving practical problems of the oil and gas industry. Since there is a wide range of freely
distributed Python libraries, they can be incorporated in engineering workflows in RESOLVE,
which makes the engineering toolkit more flexible and powerful.
One of the most important tasks when dealing with producing wells in a field is estimating their
producing rates based on a number of parameters such as their water cut, gas oil ratio and
flowing well head pressure. Whilst a detailed well model can be used to calculate the producing
rate of the well based on the inputs mentioned above, there is also a need for proxy models
of wells which can quickly give a reliable answer. In this instance, a neural network, which is
available in the free machine learning SciKit MLP library within Python, can be applied to predict
the liquid rate of a well based on the inputs discussed above.
In this example, we will initially use the VLP/IPR calculator to obtain the production rates of a
well from the associated GAP model. Then the generated data will be used to train a neural
network which will be created from the free machine learning SciKit MLP library within the
Python programming language. The trained neural network must reproduce the test data set
with a reasonable accuracy, and only then can it be used to predict the production rates of a well
based on its water cut, GOR and flowing well head pressure.
Structure of the neural network and its representation with a multi-layer perceptron (MLP)
Neural networks are computing systems which use multiple artificial neurons to train the
computer based on observed data. This computational approach helps to solve a broad range
of engineering problems.
A typical structure of a neural network is shown below: it consists of an input layer, a hidden layer
and an output layer.
In this example, the input layer will have three input parameters (i.e. FWHP, WCT and GOR),
there will be two hidden layers to create enough flexibility for computing, and the output layer will
contain the liquid rate.
We will use the multi-layer perceptron (MLP) algorithm taken from the SciKit library in Python to
create this network. This is a supervised learning algorithm which has the capability to learn non-
linear models.
The training will be done on a training data set, whereas testing the network will be done using
a testing data set.
Learning objectives
1. Interact with the Python data object via a visual workflow in RESOLVE.
2. Apply libraries of the Python data object to solve practical production engineering problems.
3. Create a neural network to predict production rates of a well.
Licenses required
To run this example, the user will need to have RESOLVE and GAP licenses.
…\resolve\Section_6-Data_Objects\Example_6_19\Python\Initial Files
a Python file python.py containing functions for creating and testing a neural network.
a data set which will be used for predicting well liquid rate from the trained network.
Start RESOLVE, and open a new project using File | New or the icon, and ensure that the
models are set to be reloaded when the forecast starts by going to the Options | System
Options section.
In the System properties section which appears on the screen, select “Single solve/optimisation
only” as the Forecast mode.
From the main menu, go to Edit System | Add Client program and from the resulting menu,
select “GAP”. Place it on the canvas and give a name to the label (for instance “GAP”).
The RESOLVE model will then be displayed as illustrated below:
Double click on the GAP icon and the following screen will appear.
Set up the location of the Petex Field.gap file which should be extracted from the Oil Field.gar
file.
3.8.20.5 Step 4- Defining distribution functions for the input parameters
Three input parameters will be used in calculating the performance of a well: water cut (WCT),
gas oil ratio (GOR) and flowing well head pressure (FWHP). The
quickest way to generate values for these parameters is to use distribution functions which
cover the full range of expected operating conditions. The calculated rates and input values can
then be employed to form training and test data sets which are respectively used to train and
test our neural network.
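Sampling the input space can be sketched in plain Python; the uniform ranges below are illustrative assumptions, whereas the example defines the actual distributions with RESOLVE Distribution data objects:

```python
import random

random.seed(0)
n_samples = 1000  # number of VLP/IPR calculations

# Assumed operating ranges, purely for illustration.
wct = [random.uniform(0.0, 0.9) for _ in range(n_samples)]       # water cut, fraction
gor = [random.uniform(200.0, 2000.0) for _ in range(n_samples)]  # gas oil ratio, scf/STB
fwhp = [random.uniform(150.0, 500.0) for _ in range(n_samples)]  # wellhead pressure, psig
```

Each triple (wct[i], gor[i], fwhp[i]) then forms one set of inputs to a VLP/IPR calculation.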
Add three Distribution data objects to the canvas using the DataObject icon.
To change the names of distribution objects, navigate to an object and click the right mouse
button. In the list of options which appear, select “Change Label” and then in the dialogue box
click “Yes”. Set the names as outlined below.
1) Enter the number of VLP/IPR calculations (each calculation will correspond to a set of input
parameters).
2) Generate the number of random samples from the already defined distributions of the input
parameters (i.e. GOR, WCT, and FWHP).
3) Use the input parameters to perform VLP/IPR calculations and then store the calculation
results in a FlexData store.
4) Train and test the neural network.
From the list of applications, place a Workflow onto the canvas, and place an OpenServer
object from the list of Data Objects onto the canvas as well.
Change the name of the OpenServer object to OS by navigating to the object and clicking the
right mouse button. In the list of options which appear, select “Change Label”.
Navigate to the “Add variables used in workflow” screen and create the
variable Return_vf of the “UI Return structure” type available under Workflow Utilities in the
user type drop-down list.
Access the operation element “Input”, click on “Add global function” in the operation block and
then use the “Return a user-entered value” option available under “User interface components”
accessed from Workflow Utilities.
The value of the Return_vf variable can now be assigned to the number of VLP/IPR
calculations. For this, place an assignment block onto the canvas and name it “Config”. Connect
it to the “Input” operation element.
Create the nSamples variable of the Integer type, which will store the value of the Return_vf
variable. Then create the vwf_VLPIPR variable of the “VLPIPRCalculator” type. This is
available under the GAP calculations variable user type which can be found under the user type
drop-down list.
Create the following columns in "Data_to_train": FWHP, WCT, GOR, and Liquid Rate. The first
three columns will be populated with the input values, whereas the last column will be populated
with the Liquid Rate calculated by the VLP/IPR calculator.
The inputs provided to the calculator must be specified in an operation block. To create this
block, navigate back to the “Calc_well_performance” workflow and place an operation block
next to the assignment block “Config”. Name this block “Get inputs” and connect it to the
assignment block “Config”.
In order to make sure that the Data_to_train data store is initially clear of any values each time
the workflow is executed, add an operation to the workflow which clears the data. Access the
"Get inputs" block and add the operation "Clear the data (leave the column headings) from
the data store".
Select the Data_to_train data store as the input value for the operation.
Generate input values for the VLP/IPR calculation using the distribution functions for GOR, WCT
and FWHP. For this, select the next operation called “Generate a number of random samples
from the distribution” under the “Probabilistic modelling” category in “Get inputs”, as shown
below.
This operation will require the following fields to be filled in: "distribution", "number of samples"
and "column". Select the FWHP distribution in the distribution field; then select nSamples in the
numberOfSamples field to define the number of input values used for calculations; to
complete the data entry in the operation, enter the column reference string in the Data_to_train
data store where the input values for the FWHP will be populated.
When the newly added operations are executed, the input values will be populated in the
FWHP, GOR and WCT columns of the Data_to_train data-set.
Create the variable sample of the Integer type by navigating to the "Add variables" option and,
after filling in the required fields (Variable type, with a Starting value of 0), click "Add variable".
This variable will index the input data points from the Data_to_train data store.
Access the “Loop samples” block and type sample in the Variable field. This variable will
increment the loop. Use 0 as the Starting value , nSamples-1 as the End value and 1 as the
Loop increment.
Then access the Assignment block and enter the input data to the VLP/IPR calculator as per
the description in section 5.13.8 of the Visual Workflow User Manual. The data entry and
explanation for each line is given below.
1) Enter the number of input variables which are to be provided for the calculation. Set
vwf_VLPIPR.Calculations[sample].InputVariableCount equal to 3, as three input
variables will need to be provided.
2) Enter the variable type for each input variable and set a value for each variable
vwf_VLPIPR.Calculations[sample].InputVariables[0].VarType equal to
TPDVariableType.GOR;
vwf_VLPIPR.Calculations[sample].InputVariables[1].VarType equal to
TPDVariableType.WATER_CUT;
vwf_VLPIPR.Calculations[sample].InputVariables[2].VarType equal to
TPDVariableType.FIRST_NODE_PRESSURE;
vwf_VLPIPR.Calculations[sample].InputVariables[0].Value equal to
Data_to_train.Column["GOR"].Value[sample];
vwf_VLPIPR.Calculations[sample].InputVariables[1].Value equal to
Data_to_train.Column["WCT"].Value[sample];
vwf_VLPIPR.Calculations[sample].InputVariables[2].Value equal to
Data_to_train.Column["FWHP"].Value[sample]
3) Enter the system in the GAP model containing the well which is to be calculated
vwf_VLPIPR.Calculations[sample].System equal to GAPSystem.Production
4) Enter the well name
vwf_VLPIPR.Calculations[sample].Well equal to "Well1"
Access the Calculate block and click on “Add global function”.
On the screen which appears on the display, select GAP calculations as the category of
operation and choose Execute calculator as the operation under GAP object calculation. Then
fill in the operation fields as follows:
The calculator object on which the calculation should be performed: vwf_VLPIPR;
The OpenServer which refers to a running GAP instance: OS.GAP[0].
Place a Loop block on the workflow canvas and name it Loop results.
Within "Loop results", select result as the variable in the Variable field. Enter 0 as the Starting
value for the loop and nSamples-1 as the End value, with the increment being 1.
To report the calculation results in the Data to train store, place an Assignment element onto the
canvas and connect it to the “Loop results” block. Name that block "report".
5.6 Creating a neural network using a Python data object and interacting with the object via
the Visual Workflow
This section provides a description of how to build a Python data object, using a prebuilt Python
script for creating a neural network, and how to interact with this data object via the Visual
Workflow.
To add the Python data object, navigate to the main screen and place the Python object on the
canvas first. Change the label of the object and name it Train.
Access the Python object and within the new screen click on the Open button. Select the file
“python.py” which contains the code to train the neural network.
Then type the name of the function which will be executed in the provided Python script -
Run_ann_regression.
This script contains the following sections: a) import Python libraries; b) save results on the local
drive; c) input values of the neural network parameters; d) divide the total data set into training
and testing data sets; e) scale the input and output data; f) plot the results.
Import Python libraries. These will be used for building the network, manipulating the data and
plotting the results. They include Scikit-learn, which is an open source machine learning
library that supports supervised and unsupervised learning. In our case, it is used to
prepare the data and build our neural network.
Create a function run_ann_regression(ds) with the input parameter ds, the training and
testing data set stored in "Array". All the inputs below are entered under the name of this
function.
ANN_layeres - the number of hidden layers in the network. In this example, we have two layers
with 50 neurons in each layer;
ANN_Number_of_iter - the maximum number of iterations. The solver iterates until
convergence, which is determined by the tolerance (ANN_training_tolerance), or until this
number of iterations is reached;
ANN_training_tolerance - the tolerance. When the loss or score is not improving by at least the
tolerance, convergence is considered to have been reached and the training stops;
Ann_print_to_screen - this defines whether to print progress messages to stdout. If “False” is
entered, then the progress messages will not be printed out;
ANN_solver - the solver for weight optimization. The default solver ‘adam’ works very well on
relatively large datasets (with thousands of training samples or more) in terms of both training
time and validation score;
ANN_learning - the learning rate schedule for weight updates. The argument ‘adaptive’ keeps
the learning rate constant at ‘learning_rate_init’ (default=0.001) as long as the training loss
keeps decreasing.
This data frame contains the training and testing arrays. To separate them, first create a testing
data set by picking every 50th row from the Array data set.
Then create a training data set by removing the testing data set from df.
Separate the input data from the output data for training and testing purposes.
Modify the input and output data for standardisation, using the functions of the Scikit-learn
Python library.
Create the Multi-layer Perceptron regressor with the input parameters defined earlier. This
will train the network.
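The train/test split described above can be sketched with plain Python lists (the actual script uses pandas and scikit-learn; the sample data here is hypothetical):

```python
# df: hypothetical list of [FWHP, WCT, GOR, LiquidRate] rows standing in for
# the data frame built from the Array argument.
df = [[100 + i, 0.1, 500, 1000 + i] for i in range(200)]

# Testing data set: every 50th row of the full data set.
test = df[::50]

# Training data set: everything that is not in the testing set.
train = [row for i, row in enumerate(df) if i % 50 != 0]

# Separate the inputs (first three columns) from the output (last column).
X_train = [row[:3] for row in train]
y_train = [row[3] for row in train]
```

The same separation is then repeated for the testing rows before scaling and fitting the regressor.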
The rest of the function “run_ann_regression” relates to the visualisation of the results. The final
line “return [mlp,scaler,out_scaler]” will retrieve the simulation results.
Please note, the function “predict” placed at the end of the script will be used for prediction of
the liquid rate. This is described in the next section of the manual.
To call this script from the workflow, place an Operation element on the canvas and name it “Train_ann”.
Within “Train_ann”, select the Python scripting category and then choose the “Add or set a
function argument” operation. In this operation enter the following data:
Train - the name of the current Python data object which will be called from this operation;
Data to train - the data store which will be supplied as the argument to the function in the
script;
Zero - the index of the argument supplied to the function (because only a single argument
will be passed);
ListofLists - the Python type of the argument to be passed.
Create the variable vwf_Specilaised_MLP of the string type by navigating to the “Add
variables” option. This variable will contain the output of the “run_ann_regression “ Python
function.
Then add a new operation under “Train_ann” - Execute the Python function under Python
scripting category. This will execute the "run_ann_regression" function.
Under the Input section of this operation, type Train for the Python object to execute.
In the “Assign return value to” field, type vwf_Specilaised_MLP. This will contain parameters of
the trained network which can then be used for predicting liquid rate from a new data set
containing only input parameters -FWHP, GOR and WC.
To create a new data set used as the input for prediction, place FlexDataStore on the canvas
and name it “Input_to_Predict”.
Then place another Python data object on the canvas and name it ”Predict”.
Access the Python object and within the new screen click on the Open button. Select the file
“python.py” which contains the code to predict the liquid rate using the trained neural network.
Then type the name of the function which will be executed in the provided Python script -predict.
The function "predict" contains a script for predicting the well liquid rate using the output from
the trained network.
To supply the input data to the new Python object via Workflow and predict the liquid rate, place
a Predict operation block within the Workflow and link it to the "Train_ann" operation.
Access the Predict operation block, and click on "Add global function" on the operation screen.
Then select “Add or set function argument” under the Python scripting category to supply the
input data to the Python object "Predict".
The second argument to be passed to the function will contain the output from the trained
network - Train.FunctionReturns["run_ann_regression"]. For this, add a global function
again and in the list of operations select “Add or set function argument” under the Python
scripting category.
Then add a new operation - Execute the Python function operation under Python scripting
category. This will execute the "predict" function.
The return of the function "predict" will contain the liquid rate, which can then be populated in a
new FlexDataStore. Let's name this data store "Output" and place it on the canvas.
As the final step, add the last operation in the Predict operation block which will populate the
output of the Python function to the "Output" data store.
For this, select the "Insert return into DataStore/FlexDataStore/DataSet" operation under the
Python scripting category and in the input section enter the following data:
Predict - the name of the current Python data object which will be called from the operation;
Output - the data store into which the return of the function will be written;
Zero - the index of the column at which the data should be written;
ListofLists - the Python type of the returned data.
Finish the workflow by placing a Terminator block on the canvas and connect it to the Predict
operation element.
In the dialog menu about the number of samples to generate, enter 1000.
While executing the algorithm, the graph which shows the training quality of the network from
the "run_ann_regression" function will appear on the screen, as shown below.
From this graph we can conclude that the network is reliable and can be used for predicting the
liquid rate.
Next, this network is used to calculate the liquid rate from the data in the "Input to predict" store.
The calculation results can then be checked in the Output data store, as shown below.
1. Example Introduction
The objective of this example is to introduce Global Optimisation and one of the tools in
RESOLVE which is used for this purpose: the Sequential Linear Programming optimiser.
Although not restricted to this situation, Global Optimisation opportunities often arise from the
interaction of a production system and a process plant. Performing integration in RESOLVE between the models
of these systems means that their interaction can be captured and understood, which may lead
to optimisation opportunities being identified. This example is concerned with such an
interaction.
The example consists of an LNG Field, where produced gas is processed into LNG and
associated Condensate. The overall objective is to maximise Condensate Production while
meeting a specific LNG demand.
Production is currently coming from three active Wells (W1, W2 and W3), all producing from the
same reservoir Res 1. Two more wells are to come on stream (W4 on 01/07/2010 and W5 on
01/11/2010), producing from two different reservoirs, as shown below.
Production is gathered at a manifold from where production flows to the Processing Plant.
At the Plant, CO2 is removed from the produced stream and the natural gas liquefied.
This process is captured by a simple XLS spreadsheet which calculates LNG production as
well as the associated Condensate production based on the produced fluid rate and
composition, as shown below.
The LNG demand to be honoured is 7780 tpd. At current production conditions, 350 mmscf/d
of 'Raw' Gas in GAP produces the LNG required to meet that demand.
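A quick back-of-envelope check makes the relationship between these two figures explicit: at the start of the forecast, the implied LNG yield is about 22.2 tonnes of LNG per mmscf of raw gas.

```python
# Back-of-envelope check of the implied LNG yield at initial conditions,
# using the figures quoted in the text.
lng_demand_tpd = 7780.0   # LNG demand, tonnes per day
raw_gas_mmscfd = 350.0    # 'Raw' gas rate in GAP, mmscf/d

yield_t_per_mmscf = lng_demand_tpd / raw_gas_mmscfd
print(round(yield_t_per_mmscf, 2))  # tonnes of LNG per mmscf of raw gas
```

It is this yield that changes through the forecast as the produced composition changes, which is what makes the fixed 350 mmscf/d rate eventually insufficient.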
2. Licenses Required
Running this example will require the following licenses to be available to the user:
RESOLVE: 1
GAP: 1
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_7-Global_Optimisation\Example_7_1-Global_Optimisation_Introduction
Select Options | Units and set the input and output units to Oilfield, then select OK.
Now is a good time to save the file using File | Save As..., and enter a file name (e.g.
Global_Optimisation_Intro.rsl).
Go to Step 2.
3.9.1.3 Step 2: Add GAP model
From the list of applications, select GAP and add it to the graphical window.
Double-click on GAP and browse for the GAP model extracted from the RESOLVE archive in
Step 1. Select the feature "Always save forecast snapshots". This saves a snapshot of each
prediction timestep in GAP, allowing the user to reload a copy of each snapshot and analyse
the performance at that date - this can be particularly useful for troubleshooting purposes. Select
'full optimization': in this example we are concerned with optimisation, and hence it is required
to use the GAP optimiser instead of the rule-based solver. Then select OK.
Note that from this screen it is possible to run GAP remotely on a cluster. Further information on
this can be found in the Setting up a Cluster section.
When OK is pressed, RESOLVE will take a few moments to load GAP and query the case. The
GAP production model contains three active wells and a separator. It is possible to open the
GAP model (it will be available on the Windows taskbar) to confirm the contents of
the GAP file.
Go to Step 3.
3.9.1.4 Step 3: Add Excel instance
Add an instance of Excel to the model. From the same list of applications as previously, select
Excel and add an Excel instance to the model.
Select 'OK'. By default, the Excel instance will come up with one input and one output icon.
Inputs are used to pass data into Excel, and outputs to extract data from it. In this example, the input
will be used to transfer the data required by the spreadsheet, i.e. the fluid composition, mass rate
and surface gas rate.
Link the source 'Sep' from GAP to the Excel input 'In-1' using the icon.
The data transfer between GAP and Excel can now be set up. To be able to pass compositional
data into Excel, the component names need to be defined in the Excel instance. Double-click on
the Excel instance, and in the 'Excel Details' tab:
Check 'Pass compositional data'
Click 'Setup Compositions'
Enter 'Composition1' as the composition name
Click Edit and enter the following list of components
N2
CO2
C1
C2
C3
IC4
NC4
IC5
NC5
C6
C7::10
C11::14
C15::17
C18::C24
Enter the 'Input Data' tab. Under 'Sheet name' select 'Sheet1', and send the solver gas rate to
c28.
Next click on 'Composition'. Under Composition select 'Composition1' that was created
previously. Enter b10 as the 'Compositional data cell': this corresponds to the top left corner of
the compositional table that is passed by RESOLVE.
Go to Step 4.
To obtain that list, go to Variables | Import Application Variables. The screen below will be
displayed, which allows you to import variables from the various applications present in the model.
From the GAP tab, click 'Edit variables'. Import the separator gas rate, oil rate, and maximum
gas rate (which can be found under the OpenServer variables | Constraint (input) variables
tab). This is done by selecting the desired variables from the list and clicking the red arrow.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
From the Excel tab, click 'Edit variables'. Import the LNG and condensate mass rates.
Click OK and add the variables for plotting by ticking all boxes in the Add to plot column.
Go to Step 5.
3.9.1.6 Step 5: Run the reference case
Enter the following schedule from Schedule | Forecast data. The forecast is to run for two
years from 1/1/2010 to 1/1/2012, with a one month time step.
Run the model by clicking the icon. It is required to map the components of GAP to those
that we have defined for Excel. This can be done when prompted by selecting pairs of
components from the two lists and clicking 'Add Individual Connection'.
When the run is complete, the results can be analysed. The following plot shows the GAP gas
rate as well as the LNG mass rate through the forecast. It can be observed that the LNG rate
quickly falls below the demand (marked by the horizontal line), while the GAP gas rate is
maintained at the defined plateau of 350 mmscf/d much longer.
This is because the composition of the fluid being produced is changing, and hence the amount
of raw gas required to produce a fixed amount of LNG is also changing. The GAP gas rate of 350
mmscf/d to produce 7780 tpd of LNG is valid at the beginning of the forecast, but becomes
invalid as the produced composition changes.
The impact that the fluid composition has on the LNG production is captured in the Process
model and this is why the optimisation problem becomes Global. In other words, both underlying
models (GAP and XLS) need to be part of the optimisation as no single application/module
captures the system response in its totality.
Therefore Global Optimisation is required to ensure that the 7780 tpd of LNG demand can be
met for as long as possible.
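The effect described above can be made concrete with a short numerical illustration. The yield values below are invented purely to show the trend: as the LNG yield per mmscf of raw gas drops, the raw gas required to hold 7780 tpd of LNG rises above the original 350 mmscf/d.

```python
# Illustration only - the declining yields are hypothetical, not taken
# from the example's spreadsheet. The point is that a fixed raw-gas rate
# stops meeting a fixed LNG demand once the yield falls.
lng_demand_tpd = 7780.0

for yield_t_per_mmscf in (22.23, 21.0, 20.0):  # hypothetical declining yields
    required_raw_gas = lng_demand_tpd / yield_t_per_mmscf
    print(f"yield {yield_t_per_mmscf:5.2f} t/mmscf -> "
          f"{required_raw_gas:6.1f} mmscf/d of raw gas needed")
```

Only an optimiser that sees both the GAP rates and the spreadsheet's yield can keep adjusting the raw-gas target as the composition drifts, which is exactly the Global Optimisation set up in the next step.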
Go to Step 6.
3.9.1.7 Step 6: Set up the global optimisation problem
There are several ways that this problem can be set up and solved in RESOLVE. One possible
approach is a two-level optimisation in which GAP optimises by controlling the wells' dP
chokes, while the RESOLVE optimiser controls the boundary within which GAP optimises (the Qgas
constraint).
At the top level, RESOLVE will maximise the condensate production by controlling the 'raw' GAP
gas rate, subject to the constraint that the LNG production should be less than 7780 tpd. At the
second level, the GAP optimiser will maximise production within the maximum gas rate supplied
by the SLP optimiser of RESOLVE.
For these methods to be successful, it is also fundamental to ensure that the objective functions
of the different Optimisation Levels are aligned.
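The two-level scheme can be sketched conceptually as an outer loop adjusting the gas-rate control around an inner, self-contained optimisation. Everything below is a toy stand-in: the inner "GAP optimiser", the plant yields, and the use of bisection in place of RESOLVE's actual SLP algorithm are all simplifications for illustration only.

```python
# Conceptual sketch of the two-level scheme - NOT RESOLVE's SLP code.
# A simple bisection stands in for the SLP outer loop, and toy functions
# stand in for GAP and the Excel plant model.

def gap_optimise(qgas_max):
    """Inner level: GAP maximises production subject to the gas-rate bound.
    Toy stand-in: produce up to field potential or the bound."""
    field_potential = 400.0  # mmscf/d, hypothetical
    return min(qgas_max, field_potential)

def plant(qgas):
    """Toy plant response: LNG (tpd) and condensate vs raw gas,
    with invented linear yields."""
    return 22.0 * qgas, 30.0 * qgas

def top_level(lng_limit=7780.0, lo=100.0, hi=400.0, tol=0.01):
    """Outer level: drive the gas-rate control so LNG just meets the limit;
    with monotone yields this also maximises condensate."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lng, _condensate = plant(gap_optimise(mid))
        if lng > lng_limit:
            hi = mid   # constraint violated: reduce the control
        else:
            lo = mid   # feasible: push the control up
    return lo

qgas_opt = top_level()
```

The alignment requirement in the text corresponds to `plant` and `gap_optimise` both increasing with the control: if the inner objective pulled against the outer one, the outer loop's search direction would be misleading.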
To enable global optimisation, go to Options | System options, and set the 'Forecast mode'
to 'Full forecast with global optimisation'.
The optimisation is set up via Optimisation | Setup: this is where the objective function,
controls and constraints are defined.
In this case the Objective Function is located in the LNG Plant module. Select the corresponding
tab and click on Edit to define it.
Set the Constraint (also located in the LNG Plant module) in cell C30. This is the LNG
production and should be limited to 7780 tpd.
The OpenServer variable for the separator maximum gas rate needs to be picked up from GAP
and pasted into the corresponding cell, as shown below. Upper and lower bounds for the control
variable should also be provided.
Finally, it is required to set up some parameters relating to the optimisation and the way that
derivatives of the objective function are obtained. This is done from Optimisation | Summary.
Tick 'Retain optimisation of underlying application': this is to ensure that within every iteration of
the RESOLVE optimiser, GAP performs optimisation as well.
Click on 'Top level'. In this screen we can enable or disable certain objective functions,
constraints and controls, as well as introduce scheduling for the optimisation.
Click 'Edit control variable parameters' to define how RESOLVE should obtain these
derivatives. For this case, we can tell RESOLVE not to perturb the maximum gas rate by less than
1 MMscf/d (starting with 5) and not to change the maximum gas rate by more than 10 MMscf/d
in a single step.
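One way to interpret these settings (an assumption about the mechanics, not RESOLVE's internal code) is that derivatives are taken by finite difference with a lower bound on the perturbation size, while moves of the control variable are clamped to a maximum step per iteration:

```python
# Hedged sketch of how the perturbation settings might act - the objective
# function and the numbers are hypothetical.

def fd_derivative(f, x, step, min_step=1.0):
    """Forward finite difference, never perturbing by less than min_step
    (here 1 MMscf/d)."""
    h = max(step, min_step)
    return (f(x + h) - f(x)) / h

def clamped_move(x, requested_change, max_step=10.0):
    """Limit the change of the control variable to max_step per iteration
    (here 10 MMscf/d)."""
    return x + max(-max_step, min(max_step, requested_change))

f = lambda q: -0.01 * (q - 360.0) ** 2   # hypothetical objective in q (MMscf/d)
slope = fd_derivative(f, 350.0, step=5.0)  # starting perturbation of 5
new_q = clamped_move(350.0, 100.0 * slope)  # a large requested move is clamped
```

A minimum perturbation protects the derivative from solver noise; a maximum step keeps each linearised SLP move inside the region where the linear model is trustworthy.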
Run the model by clicking the icon. As optimisation is being performed, an 'Optimisation'
tab is visible and provides information on the current optimisation calculations. The visible tabs
are the following:
Table: displays the calculated derivatives
Ctrl Results: displays the values of the control parameters
Fn Results: displays the value of the objective function as well as the value of any variable on
which there is a constraint
Overall: displays the final optimised result for that time step.
Once the run is completed, the results can be analysed. The following plot shows the GAP gas
rate and the LNG mass rate. It can be seen that the LNG rate is now maintained at the demand
for as long as is physically possible.
As a result of this double optimisation, condensate production is also higher than in the
reference case.
This example illustrates the use of the SLP optimiser of RESOLVE, and the concept of global
optimisation. No single application contains all the information about a given problem: by
performing integration we can capture the interaction between these applications and identify
optimisation opportunities. In this example, the process was captured in Excel but the
information about the changing composition of the produced fluid was held in GAP. By
performing integration and optimisation, it has been possible to generate a production forecast
which honours a fixed LNG demand and results in increased condensate production.
3.9.2 Example 7.2: GAP-Process Optimisation
3.9.2.1 Example 7.2.1: GAP - UniSim Optimisation
3.9.2.1.1 Overview
1. Example Introduction
The objective of this section is to demonstrate how to optimise a surface network - plant
simulation coupled model by using the RESOLVE optimisation capabilities.
This example builds on Example 3.1 and it is recommended that the user follows this example
first. In that example, RESOLVE was used to couple a GAP model of an oil field and a UniSim
model of the compression train for compressing the associated gas.
The GAP model contains 9 producing oil wells and is shown below.
In Example 3.1, these models were coupled in RESOLVE. The connection between the
separator in GAP and the inlet of UniSim means that at every timestep, the pressure and
temperature of the separator, the composition and the total mass rate are passed from GAP to
the UniSim 'Inlet' feed.
In this field, the main requirement is that the associated gas must be compressed to 1300 psig
in order to join an existing export line. In Example 3.1, it was verified that the process model
could provide the necessary pressure throughout the forecast.
For the first year of the forecast, the delivery pressure remains close to 1300 psig while the
wells are being choked: this means that the process is limiting the production. For the remaining
period of the forecast, the wells are fully opened and the delivery pressure exceeds 1300 psig:
therefore the production system is limiting the production.
The current production constraint imposed on the field is due to the requirement to compress the
gas from 150 psig (Separator Pressure) to 1300 psig. The ability of the compressors to deliver
the required discharge pressure is a function of the in-situ gas volume, which in turn depends on
the suction pressure. This makes the field constraint dependent on the boundary condition
between the production system and the process (i.e. the separator pressure).
To meet this constraint, wells are currently choked back. This means that we are losing energy
at the well chokes which could be transferred to the process by increasing the separator
pressure (and hence increasing the compressors suction pressure).
While increasing the separator pressure will increase the amount of gas that the compressors
can handle for a given discharge pressure, it will also increase the back pressure experienced
by the wells (affecting their potential). Finding the optimum separator pressure (boundary
condition between the production system and the process) is a Global Optimisation problem
which requires integrating and solving simultaneously both systems.
This optimisation opportunity is easy to visualise if we plot both the Field Potential and the Process
handling ability as a function of the boundary pressure (Separator).
Process Potential:
If we assume constant producing GOR, the ability of the process to handle oil production is
directly proportional to the ability to compress the producing gas (to the required delivery
pressure of 1300 psig). Hence, the ability of the process to handle the produced oil increases
as the suction pressure (separator pressure) increases.
Taking both potentials into account, the true potential of the Field becomes:
We can see from the above plot that we can increase production by increasing the separator
pressure up to the point where the potential of the wells + network becomes smaller than the
ability of the process to handle production. The Global Optimisation problem to solve is to find
where this optimum is.
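The crossover argument can be illustrated numerically. The curve shapes and all numbers below are invented for illustration only: deliverable production is the smaller of the field potential (falling with separator pressure) and the process handling ability (rising with it), so the best operating point sits at their crossover.

```python
# Toy illustration of the crossover argument - curves and numbers are
# hypothetical, not from the GAP/UniSim models in this example.

def field_potential(p_sep):
    """Wells + network potential, decreasing with back pressure (stb/d)."""
    return 60000.0 - 150.0 * p_sep

def process_potential(p_sep):
    """Process handling ability, increasing with suction pressure (stb/d)."""
    return 10000.0 + 180.0 * p_sep

# Deliverable production is the smaller of the two; search the 100-200 psig
# control range used later in the example for the pressure maximising it.
best_p = max(range(100, 201),
             key=lambda p: min(field_potential(p), process_potential(p)))
```

With these made-up curves the optimum lands near the pressure where the two lines cross; as the field depletes, `field_potential` shifts downward and the crossover (and hence the optimum separator pressure) moves with it, which is why the optimiser must re-solve at every timestep.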
As conditions change in the field with time, this optimum separator pressure will also change.
This can be visualised on the plot below: as the potential of the production network decreases
with time, the optimum operating conditions in the field change. Hence, we will also look at how
to dynamically take this into account during a 3-year forecast.
Note that in this example, the separator's gas constraint has been removed: this was previously
used as a proxy constraint to ensure that the delivery pressure of 1300 psig could be met at a
separator pressure of 150 psig. We are now concerned with global optimisation of the system,
and hence this proxy constraint can be removed.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both UniSim and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_7-Global_Optimisation\Example_7_2_1-GAP_UniSim_Optimisation
This folder contains a file "GAP UniSim Optimisation Start.rsa", which is a "RESOLVE archive
file" containing the RESOLVE file, UniSim file, GAP file and other associated files required to
go through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.9.2.1.2 Step 1 - Enable optimisation
Open the RESOLVE model called 'GAP UniSim.rsl' which was extracted from the model archive.
This contains the model built in Example 3.1.
It is first necessary to enable the optimisation. Do this from the Options | System options
menu item and change the "Forecast mode" to "Full forecast with global optimisation", as
illustrated below.
Go to Step 2
3.9.2.1.3 Step 2 - Setup the optimisation problem
In this step, the optimisation problem is formulated. This is done by going to the Optimisation
menu.
Enter the Optimisation | Setup dialog, which enables you to define the problem's objective
function, control variables and constraints. These can be located within different applications,
which is what constitutes the strength of global optimisation in RESOLVE.
Select the GAP tab, and in front of 'No objective function' click on Edit. Select Sep1 as the
equipment and Oil Rate as the variable and click Set: this will define the separator oil rate as
the objective function. By default, the optimiser will try to maximise this quantity, but it is also
possible to minimise an objective function by selecting the corresponding radio button.
Next go to the Controls tab, and click on Vary separator pressure. This will automatically add
the separator pressure of GAP as a control variable. Lower and upper bounds for each control
should be provided: set these to 100 and 200 psig. Click OK.
Note that any variable in GAP can be used as a control variable by pasting the corresponding
OpenServer string.
In the UniSim tab, in front of 'No constraints' select Edit. Select the Sales stream, then the
stream Pressure. This should be defined to always be greater than 1300 psig, then click Add.
Select OK to return to the optimisation setup dialog, and select OK again to return to the main
screen of RESOLVE.
Next click on Top level: this menu allows you to set up additional parameters regarding the
optimisation, introduce scheduling of the optimisers, and define the perturbation steps used to
obtain derivatives of the objective function with respect to the control variables.
For this case, we can tell RESOLVE not to perturb the separator pressure by less than 5 psi
(starting with 10) and not to change the separator pressure by more than 200 psi in a single step.
Go to Step 3.
3.9.2.1.4 Step 3 - Run the forecast
To illustrate the optimisation process, first run a single step of the forecast (click on the " "
button).
A series of iterations are going to be taken to meet the constraint. An additional screen (i.e.
optimisation progress) comes up during the run.
At the end of the timestep the screen will have the following appearance:
In the screenshot above, the "function results" tab is displayed. Along the top, the iteration
number is displayed: two iterations have been taken and the constraint was met (i.e. within
tolerance) on the second step.
It is also possible to look at the separator pressure set at each iteration by looking at the
"control results" tab.
The simulation can now be run to the end (" "). The above table will be displayed at each
timestep to show the iterations that are taken.
Go to Step 4.
3.9.2.1.5 Step 4 - Analyse the results
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon. These results can also be displayed as the run is proceeding.
The variables available to plot are those that have been imported to RESOLVE in Example 3.1.
The following plot shows the compressor train discharge pressure, obtained from Example 3.1
(in red) and from the optimisation (in green). It can be observed that the requirement of 1300
psig is met exactly for the duration of the forecast.
The following plots show the oil rate, obtained from Example 3.1 (in red) and from the
optimisation (in green), as well as the cumulative production. The optimisation results in a much
higher oil rate at the beginning of the forecast, and a higher cumulative production. After some
time, the production rate drops below the reference case: this is expected as the field is
depleting faster in the reference case, however the optimisation would result in a higher NPV as
future profits are generally discounted.
This has been obtained by varying the separator pressure, as shown below. At the start of the
forecast, the separator pressure needs to be higher than in the reference case to increase
production. This is expected as the process was found to be initially limiting the production. In
the reference case, we were losing energy at the wellhead chokes, and this energy is now
transferred to the compression train by increasing the separator pressure.
The optimum separator pressure also changes with time, as the field is depleting. The obtained
results and optimum separator pressure are therefore consistent with expectations, as
illustrated in the system performance curves of the Overview.
1. Example Introduction
The objective of this section is to demonstrate how to optimise a surface network - plant
simulation coupled model by using the RESOLVE optimisation capabilities.
This example builds on Example 3.2 and it is recommended that the user follows this example
first. In that example, RESOLVE was used to couple a GAP model of an oil field and a Hysys
model of the compression train for compressing the associated gas.
The GAP model contains 9 producing oil wells and is shown below.
In Example 3.2, these models were coupled in RESOLVE. The connection between the
separator in GAP and the inlet of Hysys means that at every timestep, the pressure and
temperature of the separator, the composition and the total mass rate are passed from GAP to
the Hysys 'Inlet' feed.
In this field, the main requirement is that the associated gas must be compressed to 1300 psig
in order to join an existing export line. In Example 3.2, it was verified that the process model
could provide the necessary pressure throughout the forecast.
For the first year of the forecast, the delivery pressure remains close to 1300 psig while the
wells are being choked: this means that the process is limiting the production. For the remaining
period of the forecast, the wells are fully opened and the delivery pressure exceeds 1300 psig:
therefore the production system is limiting the production.
The current production constraint imposed on the field is due to the requirement to compress the
gas from 150 psig (Separator Pressure) to 1300 psig. The ability of the compressors to deliver
the required discharge pressure is a function of the in-situ gas volume, which in turn depends on
the suction pressure. This makes the field constraint dependent on the boundary condition
between the production system and the process (i.e. the separator pressure).
To meet this constraint, wells are currently choked back. This means that we are losing energy
at the well chokes which could be transferred to the process by increasing the separator
pressure (and hence increasing the compressors suction pressure).
While increasing the separator pressure will increase the amount of gas that the compressors
can handle for a given discharge pressure, it will also increase the back pressure experienced
by the wells (affecting their potential). Finding the optimum separator pressure (boundary
condition between the production system and the process) is a Global Optimisation problem
which requires integrating and solving simultaneously both systems.
This optimisation opportunity is easy to visualise if we plot both the Field Potential and the Process
handling ability as a function of the boundary pressure (Separator).
Process Potential:
If we assume constant producing GOR, the ability of the process to handle oil production is
directly proportional to the ability to compress the producing gas (to the required delivery
pressure of 1300 psig). Hence, the ability of the process to handle the produced oil increases
as the suction pressure (separator pressure) increases.
Taking both potentials into account, the true potential of the Field becomes:
We can see from the above plot that we can increase production by increasing the separator
pressure up to the point where the potential of the wells + network becomes smaller than the
ability of the process to handle production. The Global Optimisation problem to solve is to find
where this optimum is.
As conditions change in the field with time, this optimum separator pressure will also change.
This can be visualised on the plot below: as the potential of the production network decreases
with time, the optimum operating conditions in the field change. Hence, we will also look at how
to dynamically take this into account during a 3-year forecast.
Note that in this example, the separator's gas constraint has been removed: this was previously
used as a proxy constraint to ensure that the delivery pressure of 1300 psig could be met at a
separator pressure of 150 psig. We are now concerned with global optimisation of the system,
and hence this proxy constraint can be removed.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both Hysys and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_7-Global_Optimisation\Example_7_2_2-GAP_Hysys_Optimisation
This folder contains a file "GAP Hysys Optimisation Start.rsa", which is a "RESOLVE archive
file" containing the RESOLVE file, Hysys file, GAP file and other associated files required to
go through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.9.2.2.2 Step 1 - Enable optimisation
Open the RESOLVE model called 'GAP Hysys.rsl' which was extracted from the model archive.
This contains the model built in Example 3.2.
It is first necessary to enable the optimisation. Do this from the Options | System options
menu item and change the "Forecast mode" to "Full forecast with global optimisation", as
illustrated below.
Go to Step 2
3.9.2.2.3 Step 2 - Setup the optimisation problem
In this step, the optimisation problem is formulated. This is done by going to the Optimisation
menu.
Enter the Optimisation | Setup dialog, which enables you to define the problem's objective
function, control variables and constraints. These can be located within different applications,
which is what constitutes the strength of global optimisation in RESOLVE.
Select the GAP tab, and in front of 'No objective function' click on Edit. Select Sep1 as the
equipment and Oil Rate as the variable and click Set: this will define the separator oil rate as
the objective function. By default, the optimiser will try to maximise this quantity, but it is also
possible to minimise an objective function by selecting the corresponding radio button.
Next go to the Controls tab, and click on Vary separator pressure. This will automatically add
the separator pressure of GAP as a control variable. Lower and upper bounds for each control
should be provided: set these to 100 and 200 psig. Click OK.
Note that any variable in GAP can be used as a control variable by pasting the corresponding
OpenServer string.
In the Hysys tab, in front of 'No constraints' select Edit. Select the Sales stream, then the stream
Pressure. This should be defined to always be greater than 1300 psig, then click Add.
Select OK to return to the optimisation setup dialog, and select OK again to return to the main
screen of RESOLVE.
Next click on Top level: this menu allows you to set up additional parameters regarding the
optimisation, introduce scheduling of the optimisers, and define the perturbation steps used to
obtain derivatives of the objective function with respect to the control variables.
For this case, we can tell RESOLVE not to perturb the separator pressure by less than 5 psi
(starting with 10) and not to change the separator pressure by more than 200 psi in a single step.
Go to Step 3.
3.9.2.2.4 Step 3 - Run the forecast
To illustrate the optimisation process, first run a single step of the forecast (click on the " "
button).
A series of iterations are going to be taken to meet the constraint. An additional screen (i.e.
optimisation progress) comes up during the run.
At the end of the timestep the screen will have the following appearance:
In the screenshot above, the "function results" tab is displayed. Along the top, the iteration
number is displayed: two iterations have been taken and the constraint was met (i.e. within
tolerance) on the second step.
It is also possible to look at the separator pressure set at each iteration by looking at the
"control results" tab.
The simulation can now be run to the end (" "). The above table will be displayed at each
timestep to show the iterations that are taken.
Go to Step 4.
3.9.2.2.5 Step 4 - Analyse the results
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon. These results can also be displayed as the run is proceeding.
The variables available to plot are those that have been imported to RESOLVE in Example 3.2.
The following plot shows the compressor train discharge pressure, obtained from Example 3.2
(in red) and from the optimisation (in green). It can be observed that the requirement of 1300
psig is met exactly for the duration of the forecast.
The following plots show the oil rate, obtained from Example 3.2 (in red) and from the
optimisation (in green), as well as the cumulative production. The optimisation results in a much higher oil rate at the beginning of the forecast and a higher cumulative production. After some time, the production rate drops below the reference case: this is expected, as the field is depleting faster in the reference case; however, the optimisation would still result in a higher NPV, since future profits are generally discounted.
This has been obtained by varying the separator pressure, as shown below. At the start of the
forecast, the separator pressure needs to be higher than in the reference case to increase
production. This is expected as the process was found to be initially limiting the production. In
the reference case, we were losing energy at the wellhead chokes, and this energy is now
transferred to the compression train by increasing the separator pressure.
The optimum separator pressure also changes with time, as the field is depleting. The obtained
results and optimum separator pressure are therefore consistent with expectations, as
illustrated in the system performance curves of the Overview.
1. Example Introduction
The objective of this section is to demonstrate how to optimise a surface network and process plant as one coupled system.
This example builds on Example 3.3 and it is recommended that the user follows this example
first. In that example, RESOLVE was used to couple a GAP model of an oil field and a ProII
model of the compression train for compressing the associated gas.
The GAP model contains 9 producing oil wells and is shown below.
In Example 3.3, these models were coupled in RESOLVE. The connection between the
separator in GAP and the inlet of ProII means that at every timestep, the pressure and
temperature of the separator, the composition and the total mass rate are passed from GAP to
the ProII 'Inlet' feed.
In this field, the main requirement is that the associated gas must be compressed to 1300 psig
in order to join an existing export line. In Example 3.3, it was verified that the process model
could provide the necessary pressure throughout the forecast.
For the first year of the forecast, the delivery pressure remains close to 1300 psig while the
wells are being choked: this means that the process is limiting the production. For the remaining
period of the forecast, the wells are fully opened and the delivery pressure exceeds 1300 psig:
therefore the production system is limiting the production.
The current production constraint imposed on the field is due to the requirement to compress the gas from 150 psig (Separator Pressure) to 1300 psig. The ability of the compressors to deliver the required discharge pressure is a function of the in-situ gas volume, which in turn depends on the suction pressure. This makes the field constraint dependent on the boundary condition between the production system and the process (i.e. the separator pressure).
To meet this constraint, wells are currently choked back. This means that we are losing energy
at the well chokes which could be transferred to the process by increasing the separator
pressure (and hence increasing the compressors' suction pressure).
While increasing the separator pressure will increase the amount of gas that the compressors
can handle for a given discharge pressure, it will also increase the back pressure experienced
by the wells (affecting their potential). Finding the optimum separator pressure (boundary
condition between the production system and the process) is a Global Optimisation problem
which requires integrating and solving simultaneously both systems.
This optimisation opportunity is easy to visualise if we plot both the Field Potential and the Process handling ability as a function of the boundary pressure (separator pressure):
Process Potential:
If we assume constant producing GOR, the ability of the process to handle oil production is
directly proportional to the ability to compress the producing gas (to the required delivery
pressure of 1300 psig). Hence, the ability of the process to handle the produced oil increases
as the suction pressure (separator pressure) increases.
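Under the constant-GOR assumption above, the oil-handling capacity of the process is simply its gas-handling capacity divided by the GOR. A one-line illustration, using hypothetical numbers (not taken from the example model):

```python
# Hypothetical numbers: a constant producing GOR and a gas-handling limit
# for the compressors at the required 1300 psig delivery pressure.
GOR = 800.0            # scf/STB, assumed constant
gas_capacity = 8.0e6   # scf/d the compressors can deliver at 1300 psig

# Oil rate the process can handle at these conditions, STB/d
max_oil_rate = gas_capacity / GOR
# max_oil_rate == 10000.0
```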
Taking both potentials into account, the true potential of the Field becomes:
We can see from the above plot that we can increase production by increasing the separator
pressure up to the point where the potential of the wells + network becomes smaller than the
ability of the process to handle production. The Global Optimisation problem to solve is to find
where this optimum is.
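The reasoning above can be sketched numerically. In this hedged illustration (the curves and every number in them are made up, not taken from the example model), the field potential declines with separator pressure while the process capacity rises with it; the deliverable rate is the minimum of the two, and the optimum sits where the curves cross:

```python
# Illustrative sketch only: the field potential falls as separator
# pressure rises, while the process handling capacity rises with it.
# The deliverable rate is the minimum of the two curves, so the
# optimum separator pressure sits where they cross.

def field_potential(p_sep):
    """Hypothetical well + network potential (STB/d), declining with back pressure."""
    return 20000.0 - 60.0 * p_sep

def process_capacity(p_sep):
    """Hypothetical compression-limited handling capacity (STB/d), rising with suction pressure."""
    return 40.0 * p_sep + 5000.0

def deliverable(p_sep):
    """The field can only deliver what both systems allow."""
    return min(field_potential(p_sep), process_capacity(p_sep))

# Scan an assumed 100-200 psig separator pressure range
best = max(range(100, 201), key=deliverable)
# With these made-up curves, the crossing is at 150 psig (11000 STB/d)
```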
As conditions change in the field with time, this optimum separator pressure will also change.
This can be visualised on the plot below: as the potential of the production network decreases
with time, the optimum operating conditions in the field change. Hence, we will also look at how
to dynamically take this into account during a 3-year forecast.
Note that in this example, the separator's gas constraint has been removed: this was previously
used as a proxy constraint to ensure that the delivery pressure of 1300 psig could be met at a
separator pressure of 150 psig. We are now concerned with global optimisation of the system,
and hence this proxy constraint can be removed.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both ProII and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_7-Global_Optimisation\Example_7_2_3-GAP_ProII_Optimisation
This folder contains a file "GAP ProII Optimisation Start.rsa" which is a "RESOLVE archive file"
that contains the RESOLVE file, ProII file, GAP file and other associated files required to go
through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.9.2.3.2 Step 1 - Enable optimisation
Open the RESOLVE model called 'GAP ProII.rsl' which was extracted from the model archive.
This contains the model built in Example 3.3.
It is first necessary to enable the optimisation. Do this from the Options | System options
menu item and change the "Forecast mode" to "Full forecast with global optimisation", as
illustrated below.
Go to Step 2
3.9.2.3.3 Step 2 - Setup the optimisation problem
In this step, the optimisation problem is formulated. This is done by going to the Optimisation
menu.
Enter the Optimisation | Setup dialog, which enables the user to define the problem's objective function, control variables and constraints. These can be located within different applications, which is what constitutes the strength of global optimisation in RESOLVE.
Select the GAP tab, and in front of 'No objective function' click on Edit. Select Sep1 as the
equipment and Oil Rate as the variable and click Set: this will define the separator oil rate as
the objective function. By default, the optimiser will try to maximise this quantity, but it is also
possible to minimise an objective function by selecting the corresponding radio button.
Next go to the Controls tab, and click on Vary separator pressure. This will automatically add
the separator pressure of GAP as a control variable. Lower and upper bounds for each control
should be provided: set these to 100 and 200 psig. Click OK.
Note that any variable in GAP can be used as a control variable by pasting the corresponding
OpenServer string.
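OpenServer variable strings follow a dotted, bracketed pattern of the form application.section[{label}].variable. As a hedged illustration only — the tag pattern below is hypothetical, and the exact string for a given model should always be copied from the application itself — a small helper can compose such a string:

```python
def gap_sep_tag(label: str, variable: str) -> str:
    """Compose an illustrative GAP separator OpenServer tag.
    The pattern shown is an assumption; copy real tags from the model."""
    return f"GAP.MOD[{{PROD}}].SEP[{{{label}}}].{variable}"

tag = gap_sep_tag("Sep1", "SolverPres")
# tag == "GAP.MOD[{PROD}].SEP[{Sep1}].SolverPres"
```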
In the ProII tab, in front of 'No constraints' select Edit. Select the Sales stream, then the stream
Pressure. This should be defined to always be greater than 1300 psig, then click Add.
Select OK to return to the optimisation setup dialog, and select OK again to return to the main
screen of RESOLVE.
Next click on Top level: this menu allows the user to set up additional parameters for the
optimisation, introduce scheduling of the optimisers, and define the perturbation steps used to
obtain derivatives of the objective function with respect to the control variables.
For this case, we can tell RESOLVE not to perturb the separator pressure by less than 5 psi (starting with 10 psi) and not to change the separator pressure by more than 200 psi in a single step.
Go to Step 3.
3.9.2.3.4 Step 3 - Run the forecast
To illustrate the optimisation process, first run a single step of the forecast (click on the " "
button).
A series of iterations will be taken to meet the constraint, and an additional screen (the optimisation progress screen) appears during the run.
At the end of the timestep the screen will have the following appearance:
In the screenshot above, the "function results" tab is displayed. The iteration number is shown along the top: two iterations have been taken, and the constraint was met (i.e. within tolerance) on the second.
It is also possible to look at the separator pressure set at each iteration by looking at the
"control results" tab.
The simulation can now be run to the end (" "). The above table will be displayed at each
timestep to show the iterations that are taken.
Go to Step 4.
3.9.2.3.5 Step 4 - Analyse the results
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon. These results can also be displayed as the run is proceeding.
The variables available to plot are those that have been imported to RESOLVE in Example 3.3.
The following plot shows the compressor train discharge pressure, obtained from Example 3.3
(in red) and from the optimisation (in green). It can be observed that the requirement of 1300
psig is met exactly for the duration of the forecast.
The following plots show the oil rate, obtained from Example 3.3 (in red) and from the
optimisation (in green), as well as the cumulative production. The optimisation results in a much higher oil rate at the beginning of the forecast and a higher cumulative production. After some time, the production rate drops below the reference case: this is expected, as the field is depleting faster in the reference case; however, the optimisation would still result in a higher NPV, since future profits are generally discounted.
This has been obtained by varying the separator pressure, as shown below. At the start of the
forecast, the separator pressure needs to be higher than in the reference case to increase
production. This is expected as the process was found to be initially limiting the production. In
the reference case, we were losing energy at the wellhead chokes, and this energy is now
transferred to the compression train by increasing the separator pressure.
The optimum separator pressure also changes with time, as the field is depleting. The obtained
results and optimum separator pressure are therefore consistent with expectations, as
illustrated in the system performance curves of the Overview.
The objective of this section is to demonstrate how to control and trigger the optimisers of
RESOLVE dynamically, i.e. based on the conditions in the field, from a workflow.
This example builds on Example 7.2.1 and it is recommended that the user follows this example
first. In that example, RESOLVE was used to perform optimisation of a coupled GAP model of
an oil field and a UniSim model of the compression train for compressing the associated gas.
The GAP model contains 9 producing oil wells and is shown below.
In this field, the main requirement is that the associated gas must be compressed to 1300 psig
in order to join an existing export line. In Example 7.2.1, the optimisation opportunity was
discussed: for every separator pressure the field has a given potential and the compressor train
has a maximum amount of gas that it can process.
Taking both potentials into account, it becomes apparent that there exists a separator pressure
which maximises the produced oil rate.
It was also discussed how the optimum separator pressure changes with time, as the
production conditions change. This is illustrated below: when the overall potential of the field
declines, the optimum separator pressure changes.
As a result, in Example 7.2.1 optimisation was performed at every time step in order to
maximise production. This resulted in the separator pressure profile shown in green below.
For practical reasons however, we may not want to change the separator pressure in the field
every month, and we wish to generate a forecast that reflects this. The main requirement in this
field is that the compressor discharge pressure be above 1300 psig. We wish to trigger an optimisation, and a change of separator pressure, only if the delivery pressure deviates from this value by more than 20 psi.
Therefore the objective of this example is to set up the RESOLVE model such that a change of
separator pressure (i.e. performing optimisation) only happens if the delivery pressure is
greater than 1320 psig or lower than 1280 psig.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both UniSim and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_7-Global_Optimisation\Example_7_3_1-
GAP_UniSim_Optimiser_Control
This folder contains a file "GAP UniSim Optimiser Control Start.rsa" which is a "RESOLVE
archive file" that contains the RESOLVE file, UniSim file, GAP file and other associated files
required to go through the example. The archive file needs to be extracted either in the current
location or a location of the user's choice.
Go to Step 1
3.9.3.1.2 Step 1: Open the file
Open the RESOLVE model called 'GAP-UniSim.rsl' which was extracted from the example
archive. This contains the model built in Example 7.2.1:
- GAP and UniSim are coupled in RESOLVE
- The SLP optimiser is set up in RESOLVE to optimise the separator pressure
From the Optimisation | Setup menu, the optimisation problem is set up as follows:
- Objective function: GAP oil rate
- Control: GAP separator pressure
- Constraint: UniSim 'Sales' pressure > 1300 psig.
Optimisation is currently set up to run at every time step; in this example we wish to trigger optimisation only if a given criterion is met (Psales > 1320 psig OR Psales < 1280 psig). In order to do this, logic will be applied in pre-solve and post-solve workflows as follows:
- Pre-solve: at the beginning of every time step, disable optimisation and solve the coupled system
- Post-solve: examine the solve results:
- if the Sales pressure is outside the tolerance, enable optimisation and re-take the time
step
- if the Sales pressure is within the tolerance, move on to the next time step.
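The pre-/post-solve logic above can be sketched as follows. This is a simplified Python illustration under an assumed solve interface, not RESOLVE code: in RESOLVE itself the same logic is expressed with Assignment and If-Then workflow elements acting on the RunOptimisers variable.

```python
# Sketch of the trigger logic, assuming a simplified solve interface:
# solve(run_optimisers) solves the coupled system and returns the
# Sales delivery pressure in psig.

P_TARGET = 1300.0   # psig, required delivery pressure
TOL = 20.0          # psi, allowed deviation before re-optimising

def take_time_step(solve):
    """Re-solve with optimisation only when the delivery pressure
    drifts more than TOL away from the target."""
    # Pre-solve: disable optimisation and solve the coupled system
    p_sales = solve(run_optimisers=False)
    # Post-solve: re-take the step with optimisation if out of tolerance
    if abs(p_sales - P_TARGET) > TOL:
        p_sales = solve(run_optimisers=True)
    return p_sales
```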
Go to Step 2.
3.9.3.1.3 Step 2: Build the workflows
The objective of this step is to build the pre-solve and post-solve workflows as described in
Step 1.
1. Pre-solve workflow
Create the following workflow using an Assignment element. Link the elements using the
icon.
Using the RunOptimisers variable, it is also possible to activate/de-activate certain controls and
constraints, as well as have access to the optimisation results themselves. For more information
on the RunOptimisers variable, please refer to the Dynamic Optimisation Setup section of this
manual.
2. Post-solve workflow
Create the following workflow using two Assignment elements and one If-Then element.
Note: to swap the 'Yes' and 'No' arrows, click the 'Decision' button within the 'If-then' element.
Go to Step 3.
3.9.3.1.4 Step 3: Run the forecast
Run the forecast using the icon . From the 'Calculation' tab it is possible to observe the time
steps on which optimisation is being performed. For instance, optimisation is performed on the
first time step but not on the second.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon. These results can also be displayed as the run is proceeding.
The variables available to plot are those that have been imported to RESOLVE in Example
7.2.1.
The following separator pressure profile is obtained in green and the results of Example 7.2.1
are shown in red. We obtain the same trend in separator pressure, and optimisation is only
performed on those time steps where the separator pressure changes. This also reduces the calculation time, as only 7 optimisations are required during the forecast, compared to 37 previously.
The following plot shows the obtained delivery pressure in green, along with the results of
Example 7.2.1 in red. It can be observed that the delivery pressure always remains within 20 psi
of 1300 psig.
Finally, the impact of this on the produced oil rate is negligible as shown below.
The objective of this section is to demonstrate how to control and trigger the optimisers of
RESOLVE dynamically, i.e. based on the conditions in the field, from a workflow.
This example builds on Example 7.2.2 and it is recommended that the user follows this example
first. In that example, RESOLVE was used to perform optimisation of a coupled GAP model of
an oil field and a Hysys model of the compression train for compressing the associated gas.
The GAP model contains 9 producing oil wells and is shown below.
In this field, the main requirement is that the associated gas must be compressed to 1300 psig
in order to join an existing export line. In Example 7.2.2, the optimisation opportunity was
discussed: for every separator pressure the field has a given potential and the compressor train
has a maximum amount of gas that it can process.
Taking both potentials into account, it becomes apparent that there exists a separator pressure
which maximises the produced oil rate.
It was also discussed how the optimum separator pressure changes with time, as the
production conditions change. This is illustrated below: when the overall potential of the field
declines, the optimum separator pressure changes.
As a result, in Example 7.2.2 optimisation was performed at every time step in order to
maximise production. This resulted in the separator pressure profile shown in green below.
For practical reasons however, we may not want to change the separator pressure in the field
every month, and we wish to generate a forecast that reflects this. The main requirement in this
field is that the compressor discharge pressure be above 1300 psig. We wish to trigger an optimisation, and a change of separator pressure, only if the delivery pressure deviates from this value by more than 20 psi.
Therefore the objective of this example is to set up the RESOLVE model such that a change of
separator pressure (i.e. performing optimisation) only happens if the delivery pressure is
greater than 1320 psig or lower than 1280 psig.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both Hysys and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_7-Global_Optimisation\Example_7_3_2-
GAP_Hysys_Optimiser_Control
This folder contains a file "GAP Hysys Optimiser Control Start.rsa" which is a "RESOLVE
archive file" that contains the RESOLVE file, Hysys file, GAP file and other associated files
required to go through the example. The archive file needs to be extracted either in the current location or a location of the user's choice.
Go to Step 1
3.9.3.2.2 Step 1: Open the file
Open the RESOLVE model called 'GAP-Hysys.rsl' which was extracted from the example
archive. This contains the model built in Example 7.2.2:
- GAP and Hysys are coupled in RESOLVE
- The SLP optimiser is set up in RESOLVE to optimise the separator pressure
From the Optimisation | Setup menu, the optimisation problem is set up as follows:
- Objective function: GAP oil rate
- Control: GAP separator pressure
- Constraint: Hysys 'Sales' pressure > 1300 psig.
Optimisation is currently set up to run at every time step; in this example we wish to trigger optimisation only if a given criterion is met (Psales > 1320 psig OR Psales < 1280 psig). In order to do this, logic will be applied in pre-solve and post-solve workflows as follows:
- Pre-solve: at the beginning of every time step, disable optimisation and solve the coupled system
- Post-solve: examine the solve results:
- if the Sales pressure is outside the tolerance, enable optimisation and re-take the time
step
- if the Sales pressure is within the tolerance, move on to the next time step.
Go to Step 2.
3.9.3.2.3 Step 2: Build the workflows
The objective of this step is to build the pre-solve and post-solve workflows as described in
Step 1.
1. Pre-solve workflow
Create the following workflow using an Assignment element. Link the elements using the
icon.
Using the RunOptimisers variable, it is also possible to activate/de-activate certain controls and
constraints, as well as have access to the optimisation results themselves. For more information
on the RunOptimisers variable, please refer to the Dynamic Optimisation Setup section of this
manual.
2. Post-solve workflow
Create the following workflow using two Assignment elements and one If-Then element.
Note: to swap the 'Yes' and 'No' arrows, click the 'Decision' button within the 'If-then' element.
Go to Step 3.
3.9.3.2.4 Step 3: Run the forecast
Run the forecast using the icon . From the 'Calculation' tab it is possible to observe the time
steps on which optimisation is being performed. For instance, optimisation is performed on the
first time step but not on the second.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon. These results can also be displayed as the run is proceeding.
The variables available to plot are those that have been imported to RESOLVE in Example
7.2.2.
The following separator pressure profile is obtained in green and the results of Example 7.2.2
are shown in red. We obtain the same trend in separator pressure, and optimisation is only
performed on those time steps where the separator pressure changes. This also reduces the calculation time, as only 7 optimisations are required during the forecast, compared to 37 previously.
The following plot shows the obtained delivery pressure in green, along with the results of
Example 7.2.2 in red. It can be observed that the delivery pressure always remains within 20 psi
of 1300 psig.
Finally, the impact of this on the produced oil rate is negligible as shown below.
The objective of this section is to demonstrate how to control and trigger the optimisers of
RESOLVE dynamically, i.e. based on the conditions in the field, from a workflow.
This example builds on Example 7.2.3 and it is recommended that the user follows this example
first. In that example, RESOLVE was used to perform optimisation of a coupled GAP model of
an oil field and a ProII model of the compression train for compressing the associated gas.
The GAP model contains 9 producing oil wells and is shown below.
In this field, the main requirement is that the associated gas must be compressed to 1300 psig
in order to join an existing export line. In Example 7.2.3, the optimisation opportunity was
discussed: for every separator pressure the field has a given potential and the compressor train
has a maximum amount of gas that it can process.
Taking both potentials into account, it becomes apparent that there exists a separator pressure
which maximises the produced oil rate.
It was also discussed how the optimum separator pressure changes with time, as the
production conditions change. This is illustrated below: when the overall potential of the field
declines, the optimum separator pressure changes.
As a result, in Example 7.2.3 optimisation was performed at every time step in order to
maximise production. This resulted in the separator pressure profile shown in green below.
For practical reasons however, we may not want to change the separator pressure in the field
every month, and we wish to generate a forecast that reflects this. The main requirement in this
field is that the compressor discharge pressure be above 1300 psig. We wish to trigger an optimisation, and a change of separator pressure, only if the delivery pressure deviates from this value by more than 20 psi.
Therefore the objective of this example is to set up the RESOLVE model such that a change of
separator pressure (i.e. performing optimisation) only happens if the delivery pressure is
greater than 1320 psig or lower than 1280 psig.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers
for both ProII and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_7-Global_Optimisation\Example_7_3_3-
GAP_ProII_Optimiser_Control
This folder contains a file "GAP ProII Optimiser Control Start.rsa" which is a "RESOLVE archive
file" that contains the RESOLVE file, ProII file, GAP file and other associated files required to go
through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
3.9.3.3.2 Step 1: Open the file
Open the RESOLVE model called 'GAP-ProII.rsl' which was extracted from the example
archive. This contains the model built in Example 7.2.3:
- GAP and ProII are coupled in RESOLVE
- The SLP optimiser is set up in RESOLVE to optimise the separator pressure
From the Optimisation | Setup menu, the optimisation problem is set up as follows:
- Objective function: GAP oil rate
- Control: GAP separator pressure
- Constraint: ProII 'Sales' pressure > 1300 psig.
Optimisation is currently set up to run at every time step; in this example we wish to trigger optimisation only if a given criterion is met (Psales > 1320 psig OR Psales < 1280 psig). In order to do this, logic will be applied in pre-solve and post-solve workflows as follows:
- Pre-solve: at the beginning of every time step, disable optimisation and solve the coupled system
- Post-solve: examine the solve results:
- if the Sales pressure is outside the tolerance, enable optimisation and re-take the time
step
- if the Sales pressure is within the tolerance, move on to the next time step.
Go to Step 2.
3.9.3.3.3 Step 2: Build the workflows
1. Pre-solve workflow
Create the following workflow using an Assignment element. Link the elements using the
icon.
Using the RunOptimisers variable, it is also possible to activate/de-activate certain controls and
constraints, as well as have access to the optimisation results themselves. For more information
on the RunOptimisers variable, please refer to the Dynamic Optimisation Setup section of this
manual.
2. Post-solve workflow
Create the following workflow using two Assignment elements and one If-Then element.
Note: to swap the 'Yes' and 'No' arrows, click the 'Decision' button within the 'If-then' element.
Go to Step 3.
3.9.3.3.4 Step 3: Run the forecast
Run the forecast using the icon . From the 'Calculation' tab it is possible to observe the time
steps on which optimisation is being performed.
The RESOLVE results can be analysed by invoking Results | View Forecast Plots, or by
clicking on the icon. These results can also be displayed as the run is proceeding.
The variables available to plot are those that have been imported to RESOLVE in Example
7.2.3.
The following separator pressure profile is obtained in green and the results of Example 7.2.3
are shown in red. We obtain the same trend in separator pressure, and optimisation is only
performed on those time steps where the separator pressure changes. This also reduces the calculation time, as only 8 optimisations are required during the forecast, compared to 37 previously.
The following plot shows the obtained delivery pressure in green, along with the results of
Example 7.2.3 in red. It can be observed that the delivery pressure always remains within 20 psi
of 1300 psig.
Finally, the impact of this on the produced oil rate is negligible as shown below.
The section below illustrates how this optimisation algorithm can be set up and used.
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_7-Global_Optimisation\Example_7_4_1-Well_Routing_Problem
Step 1: Enable the Global Optimisation in RESOLVE (for Single Solve Calculations).
In this case we want to maximise Gas Production. This will be selected from the GAP Model,
as shown below.
This can be set up manually or automatically by using the routing wizard. Both methods are
described below.
Now we need to make the association between the State Variables and the Control Variables: each well needs to be associated with its corresponding pipelines.
Now we define the states of each Control Variable and assign the values that the State Variables must take for each of these states.
Each Control variable will have three possible States: ‘To HP’, ‘To MP’ and ‘To LP’
Each of these control variables is associated with three State Variables, which correspond to the status (mask) of the pipelines connecting the wells to the corresponding manifolds.
The states of a Control Variable are hence defined by assigning different values to the associated State Variables. For example, to define the Well 1 state “To HP” we assign a value of 1 (mask) to the pipelines connecting W1 to manifolds LP and MP, and a value of 0 (unmask) to the pipeline connecting W1 to manifold HP.
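The mask assignment described above can be sketched as a small lookup. This is an illustration only — the manifold names follow the example, but the helper itself is hypothetical: routing a well to one manifold unmasks (0) that pipeline and masks (1) the other two.

```python
# Illustrative state-to-mask mapping for the well routing states above.
MANIFOLDS = ("HP", "MP", "LP")

def masks_for_state(state: str) -> dict:
    """Return pipeline mask values (1 = masked, 0 = unmasked) for a
    well routing state such as 'To HP'."""
    chosen = state.removeprefix("To ").strip()
    return {m: (0 if m == chosen else 1) for m in MANIFOLDS}

# Routing Well 1 to HP unmasks the HP pipeline and masks MP and LP:
masks_for_state("To HP")   # {'HP': 0, 'MP': 1, 'LP': 1}
```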
When dealing with Routing Problems, RESOLVE can populate the previous screens
automatically based on the GAP model topography.
RESOLVE will create a list of all potential routings based on multiple pipelines coming
out of the same joint.
We can then select which ones we want to include in our problem setup and click on
‘Generate Controls’.
If we want to select all the potential routings, we can click ‘Select All pipes’, as shown
below.
The existing constraints are not part of the Global Optimisation setup, as they should be defined in the
underlying GAP model (which will be solved and optimised for each routing case to be
evaluated).
Once the Objective Function and Integer Controls have been set up, we can go to
\Optimisation\Summary to define the global settings.
Once GIRO has been selected, we can access the Optimiser Settings by clicking on
Parameters:
Initial Population: This is the main parameter controlling how many
evaluations (and hence how much time) we are willing to spend solving the problem. The higher the
initial population, the better the quality of the result (although at the expense of a larger
number of evaluations).
Update model if improvement better than: This is the criterion used to change
the model setup. This parameter becomes important when GIRO is used during a forecast,
as it controls when the model setup will be changed (as per the best solution found).
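The role of this parameter can be sketched as a simple gate (the helper below is a hypothetical illustration, not a RESOLVE API call): the model setup is only changed when the best solution found improves on the current one by more than the threshold.

```python
def should_update_model(best_found, current, improvement_threshold=0.01):
    """Change the model setup only if the best solution found improves on
    the current setup by more than the threshold (1% by default)."""
    return best_found > current * (1.0 + improvement_threshold)
```

For example, with the default 1% threshold an objective of 101.5 against a current value of 100 triggers an update, while 100.5 does not.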
Another important feature (optional) for GIRO can be found in the main summary screen, as
shown below.
This validation flag criterion can be used to instruct GIRO to ignore certain solutions.
The main objective of this flag is to prevent GIRO from using ‘bad solutions’ coming from the
underlying applications (e.g. solutions violating constraints). When dealing with complex models, there
is always the chance that a certain combination (e.g. routing) causes problems and hence
the model does not converge to a valid solution. Unless the model is robust enough to cope
with all possible combinations without any problem, it is recommended to set up some
high-level criterion (e.g. total rate < Constraint * 1.02) to make sure that a potential ‘bad
solution’ does not derail the global optimisation.
This validation flag criterion can also be used to instruct GIRO to ignore solutions which do not
make practical sense (although they are still part of the potential combinations).
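A minimal sketch of such a high-level check (the function name is an assumption; the 2% slack comes from the example criterion in the text):

```python
def solution_is_valid(total_rate, rate_constraint, slack_factor=1.02):
    """Validation flag: reject solutions whose total rate exceeds the
    constraint by more than the allowed slack (2% here)."""
    return total_rate < rate_constraint * slack_factor
```

Any evaluation failing this check would simply be discarded by the optimiser rather than polluting the search.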
The main screen (Calculation Progress) will show the model evaluations.
The corresponding routing options (and results) can be inspected (even during the run) by
selecting the Optimisation Progress window (from the Windows menu).
We can see the value of the objective function for each model evaluation with the best one
highlighted in green. The final result can be seen from the Overall Results tab.
If the best solution is found to improve the result (compared to the original model setup)
beyond the user-specified tolerance then the model will be changed accordingly.
...\resolve\Section_7-Global_Optimisation\Example_7_4_2-Full_Field_Routing
The section below illustrates how this optimisation algorithm can be set up and used.
The following GAP model is considered: note that several pipeline routing
options are available within this model, at both the well manifold and the separator level.
In this case, the GIRO optimiser can be used to understand which one of the possible pipeline
routing combinations will be optimum at each prediction timestep.
Step 1: Create a new RESOLVE model and load the GAP instance.
It is important to note that when using the GIRO optimisation algorithm, the
RESOLVE SLP global optimisation algorithm will not be used, therefore any
constraints included in the RESOLVE model will be ignored.
However, the use of the GIRO optimiser can be combined with optimisation
algorithms working within each module, such as the GAP optimisation
procedure for instance.
If this is the case, the objective function, constraints and control variables defined in
the GAP surface network model will be respected while the GIRO optimisation is carried out.
To setup the optimisation problem, go to the Optimisation | Setup section in the main
RESOLVE menu.
To specify the objective function, select the "Edit" button in the "Objective Function"
section.
RESOLVE will retrieve the list of variables present in the GAP model (depending on
the size of the model, this might take some time) and the following screen will be
displayed.
In this case, the objective function is to maximise the oil produced at the FPSO: the
equipment selected is therefore the "FPSO" node and the variable selected is "Oil Rate".
The "Set" button can be used to automatically obtain the OpenServer variable
associated with this variable, as illustrated above.
As the value of this variable is to be maximised during the optimisation process, select
the "Maximise" option.
When using the GIRO optimisation algorithm, the "Minimise" option of the objective
function is ignored: if the objective of the optimisation problem is to minimise the value of
a certain variable, then the objective function has to be set up so that it maximises the inverse
of the variable considered.
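As a sketch of this workaround (assuming the variable is strictly positive, so its reciprocal is well defined): maximising 1/x is equivalent to minimising x, so the optimiser can be fed the reciprocal.

```python
def objective_value(x, minimise=False):
    """GIRO only maximises; to minimise a strictly positive variable,
    hand the optimiser its reciprocal instead."""
    return 1.0 / x if minimise else x
```

Picking the candidate with the largest reciprocal then selects the smallest original value, which is the intended minimisation.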
Once the objective function has been set up, it is necessary to set up the control
variables. The control variables used in a routing optimisation case are the statuses of the
different pipelines considered: these pipelines can either be open or closed.
To setup these controls, go back to the optimisation setup screen and select the "Edit"
The following screen will be displayed: at the top of the screen, select the "Integer
Controls" tab, as illustrated below.
The routing optimisation control variables can automatically be generated from the GAP
model by using the "Generate Routing Controls" option.
RESOLVE will scan the GAP model topography and will identify all the nodes with more
than one pipeline outlet.
When a GAP model is to be used for routing optimisation, it is important that all the nodes
and pipelines included in this GAP model are labeled.
Click on the "Read Topography" button to list the different pipeline routing options
present in the surface network model, as illustrated below.
Once the list has been populated, select which pipeline routing options have to be
considered. In this case, pipeline routing options for all the active wells will be selected,
as well as all the pipeline routing options for the separators.
This will account for 2048 possible pipeline routing combinations.
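As a quick sanity check on that count (the eleven-way split below is an assumption used only for illustration): 2048 = 2^11, consistent with eleven independent two-way routing choices, since the total number of combinations is the product of the options at each node.

```python
from math import prod

# Hypothetical breakdown: eleven routing nodes, each with two outlet options.
# The actual split between wells and separators depends on the GAP model.
options_per_node = [2] * 11
total_combinations = prod(options_per_node)
print(total_combinations)  # 2048
```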
Once the pipeline routing options considered have been selected, click on the
"Generate Controls" option.
This will automatically generate the different control parameters:
The controls correspond to the nodes of the surface network model that have more than
one possible outlet pipeline, and the control variables are the statuses of these outlet
pipelines: either open or closed.
Once the control variables have been set up, go back to the main RESOLVE screen, as
the optimisation problem has now been set up.
By default, RESOLVE will use the "Petroleum Experts SLP" optimisation algorithm. For
routing optimisation purposes, it will be necessary to select the "GIRO" optimiser, as
described in the snapshot above.
The "Parameters" section gives access to the setup of the GIRO optimisation
algorithm.
The default values of these parameters are suitable for most cases.
Should you require any guidance when using the GIRO optimiser and these setup
parameters, please contact Petroleum Experts technical support via email:
edinburgh@petex.com.
The optimisation progress screen enables the user to follow the optimisation
calculations that are performed.
The control section of the optimisation progress screen will illustrate the different pipeline
routing combinations that are evaluated.
The function result section of the optimisation progress screen will illustrate the oil rate
obtained for each of these evaluations.
Once the optimisation has converged, the best routing option will be selected and reported
in the overall results section, as illustrated below.
The GAP model will also be modified to represent this pipeline routing option.
If a prediction case is run, the GAP model setup will be changed if an improvement
of more than 1% (the default value, which can be modified via the
"Improvement Tolerance" parameter in the GIRO optimiser settings) of the objective
function is observed when performing the optimisation.
This is the final configuration achieved:
The main objective of this wizard is to determine the optimum setup of the GIRO
optimiser.
This wizard is accessible through the Wizard | GIRO optimiser performance section of
the main RESOLVE menu.
In the example below, two cases are compared: they have the same GIRO optimiser
parameters, but have different initial populations: 8 and 20.
Once both options are run, the following results are observed:
The plot illustrates the oil production obtained at the FPSO for the different cases
evaluated vs. the number of evaluations required to obtain these results. The red points
correspond to the case run with an initial population of 8, whereas the green points
correspond to the case run with an initial population of 20.
Note that even though in both cases some evaluations reach the maximum
production at the FPSO, the case with the larger initial population is more consistent: most
of its evaluations are close to this maximum. However, the number of evaluations
required to obtain this result is higher, leading to a longer running time.
The "Statistics" button at the bottom right hand corner can be used to quantify the results of
the optimiser:
For an initial population of 8, only 25% of the cases evaluated will be within 99% of the
maximum oil produced at the FPSO. 88% of these cases will however be within 95% of
the maximum oil produced at the FPSO.
For an initial population of 20, more than 50% of the cases evaluated will be within 99%
of the maximum oil produced at the FPSO. 98.3% of these cases will however be within
95% of the maximum oil produced at the FPSO.
This GIRO optimiser performance wizard can therefore be used to estimate, before a
forecast run for instance, the optimum way of setting up the GIRO optimiser.
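The percentages quoted by the Statistics screen can be reproduced from raw evaluation results with a small helper (a sketch; RESOLVE computes this internally from the objective values of each evaluation):

```python
def share_within(values, fraction_of_max):
    """Fraction of evaluations whose objective is within a given fraction
    of the best value found (e.g. 0.99 means 'within 99% of the maximum')."""
    best = max(values)
    return sum(1 for v in values if v >= best * fraction_of_max) / len(values)
```

For instance, `share_within(results, 0.99)` would give the share of cases within 99% of the maximum oil produced.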
This example illustrated how GIRO can be used for optimisation problems which are not
necessarily about routing. GIRO can be used for any optimisation problem which involves (or is
formulated using) discrete variables. These discrete variables could be equipment on/off, pump
and compressor bypass/unbypass or operational speed.
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_7-Global_Optimisation\Example_7_4_3-Compressor_Prediction
In this example a gas well produces from a reservoir. Compression is also foreseen in the
future.
The objective is to determine if and when to switch on compression, so that
the system maximises the instantaneous revenue at all times.
Step 1: Options.
In the main RESOLVE Options (menu Options/System Options) select the mode Full Forecast
with Global Optimisation:
The overall compressor operational cost and the gas sales price are considered to be fixed at
65000 $/day and 4000 $/MMscf respectively.
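With these figures, the instantaneous-revenue trade-off can be sketched as follows (the function and units are illustrative assumptions; in the example the actual calculation is carried out in Excel via the bypass flag):

```python
GAS_PRICE = 4000.0         # $/MMscf, fixed gas sales price
COMPRESSOR_COST = 65000.0  # $/day, incurred only while compression is on

def instantaneous_revenue(gas_rate_mmscf_d, compressor_on):
    """Daily revenue: gas sales minus the compressor operating cost."""
    cost = COMPRESSOR_COST if compressor_on else 0.0
    return gas_rate_mmscf_d * GAS_PRICE - cost
```

Switching compression on is only worthwhile when the extra gas it delivers is worth more than 65000 $/day, i.e. more than 16.25 MMscf/day at this price.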
This is to pass the bypass flag to Excel, which will be used within Excel to determine how to
perform the revenue calculation.
1. Example Introduction
This example illustrates how an OpenServer macro can be used to automatically populate the
RESOLVE event driven scheduling capabilities described in Example 2.3.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
GAP is registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_8-OpenServer_Examples\Example_8_1-Drilling
This folder contains a file "Drilling.rsa" which is a "RESOLVE archive file" that contains the
RESOLVE file, Excel file, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
Go to Step 1
Step 1 Objective:
Extract the RESOLVE model
Prior to running the model, the archive needs to be extracted. The following procedure
describes how to do so.
Browse to the folder required and select the particular *.rsa file to be extracted and click on
Open. The "Extract archive" screen will be displayed. Click on Extract Archive. Browse to a
new directory where the user would like to store the example files and click OK | OK. RESOLVE
will extract all the files required for this exercise to the folder the user has specified.
When the message box "Open Master File" is displayed, click YES if the objective is to open
the RESOLVE file extracted. Click NO if some other file is to be extracted. For the example
drilling.rsl, we shall click on YES.
Go to Step 2
3.10.1.3 Drilling : Step 2
Step 2 Objective:
Open the RESOLVE model
When YES is clicked on the previous screen, RESOLVE will open the file drilling.rsl and a new
message box will appear.
Select YES and specify the location of the GAP model, as specified below.
Make sure that the file name is located in the folder where the files were extracted. Click on OK.
RESOLVE will open the GAP model. The GAP model will start in a new window and the
RESOLVE screen will look like this,
Go to Step 1 or Step 3
Step 3 Objective:
Open the Excel file containing the OpenServer macro required
The associated macro for this file is located in the file Drilling.xls. The location of this file is in the
same folder where the RESOLVE archive files were extracted.
Open the Excel file and make sure that Excel is set up so that it allows macros to run.
Go to Step 2 or Step 4
3.10.1.5 Drilling : Step 4
Step 4 Objective:
When both the RESOLVE model and the Excel file are open, the macro can be executed by
clicking on the Run Forecast button in Excel.
Go to Step 3
1. Example Introduction
Two process simulation models are connected together: one main plant model and one single
compressor model.
This example shows how the OpenServer can be used to add generic "Variable connections"
between the two modules.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
Hysys is registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Example_Section_8\Example_8_2-Variable_Connection
This folder contains a file "Variable_Connection.rsa" which is a "RESOLVE archive file" that
contains the RESOLVE file, Excel file, GAP file and other associated files required to go
through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
As mentioned in Example 7.1 above, it is required that the RESOLVE Archive file
"Variable_Connection.rsa" be extracted to a suitable directory.
After the stated file has been extracted open the RESOLVE file "Variable_Connection.rsl" and
the Excel File "Variable_Connection.xls."
Go to Step 2
Step 2 Objective:
Run the RESOLVE model
Please note that the RESOLVE model has to be loaded prior to the macro being run.
This example shows how the OpenServer can be used to populate the "Composition Mapping"
screen when connecting a GAP surface network model and a process simulation tool in UniSim
Design.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers for
UniSim Design and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Example_Section_8\Example_8_3-Compositional_Mapping
This folder contains a file "Compositional_Mapping.rsa" which is a "RESOLVE archive file" that
contains the RESOLVE file, Excel file, GAP file and other associated files required to go
through the example. The archive file needs to be extracted either in the current location or a
location of the user's choice.
Go to Step 1
As mentioned in Example 7.1 above, it is required that the RESOLVE Archive file
"Compositional_Mapping.rsa" be extracted to a suitable directory.
After the stated file has been extracted open the RESOLVE file "Compositional_Mapping.rsl"
and the Excel File "Compositional_Mapping.xls."
Go to Step 2
3.10.3.3 Compositional Mapping : Step 2
Step 2 Objective:
Run the RESOLVE model
This example is designed to read and map the components entered in GAP and in UniSim
Design.
The composition of the fluid is entered in GAP and in UniSim Design; however, the titles given to
each component are different. It is therefore required that the component names be mapped when
running the model.
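Conceptually, the macro performs a name mapping like the following (the component names and the helper are illustrative assumptions, not the actual macro code; the real lists come from the two models):

```python
# Hypothetical mapping between GAP component labels and UniSim Design names.
GAP_TO_UNISIM = {"C1": "Methane", "C2": "Ethane", "C3": "Propane"}

def map_composition(gap_composition):
    """Re-key a GAP composition (mole %) with the UniSim Design names."""
    return {GAP_TO_UNISIM[comp]: frac for comp, frac in gap_composition.items()}
```

Without such a mapping, the components entered in one tool cannot be matched to the other, which is exactly the failure mode the first macro demonstrates.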
This example is set up to show three conditions.
The Excel file has these three macros for the execution.
Run model
In this case, the model will be run without mapping the components. As expected, this will
not provide the correct solution, as UniSim Design has zero mole % entered as the initial
values. This will generate errors in the RESOLVE calculation window.
This model illustrates how to link a GAP surface network model to two types of reservoir model:
a REVEAL model and a MBAL model. It also illustrates how the schedule elements
implemented in GAP are respected by a RESOLVE model.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers for
REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_9-Additional_Examples\Example_9_1-Mixed_reservoir_models
This folder contains a file "Mixed_reservoir_models.rsa" which is a "RESOLVE archive file" that
contains the RESOLVE file, GAP file and other associated files required to go through the
example. The archive file needs to be extracted either in the current location or a location of the
user's choice.
This model illustrates how to link a production network to a process to a re-injection network,
and how the compositions are handled from one model to another.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers for
Hysys, UniSim Design and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_9-Additional_Examples\Example_9_2-Gas_ReInjection
This model illustrates how the voidage replacement utility can be used to design a voidage
replacement scheme when connecting a REVEAL numerical reservoir model and GAP
production and water injection surface network models.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE drivers for
REVEAL and GAP are registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the main menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_9-Additional_Examples\Example_9_3-Voidage_replacement
This folder contains a file "Voidage.rsa" which is a "RESOLVE archive file" that contains the
RESOLVE file, GAP file and other associated files required to go through the example. The
archive file needs to be extracted either in the current location or a location of the user's choice.
3.11.4.1 Overview
1. Example Introduction
An oil field comprises two reservoirs, and each reservoir has a naturally flowing well. At the
surface, the production from the wells is combined into a pipeline connected to the
separator, where a constant pressure is maintained. The oil delivery rate versus pressure profile will
be calculated in this exercise and can later be used to estimate the production potential of the field.
In this example, a step by step workflow to obtain the oil, water, gas and liquid production in the
Separator of the GAP model versus a user entered pressure profile will be built. The results of
calculations will be visualized with chart diagrams using FormBuilder functionality.
2. Licences required
Running this example will require the following licenses:
RESOLVE: 1
GAP: 1
Before starting with this example, it will be necessary to make sure that the RESOLVE driver
for GAP is registered.
This procedure is automatically performed by selecting Drivers | Auto-register latest drivers
from the menu as illustrated below.
Once this is done, RESOLVE will return a message confirming the number of drivers that have
been registered.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
Start RESOLVE, and open a new project using File | New or the icon and ensure that
the models are set to be reloaded when the forecast starts by going to the Options | System
Options section.
In the System properties section that appears on the screen, select “Single solve/optimisation only”
as the Forecast mode.
From the main menu, go to Edit System | Add Client program or select the icon on the
shortcut bar and from the resulting menu, select “GAP”. Place it on the canvas and give a name
to the label (for instance “GAP”).
The RESOLVE model will then be displayed as illustrated below:
Double click on the GAP icon and the following screen will appear. Set up the location of the Oil
Field.gap file, which should be extracted from the Oil Field.gar file located in the
Section_9-Additional_Examples\Example_9_4-FormBuilder\Initial folder.
3.11.4.5 Step 4-Place the Workflow client program and required data objects on the
RESOLVE canvas
Place the Workflow client program on the canvas from Edit system | Add client program. This
element will be used to build a Visual workflow. Most of the objects in the workflow will use
OpenServer to transfer the data from GAP into RESOLVE. Therefore, also place OpenServer
on the canvas from Edit System| Add data| OpenServer.
To visualise the results of calculations, the generated data must be stored in data sets which
need to be placed on the canvas from Edit System| Add data| DataSet.
Place two data sets on the canvas and give them the following names: “SepProd” and
“Pressures”.
The “SepProd” data set will be used to visualise oil, gas, water and liquid rates in the separator
for different separator pressures. Once the data set is placed on the canvas, access it to define
the names of the variables (i.e. Pressure, Liquid rate, Oil rate, Gas rate and Water rate) and their
units from the drop down menu of the Unit column.
Click "OK".
The “Pressures” data set contains a column with the pressure data used for the calculation of
liquid rates. Within the data set, define “Pressure” as the variable and enter Pressure in the Unit
(pre-defined) column, as shown below.
a. Enter pressure values into the Pressure data set via FormBuilder.
b. Calculate the production of the GAP model versus different separator pressures.
c. Visualise the results with FormBuilder.
Access the Operation block and click on Add global function. In the list of operation categories
appearing on the screen, select the Maths library functions.
In the operations list related to data sets, choose “clear data (leave headings)”. This
will clear a data set whilst keeping its column headings. To indicate the name of the
DataSet to which this operation will be applied, enter Pressures in the Value column.
Once the Operation block is configured, place FormBuilder on the canvas and connect it to the
Operation block.
Then, place the Sub-flowsheet block on the canvas and connect it to the FormBuilder. The OK
label will appear below the link between the blocks, which means that the run of the workflow will
continue towards the Sub-flowsheet when it is confirmed from the FormBuilder.
Extend the workflow so that there is a way to terminate its execution by placing the
Terminator on the canvas and connecting it to the FormBuilder. The Cancel label appears on
the link between the blocks, which indicates that the workflow will be terminated if the user
selects this option in the FormBuilder.
Please note, if the blocks are connected the opposite way (i.e. “OK” is linked to the Terminator),
then instead of progressing, the run will be cancelled.
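The branching described above can be summarised as follows (a sketch, not actual workflow code; the block names follow this example):

```python
def next_block(form_confirmed):
    """FormBuilder branching: confirming the form (OK) continues to the
    Sub-flowsheet, while Cancel routes the run to the Terminator block."""
    return "Sub-flowsheet" if form_confirmed else "Terminator"
```

Swapping the two connections would invert this logic, which is exactly the mistake the note warns about.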
In the next steps, the dialogue box for the pressure input will be created. This box will include a
few images indicating the name of the example and the label of the company this task is
designed for, as well as a table for the pressure input.
Access the FormBuilder block and within the block click Designer.
To add an image on the canvas of the Form Designer, click the right mouse button and select
“Image”. This will create a rectangular frame on the canvas.
Click on the frame element; in the top right corner of the screen you will find a dialogue box
related to this image. Select DisplayImage and click on the square button within this
section.
The dialog box will appear on the screen where you can select an image which will be placed
on the Form designer screen. In this case, we have selected the logo of Petroleum Experts.
Next, to show the task for the example, place a Label next to the Image and name it “Form
builder demonstration”, as shown below.
Right-click on the canvas and, in the menu that appears, navigate to Add
and select Grid.
Complete the Designer screen with two buttons which will be placed below the grid.
Label the buttons “Calculate” and “Finish”. Define True as the value in the IsOK section for
“Calculate” and False for “Finish”. This will connect “Calculate” with the Sub-flowsheet of
the workflow, and “Finish” with the Terminate block.
Please note, if you want to change the id name of an element in the Form designer screen, this
can be done under the Identification section.
Click on the Save and Close button to finish the design.
On the main screen of FormBuilder, enter the Pressures data set in the Input and Output
sections of the Grid (i.e. grid1). Enter FormReturn (a variable with double precision) as an
Assign return value.
Click Ok. Rename the Terminate block in the Workflow screen to “End”.
b. Calculate the production of the GAP model versus different separator pressures.
This step will allow the calculation of the separator production for each pressure entered in the
form builder dialogue box. This is done by looping through the set of pressure values using the
Loop block and solving the network at each step.
To commence, within Visual Workflow rename the Sub Flowsheet as “Calculate” using the
Change label option.
Before any calculation starts, it is necessary to clean up the SepProd data set which will be
used for the output from calculation. For that, place an operation block on the canvas and
connect it to the Start block.
Within the Operation block, click on "Add global function" which will open the Create/Edit
operation screen. Select "Clear the data (leave the column headings) from the data set"
operation from the Math library function category and define the SepProd dataset to which the
operation will be applied. This operation will clear out the columns in the SepProd data set
leaving the headings unchanged.
Next, place a Loop block on the canvas and connect it to the Operation block.
The Loop block will be used to pick up indexes of the pressure values entered in the Pressures
data set. This block has a variable to increment which can be defined as i. This variable has to
be created via the Add variables used in the workflow option accessed from the main Workflow
screen.
Define the variable type as integer, the starting value as equal to zero and set initialization to be
at the start of the run.
Then provide the loop details to the block so that the Starting value is zero, whereas the End
value is the total number of rows in the Pressures data set minus one, as counting starts from
zero: Pressures.Column[0].DataCount-1.
Create an assignment element called “Set Sep pressure” which will assign each pressure from
the “Pressure” dataset to the separator pressure in the GAP model.
To access the separator pressure in GAP, use the following string: OpenServer.GAP[0].MOD
[0].SEP[0].SOLVERPRES[0]. Assign a pressure value from the Pressures data set to the
separator pressure using the string Pressures.Column[0].Value[i]. In this string, the data will be
taken from the first column (i.e. 0) of the Pressures data set, whereas the row number (i.e. i) will
be changing at each looping step to ensure that calculations are done for a new pressure.
Add an Operation block in the workflow which will be solving the GAP network at each loop
step. Change its label to "Solve network".
Within the "Solve network" operation block, click on "Add function on variable".
In the window that appears, enter SOLVENETWORK as the function to call, using the string
OpenServer.GAP[0].SOLVENETWORK. No optimisation will be used to solve the network,
therefore set the solvemode parameter to GAPSolveMode.SolverOnly. Since the production
system will be solved, set the system parameter to GAPSystem.Production.
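The Calculate sub-flowsheet described above (clear the results set, loop i = 0 to DataCount-1, set the separator pressure, solve the network) can be sketched in Python. This is an illustrative stand-in only: the OpenServerStub class and its linear rate model are hypothetical substitutes for the real RESOLVE OpenServer connection and the GAP solver.

```python
# Illustrative sketch only: OpenServerStub stands in for the real RESOLVE
# OpenServer connection, which is not available outside the IPM suite.
class OpenServerStub:
    """Records DoSet/DoCmd-style calls and fakes the GAP solve."""

    def __init__(self, pressures):
        self.pressures = pressures      # stand-in for Pressures.Column[0]
        self.sep_pressure = None
        self.results = []               # stand-in for the SepProd data set

    def do_set(self, tag, value):
        # e.g. tag = "GAP[0].MOD[0].SEP[0].SOLVERPRES[0]"
        if tag.endswith("SOLVERPRES[0]"):
            self.sep_pressure = value

    def do_cmd(self, tag):
        # Fake SOLVENETWORK: pretend oil rate falls linearly with pressure.
        if "SOLVENETWORK" in tag:
            oil_rate = max(0.0, 1000.0 - 2.0 * self.sep_pressure)
            self.results.append((self.sep_pressure, oil_rate))


def run_calculate_loop(server):
    """Mirror of the Calculate sub-flowsheet: clear, then loop i = 0..N-1."""
    server.results.clear()                          # ClearData operation
    n_rows = len(server.pressures)                  # Pressures.Column[0].DataCount
    for i in range(n_rows):                         # Loop block: 0 .. N-1 inclusive
        server.do_set("GAP[0].MOD[0].SEP[0].SOLVERPRES[0]",
                      server.pressures[i])          # "Set Sep pressure" assignment
        server.do_cmd("GAP[0].SOLVENETWORK")        # "Solve network" operation
    return server.results
```

Each pass through the loop appends one (pressure, rate) pair, exactly as each loop step in the workflow appends a row to the SepProd data set.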
To report the simulation results at each loop step, create an Assignment block and include it in
the existing workflow. Label it "SepResults", as shown below.
Once all the calculated results have been collected in the datasets, they can be visualised with
a FormBuilder block which will follow after the Calculate flowsheet.
c. Visualise the results with FormBuilder.
Place the FormBuilder block within the main workflow and connect it to the Calculate
flowsheet.
Access FormBuilder and click on the Designer button. Within the Form Designer screen, place
the same images as were used in the previous FormBuilder block at the beginning of the
Workflow following the same procedure.
Next, place a label at the center of the design screen and name it: “Modelling results in GAP”.
Place a Tab element on the screen (Right mouse click| Add| Tab).
Right click on the Tab element and on the window that appears select “Setup Tab”.
Add three new tab labels: "Oil", "Liquid", and "Gas" (in the picture below the "Tab" label has
been removed).
Add a Line/scatter chart to each tab, which will indicate the production rates in the separator for
each phase.
Link production data from the SepProd data set with the scatter chart by navigating to YValues
under the Data section of the chart.
This will open the ChartSeries Collection Editor window. On the left hand side of the window
create a label corresponding to the name of the tab (i.e. “Oil”) by clicking Add. In the properties
section on the right hand side, enter the number of the column in the SepProd data set which
reflects the desired axis. For example, 0 for XColumn will plot the user entered pressure
(along the X axis) and 2 for YColumn will plot the calculated oil production in the separator
(along the Y axis).
Add scatter plots to the Liquid and Gas tabs as well and link the corresponding columns of the
SepProd data set to the axes on each graph.
After this, place two buttons on the canvas and name them “Recalculate” and “Finish”.
In the “IsOK” section, define the value as “True” for the “Recalculate” button and as “False” for
the “Finish” button. If “Recalculate” is clicked when the workflow is run, the user can continue
the calculations for different pressures, whereas “Finish” will terminate the run.
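The Recalculate/Finish pattern is simply a loop that repeats the calculation while the form returns IsOK = True. A minimal Python sketch of this control flow, with hypothetical callables standing in for the Calculate flowsheet and the FormBuilder dialogue:

```python
def run_workflow(show_form, calculate):
    """Hypothetical driver: 'calculate' stands in for the Calculate flowsheet,
    'show_form' for the FormBuilder dialogue and returns its IsOK flag
    (True = Recalculate pressed, False = Finish pressed)."""
    runs = 0
    while True:
        calculate()            # solve the network for the current pressures
        runs += 1
        if not show_form():    # Finish pressed -> IsOK is False -> stop
            break
    return runs
```

Pressing “Recalculate” twice and then “Finish” would execute the calculation three times before the run terminates.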
Click on the “Save and Close” button to finish the design.
To complete this FormBuilder block, enter the SepProd data set as the input for the
corresponding charts:
Chart 1- SepProd data set;
Chart 2- SepProd data set;
Chart 3- SepProd data set.
Click on “Calculate” to proceed further with the simulation for different pressure values.
In order to run the calculation in GAP again for a different set of pressure values, click on
Recalculate; otherwise, the run can be completed by selecting Finish.
The aim of this example is to demonstrate the basic pre-requisite steps necessary for using
MOVE with RESOLVE.
In order for RESOLVE to communicate with MOVE, a link needs to be created between MOVE
and RESOLVE. To do this, the MOVE driver needs to be set up in RESOLVE. This is done by
declaring the location of the executable file for the MOVE software (MOVE.exe) as a RESOLVE
driver. Once the MOVE driver is created, a MOVE model will be added to the RESOLVE
canvas, ready for use in RESOLVE functions, such as a visual workflow.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_10-MOVE_Examples\Example_10_1_Connecting_to_MOVE
This folder contains a file "01_Sheep_Mountain.move" and the associated MOVE directory
folder "01_Sheep_Mountain.movd".
Go to Step 1
3.12.1.2 Step 1 - Define MOVE driver
Step 1 Objective:
Start a new RESOLVE project and create MOVE driver.
Start RESOLVE.
Through the file explorer, navigate to the folder on your computer containing the MOVE.exe
executable file (e.g. C:\Program Files\Petroleum Experts\Move2019\bin).
A connection between RESOLVE and MOVE has now been made. This setting is permanently
recorded by RESOLVE - every time RESOLVE is opened, the driver will be pre-defined. These
steps will only need to be repeated if a different version of MOVE is installed in a different folder.
Go to Step 2
Now that a connection between MOVE and RESOLVE has been made, MOVE can be called
upon and accessed from RESOLVE. To use MOVE from RESOLVE, an instance of the MOVE
application needs to be created. The MOVE instance can then be called upon and used in
Visual Workflows.
In the Enter an alias for the new module instance pop-up box that appears, keep the name
as default (i.e. "MOVE") and press OK.
The MOVE instance can now be referred to by other RESOLVE functions, such as a visual
workflow. However, the MOVE instance is currently blank. Next, we will assign a reference to a
MOVE project to the MOVE instance.
Go to Step 3
Now that an instance of MOVE has been created in RESOLVE, the MOVE instance needs to be
assigned a reference to a MOVE model.
Double-click on the MOVE instance in the RESOLVE window - a MOVE dialogue box will
appear.
Navigate to the location of the "01_Sheep_Mountain.move" file in the samples folder (...\resolve
\Section_10-MOVE_Examples\Example_10_1_Connecting_to_MOVE\01_Sheep_Mountain.move).
Press Open.
Press Apply.
RESOLVE will open the MOVE file you have selected to register the file and the file contents.
This might take a moment. If a dialogue box containing the message Unable to start
“...MOVE.exe” – check the configuration and that this file exists appears, the MOVE
driver has not been successfully defined (repeat Step 1). If other error messages appear (e.g.
MOVE did not start in a timely manner, so open has been aborted) ensure that you are
using an appropriate version of MOVE (version 2019.1 onwards) and that there are MOVE
licences available.
Close the MOVE window once the MOVE project has opened.
The contents of the MOVE project will now be listed in the MOVE dialogue box and a 3D
preview of the project will be shown.
Press OK.
There is now an instance of MOVE in the RESOLVE window that references a MOVE project.
The instance of MOVE can now be called and used by RESOLVE functions, such as visual
workflows.
Where there is uncertainty in geological data, interpreting geological features (e.g. faults and
horizons) can be problematic. Despite these difficulties, an interpretation of the geology must
be made in order for models (e.g. simulation grids) to be created. Several techniques have
been developed to aid geoscientists when making a geological interpretation - one of which is
a geometrical assessment of horizon length (line-length) in a geological interpretation.
In this example we will automate a simple technique that assumes that material is conserved
during deformation (i.e. mass conservation). If a series of geological units of equal length are
deformed, then in accordance with the assumption of mass conservation, the horizons should all
be the same length after deformation. This simple assumption can be used to help
geoscientists identify components of a geological interpretation that might be incorrect. If a
geoscientist makes an interpretation of geological data and subsequently measures the lengths
of all of the horizon lines, any significant differences in the length of the interpreted horizons
might indicate a component of the interpretation that is not physically possible - referred to as
invalid. The analysis will force the geoscientist to review the interpretation and ensure that the
interpretation honours the basic principle of mass conservation. The process of testing that an
interpretation honours basic physical laws and iteratively adjusting an interpretation to ensure it
is physically possible reduces uncertainty and increases confidence.
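The line-length test described above is easy to state in code. The following sketch (plain Python, not part of MOVE or RESOLVE) measures each interpreted horizon as a 2D polyline and flags any horizon whose length deviates from the mean by more than a chosen tolerance; the 5% tolerance value is an illustrative assumption:

```python
from math import hypot

def line_length(points):
    """Total length of a polyline given as (x, z) vertex pairs."""
    return sum(hypot(x2 - x1, z2 - z1)
               for (x1, z1), (x2, z2) in zip(points, points[1:]))

def flag_suspect_horizons(horizons, tolerance=0.05):
    """Return names of horizons whose line length deviates from the mean
    by more than `tolerance` (fractional) - a hint that the interpretation
    may violate line-length conservation."""
    lengths = {name: line_length(pts) for name, pts in horizons.items()}
    mean = sum(lengths.values()) / len(lengths)
    return sorted(name for name, length in lengths.items()
                  if abs(length - mean) / mean > tolerance)
```

A horizon that is noticeably longer or shorter than its neighbours after deformation would be returned by `flag_suspect_horizons` and should prompt a review of the interpretation.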
The aim of this example is to develop a visual workflow to automate the assessment of line-
length conservation for a 2D geological interpretation and transfer the analysis results from
MOVE into RESOLVE.
2. Licenses Required
Running this example will require the following licenses to be available to the user:
Before starting with this example, it will be necessary to make sure that the RESOLVE driver for
MOVE is registered. Note that this operation is not required if it has been done previously.
This procedure can be performed by completing Step 1 of Example 10.1: Connecting to MOVE.
3. Files Locations
The files required to complete this example are located in the samples installation folder.
...\resolve\Section_10-MOVE_Examples\Example_10_2_MOVE_2D_Line-
Length_Analysis
This folder contains a file "Fault-Related_Folding.move" and the associated MOVE directory
folder "Fault-Related_Folding.movd".
Go to Step 1.
3.12.2.2 Step 1 - Initialise model
Step 1 Objective:
Start a new RESOLVE project and create instances.
The workflow will require two Applications and two Data Objects to run. In this step RESOLVE
will be opened and instances of these four components will be created.
Start RESOLVE.
In the Enter an alias for the new module instance pop-up box that appears, keep the name
as default (i.e. "MOVE") and press OK.
One instance of a visual workflow (Edit System | Add Client Program | Workflow) called
"SectionAnalysis"
Go to Step 2.
Now that an instance of MOVE has been created in RESOLVE, the MOVE instance needs to be
assigned a reference to the appropriate MOVE model.
Double-click on the MOVE instance in the RESOLVE window - a MOVE dialogue box will
appear.
Press Open.
Press Apply.
RESOLVE will open the MOVE file you have selected to register the file and the file contents.
This might take a moment. If a dialogue box containing the message Unable to start
“...MOVE.exe” – check the configuration and that this file exists appears, the MOVE
driver has not been successfully defined (repeat Step 1). If other error messages appear (e.g.
MOVE did not start in a timely manner, so open has been aborted) ensure that you are
using an appropriate version of MOVE (version 2019.1 onwards) and that there are MOVE
licences available.
Close the MOVE window once the MOVE project has opened.
The contents of the MOVE project will now be listed in the MOVE dialogue box and a 3D
preview of the project will be shown.
Press OK.
The instance of MOVE now references the desired model (Fault-Related_Folding.move). It will
be this MOVE model that is used when the MOVE instance is called by a RESOLVE function.
Go to Step 3.
When the workflow is run it is important to ensure that any data stored from previous runs are
removed. This is so that the results of the current run are not contaminated with the results of a
previous run. To achieve this, we will clear all data from the FlexDataStore instance.
Double-click on the visual workflow instance in the RESOLVE window to open the Visual
Workflow Editor (i.e. SectionAnalysis).
Select the Display Palette icon to open the Workflow item Palette.
Create an operation item in the Visual Workflow window to the immediate right of the Start item
by left-clicking in the Visual Workflow window. By default this will be called Operation-1.
From the Select category of operation drop-down menu, select DataStore functions.
From the Select operation drop-down menu, expand the DataStore list and select Clear the
data and the columns from the DataStore.
From the drop-down list in the DataStore Value cell, select the AnalysisResults instance of
the FlexDataStore object.
We have now created an operation to clear any data from the FlexDataStore. One operation will
be listed in the Perform operations window.
Press OK to close the Perform operations window. There should now be two items in the
Visual Workflow window, a Start item and a ClearData operation box.
Go to Step 4.
After creating the ClearData operation to ensure that there are no data in the FlexDataStore
item following previous runs of the workflow, the items to perform the geometrical analysis can
now be created. The first of these will be an operation to open the MOVE project defined in the
MOVE instance.
Ensure the Workflow item Palette is displayed - if you cannot see the palette you might need
to press the Display Palette icon from the Visual Workflow Editor.
Create an operation item in the Visual Workflow window to the immediate right of the
ClearData item by left-clicking in the Visual Workflow window. By default this will be called
Operation-1.
From the Select category of operation drop-down menu, select MOVE structural geology.
From the Select operation drop-down menu, expand the list of MOVE operations and select
Open the current MOVE project.
From the drop-down list in the MOVE Value cell, select the MOVE instance of the MOVE object.
We have now created an operation to open the MOVE project. One operation will be listed in
the Perform operations window.
Press OK to close the Perform operations window. There should now be three items in the
Visual Workflow window, Start, ClearData, and OpenMOVE.
Go to Step 5.
3.12.2.6 Step 5 - Open Section Analysis
Step 5 Objective:
Create a visual workflow operation to open the Section Analysis tool from the 2D
Kinematic Modelling module.
In the previous step the OpenMOVE operation box was created to open a MOVE project. We
will now create an operation box that opens the tool in MOVE that will be used to perform the 2D
geometrical analysis.
Ensure the Workflow item Palette is displayed - if you cannot see the palette you might need
to press the Display Palette icon from the Visual Workflow Editor.
Create an operation item in the Visual Workflow window to the immediate right of the
OpenMOVE item by left-clicking in the Visual Workflow window. By default this will be called
Operation-1.
From the Select category of operation drop-down menu, select MOVE structural geology.
From the Select operation drop-down menu, expand the list of MOVE operations and select
Open/close the given tool.
From the drop-down list in the MOVE Value cell, select the MOVE instance of the MOVE object.
In the tool Value cell, type the OpenServer string for the Section Analysis module -
MOVE.SectionAnalysis.
In the open Value cell type 1 to open the toolbox (1 = open, 0 = close).
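The Open/close the given tool operation amounts to a single OpenServer assignment. The TagRecorder class below is a hypothetical stand-in that merely records the tag/value pairs a real connection would send; the MOVE.SectionAnalysis tag follows the manual:

```python
# Hypothetical recorder illustrating the order of OpenServer accesses the
# workflow performs; the executor is a stub, not the real connection.
class TagRecorder:
    def __init__(self):
        self.log = []

    def do_set(self, tag, value):
        self.log.append((tag, value))

def open_section_analysis(server):
    # "Open/close the given tool": 1 opens the toolbox, 0 closes it
    server.do_set("MOVE.SectionAnalysis", 1)

def close_section_analysis(server):
    server.do_set("MOVE.SectionAnalysis", 0)
```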
We have now created an operation to open the Section Analysis tool, which is part of the 2D
Kinematic Modelling module in MOVE. One operation will be listed in the Perform operations
window.
Press OK to close the Perform operations window. There should now be four items in the
Visual Workflow window, Start, ClearData, OpenMOVE, and OpenMOVETool.
Go to Step 6.
Now that the OpenMOVETool operation box has been created to open (activate) the Section
Analysis tool, an operations box will be created to perform the geometrical analysis.
Ensure the Workflow item Palette is displayed - if you cannot see the palette you might need
to press the Display Palette icon from the Visual Workflow Editor.
Create an operation item in the Visual Workflow window to the immediate right of the
OpenMOVETool item by left-clicking in the Visual Workflow window. By default this will be
called Operation-1.
Instead of using a function to perform the analysis, the input parameters can be defined by
assigning values to specific properties in the tool. This will be done by assigning values to
properties using the appropriate OpenServer string and the Add variable assignments option
in the Perform operations window.
Press OK.
A Variable assignments operation with 3 entries will appear in the Perform operations
window.
Press OK to close the Perform operations window. There should now be five items in the
Visual Workflow window, Start, ClearData, OpenMOVE, OpenMOVETool, and
PerformAnalysis.
Go to Step 7.
3.12.2.8 Step 7 - Transfer Results
Step 7 Objective:
Create a visual workflow operation to transfer the results of the MOVE analysis into
RESOLVE.
The previous PerformAnalysis operation defined the input parameters for the geometrical
analysis and calculated the results. As it stands, the results that will have been calculated as part
of the workflow will be stored in MOVE. However, to perform any numerical analysis on the
results or to incorporate the results of the analysis into further workflows the results need to be
transferred from MOVE into RESOLVE. In this step we will create an operation to transfer the
results of the analysis from MOVE into the FlexDataStore instance in RESOLVE.
To transfer the results from MOVE into RESOLVE we will use the variable assignment method.
We will assign the results table in MOVE as being equal to the AnalysisResults
FlexDataStore item created in Step 1. This will copy the table from MOVE into the
AnalysisResults FlexDataStore in RESOLVE.
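The variable-assignment transfer amounts to copying the whole results table in one assignment. The sketch below mimics this with plain Python dictionaries standing in for MOVE.SectionAnalysis.TableView0 and the AnalysisResults FlexDataStore; both structures are illustrative assumptions, not the real data objects:

```python
# Sketch of the variable-assignment transfer: the MOVE results table is
# copied wholesale (headers + rows) into the FlexDataStore stand-in.
def transfer_results(move_table, flex_data_store):
    """Assign the MOVE table to the FlexDataStore, copying every row so the
    stored results are independent of the MOVE-side table."""
    flex_data_store.clear()
    flex_data_store.update(
        headers=list(move_table["headers"]),
        rows=[list(row) for row in move_table["rows"]],
    )
    return flex_data_store
```

Copying row by row means later edits to the stored results cannot silently modify the source table, mirroring the fact that the FlexDataStore holds its own copy of the data once the workflow step has run.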
Ensure the Workflow item Palette is displayed - if you cannot see the palette you might need
to press the Display Palette icon from the Visual Workflow Editor.
Create an operation item in the Visual Workflow window to the immediate right of the
PerformAnalysis item by left-clicking in the Visual Workflow window. By default this will be
called VarAssign-1.
From the drop-down list in the first cell of the Variable column, select the AnalysisResults
instance of the FlexDataStore object.
In the set equal to column, type the OpenServer string referencing the analysis results
(MOVE.SectionAnalysis.TableView0).
Press OK to close the Assign variables/perform commands window. There should now be
six items in the Visual Workflow window, Start, ClearData, OpenMOVE, OpenMOVETool,
PerformAnalysis, and TransferResults.
Go to Step 8.
In the previous TransferResults operation, the results of the geometrical analysis were
transferred from MOVE into the AnalysisResults FlexDataStore in RESOLVE. The results
could be used as part of further analysis in RESOLVE or passed into additional visual
workflows. The MOVE analyses are now complete and the MOVE software can be closed - we
will now create an operation to do this.
Ensure the Workflow item Palette is displayed - if you cannot see the palette you might need
to press the Display Palette icon from the Visual Workflow Editor.
Create an operation item in the Visual Workflow window to the immediate right of the
TransferResults item by left-clicking in the Visual Workflow window. By default this will be
called Operation-1.
From the Select category of operation drop-down menu, select MOVE structural geology.
From the Select operation drop-down menu, expand the list of MOVE operations and select
Close the current MOVE project.
From the drop-down list in the MOVE Value cell, select the MOVE instance of the MOVE object.
We have now created an operation to close the MOVE project. One operation will be listed in
the Perform operations window.
Press OK to close the Perform operations window. There should now be seven items in the
Visual Workflow window, Start, ClearData, OpenMOVE, OpenMOVETool,
PerformAnalysis, TransferResults, and CloseMOVE.
Go to Step 9.
3.12.2.10 Step 9 - End Workflow
Step 9 Objective:
Add an operation box to terminate the visual workflow and connect the items together.
Now the CloseMOVE operation has been created, the final item required to complete the
workflow is a Terminator item. This indicates to RESOLVE that the workflow has completed
successfully. In this step we will add the terminator item to the workflow and connect the
workflow items together.
Ensure the Workflow item Palette is displayed - if you cannot see the palette you might need
to press the Display Palette icon from the Visual Workflow Editor.
Create a Terminator item in the Visual Workflow window to the immediate right of the
CloseMOVE item by left-clicking in the Visual Workflow window. By default this will be called
Terminator-1.
The terminator item has now been added to the visual workflow. For the visual workflow to run,
the operations boxes must be connected in the desired order. In this example, the operations
boxes have been aligned linearly between the Start item and the Terminator item in the
desired sequence of operations. However, visual workflow items can be placed in any order or
position in the Visual Workflow editor. For the workflow to run as expected, the sequence of
operations needs to be defined explicitly. We will do this now.
Press the Connect workflow objects together icon from the Visual Workflow editor
window.
Left-click on the Start item and, while continuing to hold down the left mouse button, drag the
cursor across to the ClearData item. Release the left mouse button when the cursor is above
the ClearData item.
Repeat this process to connect the remaining workflow items sequentially (i.e. connect the
ClearData item to the OpenMOVE item, the OpenMOVE item to the OpenMOVETool item...).
Go to Step 10.
All components of the workflow have now been created and the workflow is ready to be tested.
This is an important process that ensures that there are no errors in the logical steps or
parameters that have been defined in the workflow. Prior to testing a workflow the expected
outcome should be identified. For this workflow, it is expected that MOVE will run and the
MOVE project defined in Step 2 will be opened. A geometrical analysis will be performed on
the geological interpretation in the MOVE project. The results will be transferred into the
AnalysisResults FlexDataStore in RESOLVE. MOVE will then be closed and the workflow will
end. Once the workflow has completed, it is expected that the AnalysisResults FlexDataStore
in RESOLVE will contain the results of the geometrical analysis.
We will now save and test run the workflow we have created to ensure that the geometrical
analysis runs successfully and that the results are transferred into RESOLVE.
The Start item should become highlighted to indicate that the workflow is running and to show
the currently active workflow step.
Press the Test run one step icon twice so that the OpenMOVE item is highlighted.
Keeping the Visual Workflow Editor window open, return to the RESOLVE canvas and inspect
the contents of the AnalysisResults FlexDataStore item by double-clicking on the icon (press
Cancel to close). The FlexDataStore should be empty after the workflow has executed the
ClearData operation item.
Press the Test run one step icon six more times, waiting for each workflow step to
complete before pressing the Test run one step icon again.
To validate that the workflow has run successfully, return to the RESOLVE canvas and double-
click on the AnalysisResults FlexDataStore. The FlexDataStore should now contain the results
of the geometrical analysis.
If desired, further numerical analysis could be performed on the analysis results using
RESOLVE's native libraries or the results could be passed into another workflow.
For comparison purposes, a completed version of the RESOLVE visual workflow created in this
example as well as the MOVE project are provided as a RESOLVE archive file. This is available
in the samples installation folder. Within this folder, the file is located under:
...\resolve\Section_10-MOVE_Examples\Example_10_2_MOVE_2D_Line-
Length_Analysis