Isight User Guide
Isight and Fiper are products of, and are offered by, Dassault Systèmes Simulia Corp. under license from
Engineous Software, Inc., Cary, North Carolina, USA, a member of the Dassault Systèmes Group. These
products may be used or reproduced only in accordance with the terms of such license.
This documentation is subject to the terms and conditions of either the software license agreement signed
by the parties, or, absent such an agreement, the then current software license agreement to which the
documentation relates.
This documentation and the software described in this documentation are subject to change without prior
notice. No part of this documentation may be reproduced or distributed in any form without prior written
permission of Dassault Systèmes or its subsidiary.
Export and re-export of the Isight and Fiper Software and this documentation is subject to United States
and other export control regulations. Each user is responsible for compliance with applicable export
regulations.
Isight, Fiper, the 3DS logo, and SIMULIA are trademarks or registered trademarks of Dassault Systèmes
or its subsidiaries in the United States and/or other countries. Other company, product, and service names
may be trademarks or service marks of their respective owners. For additional information concerning
trademarks, copyrights, and licenses, see the notices at: http://www.simulia.com/products/
products_legal.html.
OPEN SOURCE PROGRAMS: This release of Isight and Fiper uses several open source or free programs
(“OS Programs”). Each such program is distributed with the Isight and Fiper Software in binary form and,
except as permitted by the applicable license, without modification. Each such program is available online
for free downloading and, if required by the applicable OS Program license, the source code will be made
available by Dassault Systèmes or its subsidiary upon request. For a complete list of OS Programs utilized
by Isight and Fiper, as well as licensing documentation for these programs, see http://www.simulia.com/
products/products_legal.html.
Dassault Systèmes or its subsidiaries may have patents or pending patent applications, trademarks,
copyrights, or other intellectual property rights covering the Isight and Fiper Software and/or its
documentation. No license of such patents, trademarks, copyrights, or other intellectual property rights is
provided or implied except as may be expressly provided in a written license agreement from Dassault
Systèmes or its subsidiary.
North America Central, Main office, West Lafayette, IN, Tel: +1 765 497 1373, simulia.central.support@3ds.com
Central, Cincinnati office, West Chester, OH, Tel: +1 513 275 1430, simulia.central.support@3ds.com
Central, Minneapolis/St. Paul office, Woodbury, MN, Tel: +1 612 424 9044, simulia.central.support@3ds.com
East, Main office, Warwick, RI, Tel: +1 401 739 3637, simulia.east.support@3ds.com
Erie office, Beachwood, OH, Tel: +1 216 378 1070, simulia.erie.info@3ds.com
Great Lakes, Main office, Northville, MI, Tel: +1 248 349 4669, simulia.greatlakes.info@3ds.com
South, Main office, Lewisville, TX, Tel: +1 972 221 6500, simulia.south.info@3ds.com
West, Main office, Fremont, CA, Tel: +1 510 794 5891, simulia.west.support@3ds.com
Argentina Dassault Systèmes Latin America, Buenos Aires, Tel: +54 11 4345 2360, Horacio.Burbridge@3ds.com
Australia Dassault Systèmes Australia Pty. Ltd., Richmond VIC, Tel: +61 3 9421 2900, simulia.au.support@3ds.com
Austria Vienna, Tel: +43 1 929 16 25-0, simulia.at.info@3ds.com
Benelux Huizen, The Netherlands, Tel: +31 35 52 58 424, simulia.benelux.support@3ds.com
Brazil SMARTTECH Mecânica, São Paulo SP, Tel: +55 11 3168 3388, smarttech@smarttech.com.br
SMARTTECH Mecânica, Rio de Janeiro RJ, Tel: + 55 21 3852 2360, smarttech@smarttech.com.br
Czech Republic Synerma s. r. o., Psáry, Prague-West, Tel: +420 603 145 769, abaqus@synerma.cz
France SIMULIA France SAS, Main office, Velizy Villacoublay, Tel: +33 1 61 62 72 72, simulia.fr.support@3ds.com
Toulouse office, Toulouse, Tel: +33 5 61 16 99 47 (phone for sales only), simulia.fr.support@3ds.com
Germany Aachen office, Aachen, Tel: +49 241 474 01 0, simulia.de.info@3ds.com
Munich office, Munich, Tel: +49 89 543 48 77 0, simulia.de.info@3ds.com
Greece 3 Dimensional Data Systems, Athens, Tel: +30 694 3076316, support@3dds.gr
India Abaqus Engineering India Pvt Ltd., Main office, Chennai, Tamil Nadu, Tel: +91 44 43443000, simulia.in.info@3ds.com
Israel ADCOM, Givataim, Tel: +972 3 7325311, shmulik.keidar@adcomsim.co.il
Italy Milano (MI), Tel: +39 02 39211211, simulia.ity.info@3ds.com
Japan Tokyo office, Tokyo, Tel: +81 3 5474 5817, tokyo@simulia.com
Osaka office, Osaka, Tel: +81 6 4803 5020, osaka@simulia.com
Korea Mapo-Gu, Seoul, Tel: +82 2 785 6707/8, simulia.kr.info@3ds.com
Malaysia WorleyParsons Advanced Analysis, Kuala Lumpur, Tel: +603 2039 9405, abaqus.my@worleyparsons.com
New Zealand Matrix Applied Computing Ltd., Auckland, Tel: +64 9 623 1223, support@matrix.co.nz
Poland BudSoft Sp. z o.o., Sw. Marcin, Tel: +48 61 8508 466, budsoft@budsoft.com.pl
Russia, Belarus & Ukraine TESIS Ltd., Moscow, Tel: +7 495 612 44 22, info@tesis.com.ru
Scandinavia Västerås, Sweden, Tel: +46 21 15 08 70, info@simulia.se
Singapore WorleyParsons Advanced Analysis, Singapore, Tel: +65 6735 8444, abaqus.sg@worleyparsons.com
South Africa Finite Element Analysis Services (Pty) Ltd., 7700 Mowbray, Tel: +27 21 448 7608, feas@feas.co.za
Spain Principia Ingenieros Consultores, S.A., Madrid, Tel: +34 91 209 1482, abaqus@principia.es
Taiwan Simutech Solution Corporation, Taipei, R.O.C., Tel: +886 2 2507 9550, ariel@simutech.com.tw
Thailand WorleyParsons Advanced Analysis Group, Bangkok, Tel: +66 2 689 3000, abaqus.th@worleyparsons.com
Turkey A-Ztech Ltd., Istanbul, Tel: +90 216 361 8850, info@a-ztech.com.tr
United Kingdom Dassault Systèmes Simulia Ltd., Main office, Warrington, Tel: +44 1 925 830900, simulia.uk.info@3ds.com
Dassault Systèmes Simulia Ltd., Kent office, Nr. Sevenoaks, Tel: +44 1 732 834930, simulia.uk.info@3ds.com
Sales Only
North America East, Mid-Atlantic office, Forest Hill, MD, Tel: +1 410 420 8587, simulia.east.support@3ds.com
Great Lakes, East Canada office, Toronto, ON, Canada, Tel: + 1 416 466 4009, simulia.greatlakes.info@3ds.com
South, Southeast office, Acworth, GA, Tel: +1 770 795 0960, simulia.south.info@3ds.com
West, Pacific Southwest office, Southern CA and AZ, Tustin, CA, Tel: +1 510 794 5891, Ext. 152, simulia.west.info@3ds.com
West, Rocky Mountains office, Boulder, CO, Tel: +1 510 794 5891, Ext. 151, simulia.west.info@3ds.com
Finland Vantaa, Tel: +358 9 2517 8157, info@simulia.fi
India Abaqus Engineering India Pvt Ltd, Delhi office, New Delhi, Tel: +91 11 65171877, simulia.in.info@3ds.com
Abaqus Engineering India Pvt Ltd, Pune office, Tel: +91 20 30483382, simulia.in.info@3ds.com
China Main office, Chaoyang District, Beijing, P. R. China, Tel: + 86 10 65362345, simulia.cn.info@3ds.com
Shanghai office, Shanghai, P. R. China, Tel: + 86 21 5888 0101, simulia.cn.info@3ds.com
Table of Contents
Preface  13
  What is Isight?  13
  Documentation  14
  Conventions Used in This Book  14
    Typographical Conventions  15
    Mouse Conventions  16
    Keyboard Conventions  16
  Platform Information  16
  Support  17
    SIMULIA Online Support System  17
    Technical Engineering Support  18
    Systems Support  19
    Anonymous FTP Site  19
    Contacting Technical Support  19
1 Components Overview  21
  Introduction  22
  Editing Component Properties  25
  Setting Component Preferences  36
  Using the Component API  37
Index  611
Preface
This book is your guide to using Engineous-supplied process and activity components.
What is Isight?
Isight provides a suite of visual and flexible tools to set up and manage computer
software required to execute simulation-based design processes, including commercial
CAD/CAE software, internally developed programs, and Excel spreadsheets. The open
API supports the development of custom interfaces to link additional in-house and
commercial applications by partners and customers.
The rapid integration of these applications and Isight’s ability to automate their
execution greatly accelerates the evaluation of product design alternatives.
Using advanced techniques such as optimization, DFSS (Design for Six Sigma),
approximations, and DOE, engineers are able to thoroughly explore the design space.
Advanced, interactive post processing tools allow engineers to see the design space
from multiple points of view. Design trade-offs and the relationships between
parameters and results are easily understood and assessed, leading to the best possible
design decisions.
The process integration and design optimization capabilities in Isight enable design
organizations to reduce design cycle time and manufacturing cost, and significantly
improve product performance, quality, and reliability.
Documentation
The following manuals are available in the Isight and Fiper library:
Typographical Conventions
This book uses the following typographical conventions:
italics
    Introduces new terms with which you may not be familiar, and is used occasionally for emphasis.
bold
    Emphasizes important information. Also indicates button, menu, and icon names on which you can act. For example, click Next.
UPPERCASE
    Indicates the name of a file. For operating environments that use case-sensitive filenames (such as UNIX), the correct capitalization is used in information specific to those environments. Also indicates keys or key combinations that you can use. For example, press the ENTER key.
monospace
    Indicates syntax examples, values that you specify, or results that you receive.
monospaced italics
    Indicates names that are placeholders for values that you specify. For example, filename.
forward slash /
    Separates menus and their associated commands. For example, Select File / Copy means that you should select Copy from the File menu. The slash also separates directory levels when specifying locations under UNIX.
vertical rule |
    Indicates an “OR” separator used to delineate items.
brackets [ ]
    Indicates optional items. For example, in the following statement: SELECT [DISTINCT], DISTINCT is an optional keyword. Also indicates sections of the Windows Registry.
braces { }
    Indicates that you must select one item. For example, {yes | no} means that you must specify either yes or no.
ellipsis . . .
    Indicates that the immediately preceding item can be repeated any number of times in succession. An ellipsis following a closing bracket indicates that all information in that unit can be repeated.
Mouse Conventions
Click
    Point to an object with the mouse pointer and momentarily press the left mouse button.
Double-click
    Press the left mouse button twice.
Right-click
    Momentarily press the right mouse button.
Drag
    Press and hold the left mouse button while dragging item(s) to another part of the screen.
SHIFT+Click
    Click an object to select it; then, press and hold the SHIFT key. Click another object to select the intervening series of objects.
CTRL+Click
    Press and hold the CTRL key; then, click a selection. This lets you select or deselect any combination of objects.
Keyboard Conventions
Select menu items by using the mouse or by pressing ALT plus the key letter of the menu name or item.
Platform Information
For complete details on supported platforms, refer to the following website:
http://www.simulia.com/support/sup_systems_info.html
Support
Both technical engineering support (for problems with creating a model or performing
an analysis) and systems support (for installation, licensing, and hardware-related
problems) for Isight are offered through a network of local SIMULIA support offices.
Contact information is listed in the front of each manual.
SIMULIA Online Support System
To use the SIMULIA Online Support System (SOSS), you need to register with the system. Visit the My Support page at www.simulia.com for instructions on how to register.
Many questions can also be answered by visiting the Products page and the Support
page at www.simulia.com. The information available online includes:
Please have the following information ready before contacting the technical
engineering support hotline, and include it in any written contacts:
• The release of Isight that you are using, which can be obtained by accessing the VERSION file at the top level of your Isight installation directory.
• The symptoms of any problems, including the exact error messages, if any.
When contacting support about a specific problem, any available product output files
may be helpful in answering questions that the support engineer may ask you.
The support engineer will try to diagnose your problem from the model description and
a description of the difficulties you are having. The more detailed information you
provide, the easier it will be for the support engineer to understand and solve your
problem.
If the support engineer cannot diagnose your problem from this information, you may
be asked to supply a model file. The data can be attached to a support incident in the
SIMULIA Online Support System (SOSS). It can also be sent by means of e-mail,
tape, disk, or ftp. Please check the Support Overview page at www.simulia.com for the
media formats that are currently accepted.
If you are contacting us to discuss an existing problem, please give the receptionist the support engineer's name if contacting us via telephone, or include it at the top of any e-mail correspondence.
Systems Support
Systems support engineers can help you resolve issues related to the installation and
running of the product, including licensing difficulties, that are not covered by
technical engineering support.
You should install the product by carefully following the instructions in the installation
guide. If you encounter problems with the installation or licensing, first review the
instructions in the installation guide to ensure that they have been followed correctly. If
this does not resolve the problems, consult the SIMULIA Answers database in the
SIMULIA Online Support System for information about known installation problems.
If this does not address your situation, please create an incident in the SOSS and
describe your problem.
In addition, contact information for offices and representatives is listed in the front of
this manual.
Training
SIMULIA offices offer regularly scheduled public training classes, including classes
on Isight. We also provide training seminars at customer sites. All training classes and
seminars include workshops to provide practical experience with our products. For a
schedule and descriptions of available classes, see www.simulia.com or call your local
representative.
Feedback
We welcome any suggestions for improvements to Isight software, the support
program, or documentation. We will ensure that any enhancement requests you make
are considered for future releases. If you wish to make a suggestion about the service
or products, refer to www.simulia.com. Complaints should be addressed by contacting
your local office or through www.simulia.com.
1 Components Overview
“Introduction,” on page 22
“Editing Component Properties,” on page 25
“Setting Component Preferences,” on page 36
“Using the Component API,” on page 37
Introduction
Components are an essential part of model construction since they are the building
blocks of a workflow. Numerous components have been developed by Engineous and
are included with Isight. These components are accessed using the Design Gateway,
but each has a different editor and options that can be configured.
All Isight components can be divided into two types: process and activity.
Process components are components that are designed to contain a workflow, which is
executed some number of times depending on the component’s own specific logic,
essentially “driving” the execution of that workflow (thus, they are also sometimes
referred to as drivers or design drivers). This book discusses eight Engineous-supplied
process components: DOE, Loop, Monte Carlo, Optimization, SDI, Six Sigma,
Taguchi Robust Design, and Task.
The following list provides a brief overview of the components described in this book.
The components available to you may differ slightly based on your Isight license.
DOE. This component is a process component that allows you to use Design of
Experiment (DOE) techniques to intelligently sample the design space to improve
your design.
Database. This activity component allows you to retrieve data from an existing
relational database and use the data inside an Isight model.
Excel. This activity component allows you to map parameters and execute macros
using information from an Excel spreadsheet.
Fast Parser. This activity component allows an Isight model to use file parsing
programs. This component is mostly to aid in converting models created in
iSIGHT to Isight.
iSIGHT File Parser. This activity component allows an Isight model to use file
parsing programs (FDL) created using the iSIGHT Advanced Parser. This feature
is mostly to aid in converting iSIGHT description files into Isight models.
Mail. This activity component allows you to send e-mail messages containing
parameter information. This is often used to send status or completion notification
for long-running jobs.
MATLAB. This activity component allows you to send data to MATLAB, execute
commands/scripts, and retrieve data from MATLAB.
OS Command. This activity component allows you to execute a command to your
operating system, typically to run an executable analysis code.
Pause. This activity component provides a mechanism for inserting pre-defined
pauses within a workflow and optionally performing various interactive activities
during the pause.
Reference. This activity component allows you to reference internal models and
submodels for use in your current model. It provides a means for collaboration by
referencing models that have been published to a Fiper Library. It is also the
primary means of enabling Federation (Business-to-Business) capabilities in the
Fiper environment, allowing you to select a model that resides at a partner
organization (which has a Fiper ACS) and insert a reference to it within your own
workflow.
Script. This activity component allows you to execute Java code in your model. It
is used to perform calculations that are too complex for the Calculator component,
such as those involving loops or conditional statements.
Word. This activity component allows you to populate Word document files
(*.doc, *.docx) with the values of Isight parameters.
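As the Script entry above notes, the component runs Java code for logic beyond a single Calculator expression. The sketch below shows the kind of loop-and-branch logic that calls for a Script component, written as a standalone Java class; the class, method, and parameter names are invented for illustration, and inside Isight the code would read and write model parameters rather than plain method arguments.

```java
// Illustrative only: plain Java showing the kind of loop-and-branch
// logic that exceeds a single Calculator expression. In a real Script
// component, the stresses and the allowable would be Isight parameters.
public class ScriptLogicSketch {

    // Pick the largest stress among several load cases, then branch on it.
    public static double worstCaseMargin(double[] stresses, double allowable) {
        double worst = 0.0;
        for (double s : stresses) {          // loop: not expressible in Calculator
            if (s > worst) {
                worst = s;
            }
        }
        // conditional: clamp the margin at zero for failed cases
        double margin = allowable - worst;
        return margin > 0.0 ? margin : 0.0;
    }

    public static void main(String[] args) {
        double[] caseStresses = {210.0, 185.5, 240.2};
        System.out.println(worstCaseMargin(caseStresses, 250.0));
    }
}
```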
Editing Component Properties
1. Select the component whose properties you want to edit; then, click the Properties button, right-click on the component and select Properties, or select Properties from the Design Gateway View/Component menu.
This example shows a DOE component’s properties. The properties are the same for all components of the same type (process or activity). The dialog box is divided into six tabs: Execution, Affinities, Icon, Description, Component Type, and Contents. If you have enabled the advanced LSF DRM option in the Preferences editor, a DRM Settings tab is also available. For more information on this option, refer to the Isight User’s Guide.
Note: Once open, the Properties dialog box can be toggled between different
components by selecting a new component on the Design Gateway Model
Explorer. If you want the component that is currently displayed to always be
displayed no matter what component is selected on the Model Explorer, click the
Lock check box at the bottom of the dialog box. This allows you to view the
properties for multiple components side-by-side if desired.
Delay before execution. This option allows you to set a length of time that the
component will wait before it executes. This is commonly used for
demonstrations in which you want some delay to occur to allow for discussion
as the model is executing.
Timeout. Execution of the component will stop when this time limit is
reached. If set to 0, no timeout limit is enforced. Typically, you would only set
this property if you are concerned that this component could encounter
problems during execution that would not allow it to complete and return. If it
is necessary to set this limit, it should be set to a time much greater than the
expected normal time for execution so that the component is not incorrectly
terminated.
If the model is being executed in the Fiper environment, Fiper will attempt to
execute the component on a different Fiper Station if one is available.
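The Timeout behavior described above (a limit of 0 disables the timeout; otherwise execution is stopped once the limit is reached) can be sketched in plain Java. This is only an illustration of the concept using the standard concurrency library, not Isight's implementation.

```java
import java.util.concurrent.*;

// Illustrative sketch of the Timeout property: run a task and stop
// waiting once the limit is reached; a limit of 0 means no timeout.
public class TimeoutSketch {

    public static String runWithTimeout(Callable<String> task, long limitMillis)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> future = pool.submit(task);
            if (limitMillis == 0) {
                return future.get();              // 0 => no timeout enforced
            }
            try {
                return future.get(limitMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                future.cancel(true);              // stop the execution
                return "TIMED OUT";
            }
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWithTimeout(() -> "done", 0));
        System.out.println(runWithTimeout(() -> {
            Thread.sleep(5_000);                  // "slow component"
            return "done";
        }, 50));
    }
}
```

This also shows why the limit should be well above the expected run time: a too-tight limit terminates a healthy run.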
(Process components only) Maximum parallel batch size. This option sets
the maximum number of design points (subflow executions) that a process
component can submit simultaneously if it supports parallel execution. If the
Automatic check box is selected (the recommended setting), Isight will
automatically determine the best number of executions to run in parallel.
If you do not select the Automatic check box, you must enter the maximum
number of design points to be run in parallel. The value must be greater than or
equal to one (1).
Note: Regardless of this property value, some process components may run
fewer subflows than the maximum; and some process components may run
only a single subflow at a time. Refer to the process component documentation
for details on parallel execution options.
(Process components only) Estimated subflow runs. Process components that
execute their subflow numerous times can (and typically do) provide an
estimate of the number of times they will run that subflow based on the details
of how they are configured. These estimates can be used to track the progress
of the execution toward completion. Typically, this number is quite accurate,
especially for components that are configured to run an explicit number of
design points (subflow runs). However, since the estimates are sometimes just
based on simple rules using the values of configuration options, it is possible
they could be quite far from what you know to be accurate. In those cases, you
can override the estimate provided by the component so that the workflow
progress can be more accurately monitored.
(Process components only) Allow subflow runs for this component to save
parameters to the database. Select this option to save the subflow runs of the
selected component’s parameters to the database. Disabling (turning off) this
option can increase the execution speed of the model, but it does so by
sacrificing the saving of data that can be used for post-processing or a future
re-execution using the database lookup feature. For more information about
the Save to Database option, refer to the Isight User’s Guide.
(Activity components only) Database Lookup. Isight has the capability, prior
to executing an activity component, to examine all past executions (or only
those within the current job) of this component to find out if this component
has been run with this exact set of input values. If so, the assumption is that the
output values will be identical and can be restored. Isight can restore these
values rather than executing the activity component. For long-running
components, such as many simulation codes, this can save valuable execution
time, since the results are already in the database. Also, in a Fiper ACS environment, database lookup bypasses the distribution of the component to a Fiper Station, reducing execution overhead in the workflow and saving the additional dispatch time.
Database lookup can be set at the model level (in the Model Properties editor)
or the component level. For information about database lookup at the model
level, refer to the Isight User’s Guide.
If the model enables database lookup, all activity components in the model are
enabled for database lookup, unless otherwise disabled by the activity
component’s metamodel (for example, the Mail component). Similarly, if the
model does not allow database lookup, then all activity components in the
model are disabled for database lookup.
Once you have enabled database lookup, you can specify how the component
restores runs:
• Do not restore any runs. The component is always executed. This option is useful when data other than the component’s inputs could affect the input-output relationship.
• Restore only successful runs. This is the default option because it is
assumed that if the run failed it was due to some external or transient cause
that may be corrected when this later execution occurs.
• Restore whether successful or not. There are, however, cases when you
want to restore failed runs. For example, you may want to restore simcodes
(or any other activity components) that fail with certain inputs. This often
occurs when exploring design spaces with exploratory techniques and with
very input-sensitive simcodes. It is not uncommon for these simcodes to
run for a long time before crashing and then crash every time for a given
input set. In this case, it is beneficial to restore failed runs to avoid
needlessly executing the simcode that will fail for this input anyway.
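Conceptually, database lookup behaves like a cache keyed on a component's exact input set, with a policy controlling whether failed runs may be restored. The sketch below is a rough in-memory restatement of that idea; every class and method name is invented, and Isight's real implementation stores runs in its database.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of database lookup as a cache keyed on the exact
// input set. All names here are invented for illustration.
public class LookupSketch {

    // A past run: the outputs produced and whether the run succeeded.
    public record PastRun(List<Double> outputs, boolean succeeded) {}

    private final Map<List<Double>, PastRun> pastRuns = new HashMap<>();
    private int executions = 0;

    // Execute the "component" unless an eligible past run can be restored.
    public List<Double> run(List<Double> inputs, boolean restoreFailed) {
        PastRun hit = pastRuns.get(inputs);
        if (hit != null && (hit.succeeded() || restoreFailed)) {
            return hit.outputs();            // restore instead of executing
        }
        executions++;                        // no eligible past run: execute
        List<Double> outputs = List.of(inputs.get(0) * 2.0);
        pastRuns.put(inputs, new PastRun(outputs, true));
        return outputs;
    }

    public int executionCount() { return executions; }

    public static void main(String[] args) {
        LookupSketch sketch = new LookupSketch();
        sketch.run(List.of(3.0), false);
        sketch.run(List.of(3.0), false);     // same inputs: restored, not rerun
        System.out.println(sketch.executionCount());
    }
}
```

Running the sketch twice with the same inputs executes the underlying code only once, which is the time saving the lookup provides for long-running simulation codes.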
(Activity components only) Click the Advanced... button to change the
database lookup Compatibility Version number.
Note: If database lookup is disabled for the model, this button is only available
if you activate the database lookup feature using the Enable button described
above.
Isight allows you to specify commands that are executed prior to (Prologue) or
after (Epilogue) execution.
• Prologue. Click the Prologue tab; then, enter commands in the prologue
area to be executed prior to the execution of this component. The
commands must be expressed using Java syntax. You can enter the
commands after the predefined statements.
• Epilogue. Click the Epilogue tab; then, enter commands in the epilogue
area to be executed after the execution of this component. The commands
must be expressed using Java syntax. You can enter the commands after the
predefined statements.
Note: You may specify both Prologue and Epilogue commands for the same
component.
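For illustration, the kind of plain Java-syntax commands a prologue and epilogue might contain could look like the method bodies below. The class, the log variable, and the messages are invented, and the predefined statements that Isight places before your code are not shown.

```java
// Illustrative only: the statements inside prologueBody() and
// epilogueBody() stand in for commands you might enter in the
// Prologue and Epilogue tabs; in Isight they would follow the
// predefined statements.
public class PrologueEpilogueSketch {

    static StringBuilder log = new StringBuilder();

    static void prologueBody() {
        // e.g. record a start marker before the component executes
        log.append("component starting;");
    }

    static void epilogueBody() {
        // e.g. record completion after the component executes
        log.append("component finished");
    }

    public static void main(String[] args) {
        prologueBody();
        // ... the component itself would execute here ...
        epilogueBody();
        System.out.println(log);
    }
}
```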
Note: You can click the Details... button to get more information on a syntax error.
Repeat this procedure, as desired, until your Java script is complete; then, click
the Close button.
3. Click the Affinities tab. This tab contains information that helps a component
running in an ACS environment find the proper Fiper Station (and in doing so, a
remote computer) for execution. Affinities are ignored when executing the model
locally in Isight.
Note: The affinities for the Fiper Station (which are used to match against
component affinities) are set in the station.properties file in the Fiper
installation directory for that Station. For more information on using Fiper
Stations, refer to the Fiper Installation and Configuration Guide that matches your
ACS combination.
Set any of the following options (in some cases, these options are
automatically set) for execution in the Fiper environment:
user, such as Pause, Excel (when the Show Excel option is used), or
MATLAB (when displaying graphs).
• Other. Specify any additional information that the component must search
for during execution. You can use this setting for custom keywords or other
requirements such as installed software. Any character string can be used.
Only Stations that have this matching keyword defined as an affinity are
used during execution.
• Group Name. You can define a group to execute various components
(within the same level of the model hierarchy) on the same machine simply
by specifying a name for the group. For any defined group, Fiper will
determine which Fiper Station to execute those components on based on the
combination of all of the other affinities for all components in the group.
For example, if component A is specified to run only on Windows,
component B specifies an “Other” affinity of “CAE”, and both A and B
specify the group “groupAB”, then Isight will only select a Fiper Station
that is running on Windows and that says it supports “CAE” on which to
execute both components A and B. If no Station exists to support the
combined affinities, none of the components in the group will be executed
and the job will fail for this reason.
• DRM Mode. Specify the distributed resource manager (DRM) that will be
used to dispatch workload items in a Fiper environment. If you are
changing this option for a top level component, you will be prompted to
indicate if you want to change the DRM mode for all components in the
subflow. The default is Any.
The meaning of this setting at runtime depends on the DRM types enabled on the ACS. For example, if only the Fiper DRM is enabled on the ACS, all components are dispatched with the Fiper DRM; in this case, components with the setting LSF will fail.
If both Fiper and the LSF DRMs are enabled on the ACS, then components
that specify Any will be dispatched with the ACS’s default DRM (the first
DRM listed in the ACS configuration file), with the exception of script, data
exchanger, calculator, and process components, which will be dispatched
with the Fiper DRM. Components with explicit Fiper or LSF settings will
be dispatched with the DRM specified.
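The dispatch rules above can be summarized as a small decision function. The sketch below merely restates those rules in Java; the string codes, component-kind names, and the method itself are invented for illustration and are not Isight API identifiers.

```java
import java.util.Set;

// Illustrative sketch of the DRM dispatch rules described above.
// "FIPER", "LSF", "ANY", and the component-kind names are shorthand
// for this sketch only.
public class DrmDispatchSketch {

    // Component kinds always dispatched with the Fiper DRM when the
    // setting is Any and both DRMs are enabled.
    static final Set<String> FIPER_ONLY_KINDS =
            Set.of("script", "data exchanger", "calculator", "process");

    // Decide which DRM dispatches a component, given the DRMs enabled
    // on the ACS and the ACS's default DRM.
    public static String dispatchWith(String setting, String kind,
                                      Set<String> enabled, String acsDefault) {
        if (!setting.equals("ANY")) {
            // explicit Fiper/LSF setting: fail if that DRM is not enabled
            return enabled.contains(setting) ? setting : "FAIL";
        }
        if (FIPER_ONLY_KINDS.contains(kind)) {
            return "FIPER";
        }
        return acsDefault;   // "Any" falls back to the ACS default DRM
    }

    public static void main(String[] args) {
        Set<String> both = Set.of("FIPER", "LSF");
        System.out.println(dispatchWith("ANY", "simcode", both, "LSF"));
        System.out.println(dispatchWith("ANY", "process", both, "LSF"));
        System.out.println(dispatchWith("LSF", "simcode", Set.of("FIPER"), "FIPER"));
    }
}
```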
4. Click the Icon tab. The default icon is the one provided by the component itself. If
desired, you can change this to use an image of your choice that might better
represent/identify this specific instance of the component as it is used in this
model.
6. Click the Component Type tab. This tab shows you information about the
component type (as it is published to the Library). The information displayed
includes the component’s full name, display name, version number, icon, and
description.
Set the When Loading Model Use option. This option gives you the ability to
specify which version of the component to use, if the component has more
than one published version in the Library (only available with a local Library -
standalone installation). You can set this option for each component in your
model. When the model is reloaded, the version of the component that is
fetched from the Library is based on this setting.
• The Latest Version. This option always loads the most recent version of
the component from the Library. It is the default setting.
• This Specific Version. This option always loads the version specified in
the Version field (to the left of the drop-down list) when the selection is
originally made.
7. (optional) Click the DRM Settings tab. This tab allows you to set the specific LSF
requirements for an individual component in the workflow when that component is
dispatched with the LSF DRM. These settings affect the properties of the LSF job
used to dispatch the workitem at runtime.
This tab is available only if the Enable advanced LSF DRM options setting is
selected in the Preferences editor. For more information on this option, refer to
the Isight User’s Guide.
8. Click the Contents tab. A tab similar to the one shown below appears.
The Contents tab shows the parameters and files for the current component. The
total and types of parameters and files are displayed.
9. Click OK to close the dialog box and save your changes. Click Apply to save your
changes, but keep the dialog box open.
To access the interface that allows you to set Isight component preferences:
1. Select Preferences from the Edit menu. The Isight Preferences dialog box
appears.
2. Select Components on the left side of the dialog box. The General Component and
Plugin Preferences dialog box appears.
3. Click a radio button for the When a new version of a component or plugin is
published in the library option on the right side of the dialog box. This option is
used to determine when a new version of a component or plugin, which has been
added to the library, will be used by Isight. You can begin using the new version
the next time Isight is started (this setting is recommended) or the next time a
model is loaded or executed.
4. Expand the Components link to view the set of components that have specific
preferences available (note that not all components provide preferences). Proceed
to any of the following sections to set preferences for specific components:
This chapter describes the usage of the Engineous-supplied process components that
are included with a standard Isight installation. It is divided into the following topics:
“Introduction,” on page 40
Introduction
Components are an essential part of model construction. Numerous components have
been developed by Engineous, and are included with Isight. These components are
accessed using the Design Gateway, but each has a different editor and options that can
be configured.
All Isight components can be divided into two types: process and activity.
Process components are components that are designed to contain a workflow, which is
executed some number of times depending on the component’s own specific logic,
essentially “driving” the execution of that workflow (thus, they are also sometimes
referred to as drivers or design drivers).
Overview
Design of Experiments is a general term that refers to any of the many formal methods
available for setting parameter values in a set of experiments. In Isight, a DOE
experiment is defined as an execution of the workflow defined within the DOE
component. You can use the DOE component in Isight for the following purposes:
creating a data set that can be used to generate approximation models of the design
space
understanding the effects of the parameters you want to study (i.e., the input
parameters that may impact performance/quality)
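At its core, any DOE technique produces a design matrix: one row of factor settings per experiment. As a minimal illustration of the idea (a hedged sketch with hypothetical names, not the Isight API), a full-factorial design enumerates every combination of factor levels:

```python
from itertools import product

def full_factorial(factors):
    """Generate a full-factorial design matrix.

    `factors` maps each factor name to the list of levels to study;
    every combination of levels becomes one design point (experiment).
    """
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# Two factors at two levels each yields 4 design points.
matrix = full_factorial({"thickness": [1.0, 2.0], "radius": [5.0, 10.0]})
for point in matrix:
    print(point)
```

Each dictionary in `matrix` corresponds to one execution of the workflow inside the DOE component.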
Available Techniques
The following Design of Experiments techniques are available in Isight (although all
techniques listed below may not be included in your installation):
For more detailed information on these techniques, see “DOE Reference Information,”
on page 508.
Note: Techniques can be added to the DOE component by publishing new “DOE
Technique” plug-ins to the Library. For more information, refer to the Isight
Development Guide.
1. Double-click the component icon to start the component editor. For more
information on inserting components and accessing component editors, refer to the
Isight User’s Guide.
The Component Editor dialog box appears.
The editor is divided into four tabs: General, Factors, Design Matrix, and Post
Processing. The General tab allows you to select a technique, set technique
options, and set general DOE execution options (see step 2). The Factors tab
allows you to select factors to study for your design. The Design Matrix tab
provides options for viewing and modifying the generated design matrix. The Post
Processing tab allows you to specify actions taken after running your set of
experiments. The options available on each of these tabs are defined in greater
detail in the following sections.
2. Select the desired technique using the DOE Technique button. The technique’s
options appear in the Technique Options area, and information about the technique
appears in the Technique Description area on the right side of the editor.
Note: You can set default behavior for this option using the component preferences
as described in “Setting DOE Component Preferences,” on page 65.
4. Set the options for your technique as desired in the Technique Options area. For
more information on these options, see “Setting Technique-Specific Options,” on
page 58; then, return to this section.
5. Set the following options in the Execution Options area of the General tab:
Execute DOE design points in parallel. If selected, all of the design points
defined by the design matrix will be submitted for execution simultaneously.
You may need to clear (deselect) this option if components within the DOE
subflow have license limitations or other requirements that mandate only one
execution at a time.
Action when design point fails. If a design run fails during execution of the
DOE technique, you can choose to have Isight ignore the failed run, fail the
entire execution, retry the failed run (a specific number of times), or replace
the failed run by re-executing with a specified percentage modification of the
failed run.
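The failure-handling choices above amount to a small policy around each design-point execution. The following sketch (hypothetical helper, not Isight's implementation) mirrors the four options: ignore, fail, retry a fixed number of times, or replace the point with a randomly perturbed copy:

```python
import random

def run_design_point(point, execute, action="ignore", retries=3,
                     perturb_pct=0.1, rng=random):
    """Run one design point with a failure policy.

    `execute` runs the subflow for one design point (a dict of factor
    values) and raises RuntimeError on failure. `action` mirrors the
    "Action when design point fails" choices described above.
    """
    try:
        return execute(point)
    except RuntimeError:
        if action == "fail":
            raise                      # fail the entire execution
        if action == "retry":
            for _ in range(retries):   # retry a fixed number of times
                try:
                    return execute(point)
                except RuntimeError:
                    continue
            return None
        if action == "replace":        # re-execute a perturbed point
            moved = {k: v * (1 + rng.uniform(-perturb_pct, perturb_pct))
                     for k, v in point.items()}
            return execute(moved)
        return None                    # ignore the failed run

# Demo: a flaky solver that fails on its first call, then succeeds.
attempts = []
def flaky_solver(point):
    attempts.append(point)
    if len(attempts) == 1:
        raise RuntimeError("solver failed")
    return sum(point.values())

result = run_design_point({"x": 1.0, "y": 2.0}, flaky_solver, action="retry")
```

With the "retry" policy, the second attempt succeeds and returns the subflow result.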
6. Click the Advanced Options... button. The Advanced Execution Options dialog
box appears.
Execute designs in random order. Select this option if you want the design
points defined by the design matrix to be executed in random order.
Read response values from file (don’t execute subflow). Select this option if
you do not want the DOE component to actually execute an Isight subflow for
each design point, but to instead read the response values for each design point
from a specified file. You can locate the file using the Browse... button. The
format of the file must be a first row with parameter names and each
subsequent row containing the values for the parameters for that design point.
This feature is useful when you have a complete data set (input and output
values) already compiled and simply want to use DOE as a post-processing
tool. This capability is more directly accessible using the Data File technique
and selecting the File contains responses option. For more information on this
technique, see “Setting Technique-Specific Options,” on page 58.
Note: For the purpose of storing results in the results database, the DOE
component still makes a call to execute the subflow for every design point.
Thus, any components that reside in the subflow will actually be executed
(unless they are disabled). Since the intent of this option is to not require
execution of a subflow to obtain outputs, it is assumed that in most cases the
subflow will be empty.
Allow the following to be turned off by parameters. This option allows you
to de-activate (turn off) the selected types of design parameters (factors,
responses) using input parameters to the DOE component. If this option is
selected, boolean input parameters will be created in an Active Factors/Active
Responses aggregate under the Mapped Options and Attributes aggregate
parameter. The parameters are selected by default. If any of the parameters are
not selected at runtime, then those design parameters will not be used during
execution.
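The response-file format described above (a first row of parameter names, then one row of values per design point) can be parsed with a short sketch like the following. The whitespace-delimited layout and the function name are assumptions for illustration, not Isight's actual reader:

```python
def read_response_values(text):
    """Parse a response-values file: the first row holds parameter names,
    and each subsequent row holds one design point's values."""
    lines = [ln.split() for ln in text.strip().splitlines() if ln.strip()]
    names, rows = lines[0], lines[1:]
    return [dict(zip(names, map(float, row))) for row in rows]

# Two design points with two response parameters each.
sample = """stress deflection
101.5 0.02
98.7  0.03
"""
points = read_response_values(sample)
```

Each entry of `points` supplies the response values for one design point, in place of executing the subflow.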
The tab presents a table of available input parameters to select as factors to study,
along with columns for attributes to specify for each factor. The attributes to
specify are determined by the DOE technique being used.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the parameters you want to define as factors to study by clicking the check
box that corresponds to the parameter. To add all the selected parameters as
factors, click the Check button at the bottom of the tab. If no parameters are
selected, you will be prompted to add all parameters as factors. To deselect all the
parameters, click the Uncheck button.
If you select an array as a factor, a single entry represents the entire array in
the model; it is expanded during execution to define a factor for each element
in the array. You can override the attributes for each individual element by
selecting them separately.
Note that aggregates themselves cannot be selected. You must expand them
(click the expand icon) and select their scalar members and array members.
4. Change any of the default values, which are provided for all attributes for a
selected factor, as desired.
Although each technique defines the appropriate set of attributes for a factor, the
following set of attributes are common among many of the techniques.
# Levels. This attribute is the number of levels at which you wish to study the
factors. A change to this attribute will automatically readjust the Levels for
this factor.
Lower/Upper. These attributes are lower/upper levels for the factor. A change
to one of these attributes will automatically calculate new levels evenly
distributed between the Lower and Upper values.
Baseline. This attribute is the value to be used for converting Levels into
values when the Relation is “%” or “diff.”
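The common attributes above interact as follows. This sketch (hypothetical names, not the Isight API) distributes # Levels evenly between Lower and Upper, and uses the Baseline to convert a level into a value for the "%" and "diff" relations:

```python
def factor_levels(lower, upper, n_levels):
    """Evenly distribute `n_levels` study levels between Lower and Upper."""
    if n_levels == 1:
        return [lower]
    step = (upper - lower) / (n_levels - 1)
    return [lower + i * step for i in range(n_levels)]

def level_to_value(level, baseline, relation):
    """Convert a level into an actual parameter value using the Baseline."""
    if relation == "%":
        return baseline * (1 + level / 100.0)   # level is a percentage offset
    if relation == "diff":
        return baseline + level                 # level is an absolute offset
    return level                                # otherwise the level is the value

levels = factor_levels(5.0, 20.0, 4)
pct_value = level_to_value(10, 50.0, "%")
print(levels)                               # [5.0, 10.0, 15.0, 20.0]
print(level_to_value(5.0, 50.0, "diff"))    # 55.0
```

Changing Lower, Upper, or # Levels regenerates the evenly spaced list, which matches the automatic readjustment behavior described above.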
5. Select the Update factor baselines to current values when executing option at
the bottom of the tab if you want the baseline values updated to the current
parameter values before executing. If Isight previously modified a parameter, this
option allows for automatic updating of the baseline of all factors to the current
parameter values in Isight, prior to executing the DOE technique, which will
re-adjust the values to be studied appropriately. The default is to have this option
deactivated, retaining user-defined settings. This option is useful if the DOE
technique is executed after another element of the workflow that might change the
parameter values (for example, after an Optimization component), so that the DOE
study can be performed around the new design point.
Note: You can also set factor options using the Edit... button at the bottom of the
editor. For more information, see “Editing Attributes for Multiple Parameters,” on
page 57.
6. Proceed to “Using the Design Matrix Tab,” on this page.
1. Click the Design Matrix tab. The contents of the tab appear.
This tab displays the design matrix generated for the technique selected (“Opening
the Editor and Using the General Tab,” on page 43) and the factors selected and
attributes specified on the Factors tab (“Using the Factors Tab,” on page 46).
Note: If an array was selected as a factor, only a single item will be displayed in
the design matrix for the entire array. At runtime, the individual elements of the
array will be assigned to columns of the design matrix, as necessary.
2. (Optimal Latin Hypercube technique only). Click the Generate button to create
your design matrix. Generation is a manual step for this technique because
optimizing the matrix can take a significant amount of time.
The Design Matrix Generation Status dialog box appears, showing you how the
matrix generation is progressing.
Note: If you do not manually generate the design matrix in the component editor, it
will automatically be generated as part of the execution.
Examine the information displayed on the Status dialog box; then, click the Close
button once the matrix has been created. If you click the Close button before the
optimization of the design matrix is complete, the resulting design matrix will be
the last one generated during the optimization process.
3. If the selected DOE technique provides any special options for configuring the
design matrix, an Options... button appears above the Design Matrix table. Click
this button to view/edit the options provided by the technique. For information
about Engineous-supplied DOE techniques that provide these options, see “Setting
Technique-Specific Options,” on page 58.
4. Select how you want the design matrix displayed using the Show button. You can
display the design matrix as values (the actual values to be set at execution) or
levels (the level number as generated by the technique algorithm).
5. Perform any of the following actions, as desired, to manipulate the design matrix:
Deactivate a single design point. Click any row number to deactivate that
design point so that it will not be executed (a skip icon replaces the row
number). You can also click the skip button at the top of the design matrix, or
right-click the design matrix and select the Skip selected design point(s)
option from the menu that appears.
Deactivate all design points. To deactivate all design points in the design
matrix, highlight all of the design points and click the skip button. You can
also right-click the design matrix and select Skip all design points from the
menu that appears.
Note: Deactivating design points is useful when it is known that a specific set
of values in the design matrix represents a design that cannot be evaluated for
whatever reason.
Activate a design point. To activate a design point that has been set to be
skipped, click the skip icon that represents the design point. The row number
reappears. You can also click the activate button at the top of the design
matrix, or right-click the design point and select Activate selected design
point(s) from the menu that appears.
Activate all design points. To activate all of the design points in the design
matrix, highlight all of the design points and click the activate button. You can
also right-click the design matrix and select Activate all design points from
the menu that appears.
Copy the design matrix. This option copies the design matrix to the clipboard
so that it can be pasted elsewhere (for example, in a text file). This option can
also be accessed from the menu that appears when you right-click the design matrix.
Save the design matrix. Saves the design matrix to a file. This option can also
be accessed from the right-click menu.
1. Click the Post Processing tab. The contents of the tab appear.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the output parameters you want to define as responses by clicking the check
box that corresponds to the parameter. To add all the selected parameters as
responses, click the check button at the bottom of the tab. If no parameters are
selected, you will be prompted to add all parameters as responses. To deselect all
the parameters, click the uncheck button.
If you select an array as a response, a single entry represents the entire array in
the model; it is expanded during execution to define a response for each
element in the array. You can override the attributes for each individual element by
selecting them separately.
Note: Although the number of responses specified has no effect on the cost of
executing the experiments, and only slightly affects the cost of analyzing the
results, you should specify as responses only the key quantities that directly
relate to the performance/quality of the product. Doing so eases the
interpretation of results when making design decisions.
Note: You can also set response options using the Edit... button at the bottom of
the editor. For more information, see “Editing Attributes for Multiple Parameters,”
on page 57.
Calculate basic statistics. This option calculates the minimum, maximum,
mean, standard deviation, and range of all of the data for each factor or
response.
Perform regression analysis. This option performs linear regression analysis
to determine the main effects of each factor on each response.
Write experiment data to a file. This option allows you to write all of the
DOE run data to the specified file after execution. Use the Browse... button to
specify this file. Click the Output file parameter ‘Results.Data Set’ check
box to also provide this data as an output file parameter (called “Data Set”) in
the DOE Results output parameter.
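As a rough sketch of the first two post-processing options (assumed formulas for illustration, not Isight's implementation), basic statistics summarize each data column, and a main effect can be estimated as the slope of a simple linear regression of a response on a factor:

```python
from statistics import mean, stdev

def basic_statistics(values):
    """Minimum, maximum, mean, standard deviation, and range of a column."""
    return {"min": min(values), "max": max(values), "mean": mean(values),
            "stdev": stdev(values), "range": max(values) - min(values)}

def main_effect(factor, response):
    """Slope of the least-squares line of `response` on `factor` -- a
    one-factor stand-in for the main effect found by regression analysis."""
    fx, fy = mean(factor), mean(response)
    num = sum((x - fx) * (y - fy) for x, y in zip(factor, response))
    den = sum((x - fx) ** 2 for x in factor)
    return num / den

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]        # response happens to be 2 * factor
stats = basic_statistics(y)
effect = main_effect(x, y)
print(stats["range"])           # 6.0
print(effect)                   # 2.0
```

A larger main-effect magnitude indicates a factor with a stronger influence on that response.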
6. Click OK to close the editor and save your changes. Click Apply to save your
changes, but keep the editor open.
3. To map a technique option to a parameter, click the value in the Value column for
the technique option whose value you want to map. The value is highlighted.
4. Right-click the value; then, select the Map this value to a parameter option. The
Select a Parameter dialog box appears prompting you to enter the name of a
parameter to which you want to map.
5. Type a name for the parameter (by default, the technique option name is used);
then, click OK.
Once mapped, a mapping icon appears next to the technique option’s value. You can
click this icon to view or change the parameter name. You can also right-click on
the setting again to remove the mapping.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
6. Click Apply to save your changes; then, proceed to step 15.
Execute subflow only once (you need to click the Advanced Options... button
to gain access to this option)
8. Select the Map this value to a parameter option. The Select a Parameter dialog
box appears prompting you to enter the name of a parameter to which you want to
map.
9. Type a name for the parameter (by default, the execution option name is used);
then, click OK. Once mapped, a mapping icon appears next to the execution option.
You can click this icon to view or change the parameter name. You can also
right-click on the setting again to remove the mapping.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
10. Click Apply to save your changes; then, proceed to step 15.
11. To map a factor or response attribute to a parameter, click the appropriate tab on
the component editor.
If you want to apply the mapping to only the selected parameter, select the
Map <Attribute_Name> to a parameter for selected option.
If you want to apply the mapping to all the parameters, select the Map
<Attribute_Name> to a parameter for all option.
A mapping icon appears next to the attribute’s value. You can click this icon to
view or change the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as member(s) of a special aggregate parameter called Mapped
Attributes and Options.
Note: You can remove these mappings at any time. Simply right-click the
appropriate parameter attribute; then, select the Remove mapping of
<Attribute_Name> to a parameter for selected or Remove mapping of
<Attribute_Name> to a parameter for all option, depending on how you
originally mapped the attribute.
If a mapped attribute applies to a factor or response that is an array, make sure
you construct the model in such a way that the mapped attribute array parameter
is resized in sync with the factor/response array. If this is not done, warning
messages will be sent to your job log.
14. Click OK to close the component editor and return to the Design Gateway.
16. Locate the new special aggregate parameter called Mapped Attributes and
Options; then, click the expand icon to open the parameter. The new parameters
appear; they will be mapped into the appropriate attributes/options at the
beginning of execution.
1. Select the parameters you want to add/remove/edit on either the Factors or Post
Processing tab.
To add all the selected parameters as factors, click the Check button (or the
corresponding check button on the Post Processing tab) at the bottom of the tab.
If no parameters are selected, you will be prompted to add all parameters as
factors. To deselect all the parameters, click the Uncheck button (or the
corresponding uncheck button on the Post Processing tab).
2. Click the Edit... button (or the corresponding edit button on the Post
Processing tab) at the bottom of the component editor. The Edit dialog box
appears. In the following example, a parameter on the Factors tab is being edited.
3. Update the listed values, as desired. Only options with defined values appear on
this dialog box.
4. Click OK. The values are updated for all the parameters that were selected.
Alpha. The Lower and Upper levels specify the two levels at which a 2-level
full-factorial study is performed. The center point is also studied. The Alpha
option is a ratio defining two other points (also known as “star points”) at
which to study the given factor. For example, Alpha set to 0.25 indicates the
factor is to be studied at points 1/4 of the way from the baseline to the lower
and upper levels. Alpha set to 1.6 indicates the factor is to be studied 60%
beyond the lower and upper levels. For example, Lower = 5, Upper = 20,
Baseline = 10, and Alpha = 0.25 results in Levels = {5, 8.75, 10, 12.5, 20}.
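The Alpha arithmetic above can be reproduced with a small sketch (hypothetical function name, not the Isight API). It returns the set of levels studied for one factor: the two full-factorial levels, the center (Baseline) point, and the two star points placed Alpha of the way from the Baseline toward each level:

```python
def ccd_levels(lower, upper, baseline, alpha):
    """Levels studied for one factor: Lower, Upper, the Baseline, and two
    star points located `alpha` of the way from the baseline toward each
    of the lower and upper levels."""
    star_low = baseline + alpha * (lower - baseline)
    star_high = baseline + alpha * (upper - baseline)
    return sorted({lower, star_low, baseline, star_high, upper})

levels = ccd_levels(5.0, 20.0, 10.0, 0.25)
print(levels)   # [5.0, 8.75, 10.0, 12.5, 20.0]
```

This reproduces the worked example in the text: with Lower = 5, Upper = 20, Baseline = 10, and Alpha = 0.25, the star points fall at 8.75 and 12.5.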
Return to step 5 on page 44 for information on using the other options and tabs on the
DOE component editor.
Technique options:
File. The first step in setting Data File technique options is to select the file
that you want to use with the technique. Click the Browse... button to specify
the file that contains the values you want to use. The Open dialog box appears.
Select the file to be used.
Note: A file parameter will be added to the component to represent this file,
allowing you to map in a completely new file at runtime.
Use Header Row. Select this check box if your data file has a header row
containing the factor names. Specify which row contains the names of your
factors by entering its corresponding number.
Data Starts on Row. Specify the row in your file where the data begins by
entering its corresponding number. If you are using a header row, the specified
row must be greater than the header row.
Scan for Parameters.... Click this button to scan the specified file for
parameters to define. Names that are found will be presented to select as
factors (if specified as inputs) or responses (if specified as outputs). Any
names found that are already defined as parameters will not be presented in the
list.
File contains responses. Select this check box if the file also contains the
response (output) values for each design point. This feature is useful when you
have a complete data set (input and output values) already compiled and simply
want to use DOE as a post-processing tool.
Data is transposed. (Advanced Options...) Select this check box if you want
to use a data file in which each row represents a set of values for the parameter
whose name is the first item in that row (thus, the file is “transposed” from the
standard format in which each column represents a parameter and its values).
A transposed data file must have the parameter name as the first item in the
row (enclosed in quotes if spaces exist), and each row must have the same
number of values in it.
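The transposed layout described above (each row begins with a parameter name, quoted if it contains spaces, followed by that parameter's values) can be read with a sketch like this; the function name and the space-delimited assumption are illustrative, not Isight's actual parser:

```python
import csv
import io

def read_transposed(text):
    """Read a transposed data file into a mapping of
    parameter name -> list of values."""
    reader = csv.reader(io.StringIO(text), delimiter=" ",
                        skipinitialspace=True)
    data = {}
    for row in reader:
        if not row:
            continue
        # First item is the parameter name (csv handles the quoting);
        # the rest of the row is that parameter's values.
        data[row[0]] = [float(v) for v in row[1:]]
    return data

sample = '"plate thickness" 1.0 2.0 3.0\nradius 5.0 5.0 10.0\n'
table = read_transposed(sample)
print(table["plate thickness"])   # [1.0, 2.0, 3.0]
```

Note that every row must carry the same number of values, as the text requires; a production reader would validate that.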
Factor options:
Column. If you have specified not to use a header row to find factors in the
file, you must provide the column number for each factor.
Return to step 5 on page 44 for information on using the other options and tabs on the
DOE component editor.
Technique options:
Number of Points. Enter the number of points that you want to study. For
each defined factor, the levels are defined by uniformly distributing this
number of points between specified lower and upper values. These levels are
then randomly combined with levels of the other factors to define each design
point to execute.
Use a fixed seed. If desired, specify a fixed seed to use for determining the
random combinations of factor levels.
Note: A fixed seed can also be set and used for the entire model. If a fixed seed
is set for the model and also defined separately in DOE, then the one in DOE
will be used. If a fixed seed is set for the model and this option is not selected,
then the sequence of random numbers will be based on the model’s random
seed (and thus is still reproducible). For information on setting a fixed seed,
refer to the Isight User’s Guide.
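The behavior described above (levels uniformly distributed between each factor's bounds, randomly combined across factors, reproducible under a fixed seed) can be sketched as follows; the names are illustrative, not the Isight API:

```python
import random

def random_combinations(factors, n_points, seed=None):
    """For each factor, distribute `n_points` levels uniformly between its
    (lower, upper) bounds, then randomly combine levels across factors to
    form each design point. A fixed `seed` makes the design reproducible."""
    rng = random.Random(seed)
    levels = {}
    for name, (lo, hi) in factors.items():
        step = (hi - lo) / (n_points - 1)
        column = [lo + i * step for i in range(n_points)]
        rng.shuffle(column)   # random combination with the other factors
        levels[name] = column
    return [{name: levels[name][i] for name in factors}
            for i in range(n_points)]

design = random_combinations({"x": (0.0, 1.0), "y": (10.0, 20.0)}, 5, seed=42)
repeat = random_combinations({"x": (0.0, 1.0), "y": (10.0, 20.0)}, 5, seed=42)
```

Because both calls use the same seed, `design` and `repeat` are identical, which is the reproducibility property the fixed-seed option provides.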
Return to step 5 on page 44 for information on using the other options and tabs on the
DOE component editor.
Technique options:
Number of Points. Enter the number of points that you want to study. For
each defined factor, the levels are defined by uniformly distributing this
number of points between specified lower and upper values. These levels are
then randomly combined with levels of the other factors to define each design
point to execute.
Max Time to Optimize (minutes). Enter the maximum amount of time (in
minutes) that the component may spend optimizing the design matrix. Once this
time limit is reached, the optimization stops.
Use a fixed seed. The seed can be fixed by clicking this check box and
specifying the seed manually in the corresponding text box. If this check box is
not activated, the seed is determined randomly.
Note: A fixed seed can also be set and used for the entire model. If a fixed seed
is set for the model and also defined separately in DOE, then the one in DOE
will be used. If a fixed seed is set for the model and this option is not selected,
then the sequence of random numbers will be based on the model’s random
seed (and thus is still reproducible). For information on setting a fixed seed,
refer to the Isight User’s Guide.
Return to step 5 on page 44 for information on using the other options and tabs on the
DOE component editor.
Technique options:
The Orthogonal Array technique allows you to select one of the following options
from the Array drop-down list to specify the array size (and thus, the resolution) to
use for the experiment:
Only those arrays that are appropriate for the current number of factors (and
number of levels) will appear in this list.
Note: The Orthogonal Array technique will allow you to specify a mixed number
of levels for the factors, and will automatically use the appropriate array to
accommodate your settings, possibly modifying the basic structure of the array to
ensure orthogonality. You can always manually choose a larger array, if desired.
The following options are available by clicking the Options... button on the Design
Matrix tab when Orthogonal Array is selected.
When using Orthogonal Arrays, the factors are automatically assigned to columns
in a way that minimizes confounding with interaction effects. Preference is given
to factors in the order in which they are listed in the dialog. Additionally, you may
define specific columns for factors to be assigned to and an attempt will be made to
assign to those columns exactly (though it may fail if a specified column is not
appropriate for a given factor).
Return to step 5 on page 44 for information on using the other options and tabs on the
DOE component editor.
Technique options:
Run baseline point. Select this option if you want an additional point to be
executed in which all factors are set to their Baseline values. Including a
baseline design point allows the Parameter Study technique to be used
exactly as a simple finite-differencing technique for sensitivity calculations
(i.e., each factor is independently varied by some small difference from a
baseline design, and its effect is measured against that baseline).
Return to step 5 on page 44 for information on using the other options and tabs on the
DOE component editor.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select
DOE.
The following preference options are available with the DOE component:
3. Click OK to save your changes and close the Preferences dialog box.
The API for the DOE component consists of the following method:
get("DOEPlan"). This returns a DOEPlan object from the
com.engineous.sdk.designdriver.plan package. From that entry point you have access
to numerous methods for setting/configuring the technique, adding/editing factors and
responses, and configuring execution options. For more information on configuring the
DOE component, refer to the javadocs for the DOEPlan interface in your
<Isight_install_directory>/javadocs directory.
For. This loop type is used to execute subflows while continuously incrementing
the value of a selected parameter.
For Array. This loop type is used to execute subflows for each element of one or
more arrays, and accumulate subflow results into one or more arrays.
For Each. This loop type is used to execute subflows while iterating through a list
of values for a selected parameter.
While. This loop type is used to execute subflows as long as a specified condition
is met.
Do Until. This loop type is used to execute subflows until a condition is satisfied.
The number of executions is not constant.
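The key difference between the While and Do Until types is when the condition is tested. This sketch (hypothetical helpers, not the Isight component) makes the distinction concrete: While tests before each run and may execute the subflow zero times, whereas Do Until runs first and tests afterward, so it always executes at least once:

```python
def while_loop(subflow, condition, state):
    """While: the condition is tested before each run; the subflow may
    execute zero times."""
    runs = 0
    while condition(state):
        state = subflow(state)
        runs += 1
    return runs

def do_until(subflow, condition, state):
    """Do Until: the subflow runs first and the condition is tested
    afterward, so it always executes at least once."""
    runs = 0
    while True:
        state = subflow(state)
        runs += 1
        if condition(state):
            return runs

step = lambda x: x + 1
zero_runs = while_loop(step, lambda x: x < 0, 5)   # condition false up front
one_run = do_until(step, lambda x: x >= 0, 5)      # always runs once
```

Starting from the same state, the While loop executes zero times and the Do Until loop executes once.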
Important: This component is subject to certain limitations. You should review these
limitations before attempting to create a loop. For more information on these
limitations, see “Understanding Loop Limitations,” on page 81.
The available loop types differ in the way the conditions are expressed. This section
describes the options available for each type of loop.
3. Select a parameter using the Parameter drop-down list, or create a new parameter
using the add button.
Note: Only parameters of type Real or Integer can be used in a For loop.
The selected parameter's value is altered during every run from a starting value
(the From option) to a final value (the To option) incremented in steps with a
specified value (the Increment option).
Since the parameters in the list might be from other components, you can sort the
parameters in the list by component. Right-click in the parameter text box, and
then select Group Parameters. This action sorts the parameters in the list by
component. The setting is now saved in the preferences for this component. There
is a Preference option to set the initial default for this option for all applicable
component editors. You may also choose to not group parameters for the currently
selected component. For more information on these Preference options, refer to the
Isight User’s Guide.
4. Select Parameter or Constant using the From button; then, based on your
selection, either select a parameter using the corresponding drop-down list, or enter
a constant in the adjacent text box. This setting represents your starting value.
Note: If you choose to select a parameter, an add button appears. Click this
button to add a new parameter.
5. Select Parameter or Constant using the To button; then, based on your selection,
either select a parameter using the corresponding drop-down list, or enter a
constant in the adjacent text box. This setting represents your final value.
Note: If you choose to select a parameter, an add button appears. Click this
button to add a new parameter.
6. Select Parameter or Constant using the Increment button; then, based on your
selection, either select a parameter using the corresponding drop-down list, or enter
a constant in the adjacent text box.
The Increment value, if the Constant option is selected, can be either a positive
value or a negative value. Zero (0) is not a valid value for this option.
Note: The Increment value can be a real or integer value.
Note: If you choose to select a parameter, an add button appears. Click this
button to add a new parameter.
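The From/To/Increment semantics described in the steps above can be sketched as follows (a hypothetical helper, not the Isight component): the parameter starts at From, is incremented by Increment each run, and the loop stops once the value passes To. The increment may be real or integer, positive or negative, but never zero:

```python
def for_values(start, stop, increment):
    """Values a For loop assigns to its parameter: from `start` toward
    `stop` in steps of `increment` (positive or negative, never zero)."""
    if increment == 0:
        raise ValueError("Increment of 0 is not valid")
    values, v = [], start
    while (increment > 0 and v <= stop) or (increment < 0 and v >= stop):
        values.append(v)
        v += increment
    return values

up = for_values(1, 10, 2.5)     # counting up with a real increment
down = for_values(10, 1, -3)    # counting down with a negative increment
print(up)                       # [1, 3.5, 6.0, 8.5]
print(down)                     # [10, 7, 4, 1]
```

Note that with a real increment the final value of To is reached only if the step divides the range evenly; otherwise the last executed value falls short of To, as in the first example.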
7. Click the Action when a run fails drop-down list to determine the action Isight
should take if a subflow fails. If Fail Loop is selected, the Loop component will
fail when the subflow fails. If Ignore run and Continue is selected, the run
continues after a failed subflow. This option applies to both parallel and sequential
execution.
8. Set the Execute all iterations in parallel option using the corresponding check
box. If selected, all of the subflows will be submitted for execution simultaneously
(in parallel). If not selected, the subflows are executed one after the other
(sequentially).
9. Select OK to close the editor, save your changes, and return to the Design
Gateway.
Note: For information on how parameters other than the arrays behave for
sequential and parallel For loops, see “Understanding Parallel Loop Execution,” on
page 80.
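The From, To, and Increment settings above together define the sequence of iteration values. The rule can be sketched in plain Python (illustrative only; this is not the Isight API, and the function name is hypothetical):

```python
def for_loop_values(start, stop, step):
    """Yield the values a For loop would assign to its iteration
    parameter, given From (start), To (stop), and Increment (step).
    The increment may be positive or negative, but never zero."""
    if step == 0:
        raise ValueError("Increment must be nonzero")
    value = start
    # A positive step counts up to the final value; a negative step counts down.
    while (step > 0 and value <= stop) or (step < 0 and value >= stop):
        yield value
        value += step

print(list(for_loop_values(1, 5, 1)))      # 1, 2, 3, 4, 5
print(list(for_loop_values(10, 0, -2.5)))  # counts down in steps of 2.5
```

Note that the increment may be a real value, so the final value is included only if the sequence lands on it exactly.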
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide.
2. Click the Loop Type button; then, select the For Array option.
3. Select an array parameter using the Array Parameter drop-down list, or create a
new array parameter using the button.
Since the parameters in the list might be from other components, you can sort the
parameters in the list by component. Right-click in the parameter text box, and
then select Group Parameters. This action sorts the parameters in the list by
component. The setting is now saved in the preferences for this component. There
is a Preference option to set the initial default for this option for all applicable
component editors. You may also choose to not group parameters for the currently
selected component. For more information on these Preference options, refer to the
Isight User’s Guide.
4. Select the mapping direction of the array parameter using the corresponding
button. The following options are available:
To and from subflow. The value of the array element is copied into the scalar
variable before the subflow runs, and the value of the scalar variable is
copied back into the array element. This setting is the default for an array with
mode in/out.
From subflow. The value of the scalar variable is copied into an element of the
array after the subflow runs. This setting is the default for an output array
parameter.
5. Select the subflow parameter using the Subflow Parameter drop-down list, or
create a new scalar parameter using the button.
6. Click the button to add your selection to the table in the center of the
editor. The loop will iterate through these values.
Note: You can delete an item from the table by selecting the item and clicking the
button.
By default, the loop executes once for each element of the array. If there is only
one array, or all of the arrays are the same size, the behavior is straightforward. If
there are several arrays of different sizes, the loop executes once for each element
of the smallest array. The extra elements in the larger arrays are not used or
modified.
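The smallest-array rule can be pictured with Python's zip, which likewise stops at the shortest sequence (an illustration of the behavior only, not Isight code):

```python
a = [1, 2, 3, 4, 5]   # input array of size 5
b = [10, 20, 30]      # input array of size 3 (the smallest)

# The loop runs once per element of the smallest array; extra
# elements of the larger arrays are neither read nor modified.
runs = list(zip(a, b))
print(len(runs))  # 3 runs
print(runs)       # [(1, 10), (2, 20), (3, 30)]
```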
7. (optional) Repeat step 3 through step 6 for additional arrays. There is no limit to
the number of arrays that can be mapped by one For Array loop.
8. Click the Design Matrix button to view the values that will be used for each
subflow run. The For Array Design Matrix dialog box appears.
Only the input and in/out mappings are shown. The first column is the run number,
and the other columns are the values of the scalar subflow variables for that run.
9. Click OK to close the dialog box and return to the Loop component editor.
10. Click the Limit the number of loops check box if you want to limit the number of
loops. You would normally use this option if some earlier step in the workflow
filled in only part of the array(s) and you want to process only those parts.
The number of times the loop runs can be set to a constant or to the value of an
input integer parameter. Once you have selected the check box, use the
corresponding drop-down list to choose a constant or parameter, and then fill in the
constant value or the name of the input parameter. You can use the button to
create a new parameter as the loop limit. Only input integer parameters are
allowed.
11. Click the Action when a run fails drop-down list to determine the action Isight
should take if a subflow fails. If Fail Loop is selected, the Loop component will
fail when the subflow fails. If Ignore run and Continue is selected, the run
continues after a failed subflow, but the run is ignored and no results are returned
for it. This option applies to both parallel and sequential execution.
12. Set the Execute all iterations in parallel option using the corresponding check
box. If selected, all the subflows will be submitted for execution simultaneously
(in parallel). If not selected, the subflows are executed one after the other
(sequentially).
13. Select OK to close the editor, save your changes, and return to the Design
Gateway.
Note: For information on how parameters other than the arrays behave for sequential
and parallel For Array loops, see “Understanding Parallel Loop Execution,” on
page 80. The output arrays are correctly assigned values from the subflow for both
sequential and parallel execution.
Note: In the subflow of a For Array loop, the array index being processed can be
obtained from the parameter “Run #”. You will need to adjust for the fact that the run
counter starts at 1, but the array indexes start at 0. It may be convenient to include a
calculator component in the subflow with the following expression:
N = Run-1
then, map parameter “Run #” in the loop to parameter “Run” in the calculator.
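The adjustment described in this note can be sketched as follows (plain Python standing in for the calculator expression; the function name is hypothetical):

```python
def array_index(run_number):
    # "Run #" starts at 1, while array indexes start at 0,
    # so the calculator expression N = Run - 1 shifts by one.
    return run_number - 1

print(array_index(1))  # 0 (the first run processes element 0)
print(array_index(5))  # 4
```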
Note: Array parameters can vary in size at runtime. If a resizable input or in/out array
is used in a For Array loop, the actual size at runtime will be used to determine the
number of loops. If a resizable output array is used, its size will be adjusted at runtime
to match the number of loops.
Note: Arrays of all data types are supported, including arrays of Files.
The value list can contain any set of values; the values need not be evenly spaced.
String and Boolean data types can be used as well as Integers and Reals.
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide.
2. Click the Loop Type button; then, select the For Each option. The editor is
updated to show the loop’s options.
3. Select a parameter using the Parameter drop-down list, or create a new parameter
using the button.
Note: Parameters of type Real, Integer, String, and Boolean can be used in a For
Each loop.
On each run, the selected parameter takes its value from the values list, which
you define in the next step.
Since the parameters in the list might be from other components, you can sort the
parameters in the list by component. Right-click in the parameter text box, and
then select Group Parameters. This action sorts the parameters in the list by
component. The setting is now saved in the preferences for this component. There
is a Preference option to set the initial default for this option for all applicable
component editors. You may also choose to not group parameters for the currently
selected component. For more information on these Preference options, refer to the
Isight User’s Guide.
4. Select Parameter or Constant from the button in the middle of the editor; then,
based on your selection, either select a parameter using the corresponding
drop-down list, or enter a constant in the adjacent text box.
Note: All values in the list must match the data type of the selected iteration
parameter. For example, Boolean parameter types will only accept Boolean values.
5. Click the button to add your selection to the Values List in the center of
the editor. The loop will iterate through these values.
Note: You can delete an item from the Values List by selecting the item and
clicking the button.
6. Repeat step 4 and step 5 for each item you want to add to the Values List.
8. Click the Action when a run fails drop-down list to determine the action Isight
should take if a subflow fails. If Fail Loop is selected, the Loop component will
fail when the subflow fails. If Ignore run and Continue is selected, the run
continues after a failed subflow. This option applies to both parallel and sequential
execution.
9. Set the Execute all iterations in parallel option using the corresponding check
box. If selected, all of the subflows will be submitted for execution simultaneously
(in parallel). If not selected, the subflows are executed one after the other
(sequentially).
10. Select OK to close the editor, save your changes, and return to the Design
Gateway.
Note: For information on how parameters other than the arrays behave for
sequential and parallel For Each loops, see “Understanding Parallel Loop
Execution,” on page 80.
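The type-matching rule for the Values List (step 4) amounts to a check like the following sketch (plain Python, not the Isight implementation; names are illustrative):

```python
def add_to_values_list(values_list, value, param_type):
    """Append value only if it matches the iteration parameter's type.
    bool is checked first because in Python bool is a subclass of int."""
    if param_type is not bool and isinstance(value, bool):
        raise TypeError("Boolean value in a non-Boolean list")
    if not isinstance(value, param_type):
        raise TypeError(f"{value!r} does not match {param_type.__name__}")
    values_list.append(value)

values = []
add_to_values_list(values, 3.5, float)
add_to_values_list(values, 1.0, float)
print(values)  # [3.5, 1.0]
# add_to_values_list(values, 1, float) would raise: an Integer
# cannot be added to a Real value list.
```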
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide.
2. Click the Loop Type button; then, select the While option. The editor is updated
to show the loop’s options.
Now you need to create an expression to add to the Conditions area (at the bottom
of the editor) using constants, parameters, or both. This component will execute
the subflows as long as the expression evaluates to true.
Complex expressions can be created using logical AND and OR operators, and
logical conditions can be grouped with parentheses.
3. Select a parameter using the Parameter drop-down list, or create a new parameter
using the button.
Since the parameters in the list might be from other components, you can sort the
parameters in the list by component. Right-click in the parameter text box, and
then select Group Parameters. This action sorts the parameters in the list by
component. The setting is now saved in the preferences for this component. There
is a Preference option to set the initial default for this option for all applicable
component editors. You may also choose to not group parameters for the currently
selected component. For more information on these Preference options, refer to the
Isight User’s Guide.
4. Click the Condition button; then, select your desired condition from the list that
appears.
5. Select Parameter or Constant from the button below the Condition button; then,
based on your selection, either select a parameter using the corresponding
drop-down list, or enter a constant in the adjacent text box.
6. Click the button to add your selection to the Conditions area in the center
of the editor.
Note: You can delete a condition from the Conditions area by selecting the
condition and clicking the button.
7. Repeat step 3 through step 6 for each condition you want to add to the Conditions
area.
8. In the Conditions area, click the AND/OR column for any condition except the
first one listed; then, select AND or OR from the menu that appears based on the
type of condition you are creating. By default, once a second condition is added to
the list, the AND operator is selected.
9. Click in the open and close parenthesis columns to add single or double
parentheses that group AND and OR clauses.
10. Use the Up button and Down button to rearrange the conditions,
if necessary.
11. Click the Maximum number of iterations check box; then, set the maximum
number of iterations in the corresponding text box. In some cases your
conditional expression may never be satisfied; limiting the number of iterations
guarantees that the loop eventually stops.
12. Select OK to close the editor, save your changes, and return to the Design
Gateway.
Note: While (and Do Until) loops cannot be executed in parallel, because the
condition must be checked after each execution of the subflow.
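The Conditions table built in steps 4 through 9 effectively assembles a Boolean expression from rows joined by AND/OR and grouped by parentheses. A sketch of how such a table could be evaluated (plain Python; the row layout and function names are assumptions for illustration, not Isight's internal representation):

```python
import operator

# Each row: (open_parens, parameter_name, operator, operand, close_parens, connector)
# The connector ("AND"/"OR") on a row joins it to the PREVIOUS row, as in step 8.
OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
       ">=": operator.ge, "==": operator.eq, "!=": operator.ne}

def evaluate_conditions(rows, params):
    # Assemble a textual Boolean expression mirroring the table, then evaluate it.
    parts = []
    for i, (opens, name, op, operand, closes, connector) in enumerate(rows):
        if i > 0:
            parts.append(connector.lower())          # "and" / "or"
        result = OPS[op](params[name], operand)      # one comparison per row
        parts.append("(" * opens + str(result) + ")" * closes)
    return eval(" ".join(parts))

rows = [
    (1, "x", ">", 0, 0, None),     # ( x > 0
    (0, "y", "<", 10, 1, "AND"),   #   AND y < 10 )
    (0, "z", "==", 1, 0, "OR"),    # OR z == 1
]
print(evaluate_conditions(rows, {"x": -1, "y": 5, "z": 1}))  # True, via the OR clause
```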
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide.
2. Click the Loop Type button; then, select the Do Until option.
Now you need to create an expression to add to the Conditions area (at the bottom
of the editor) using constants, parameters, or both. This component will execute
the subflows until the expression evaluates to true.
Complex expressions can be created using logical AND and OR operators, and
logical conditions can be grouped with parentheses.
3. Select a parameter using the Parameter drop-down list, or create a new parameter
using the button.
Since the parameters in the list might be from other components, you can sort the
parameters in the list by component. Right-click in the parameter text box, and
then select Group Parameters. This action sorts the parameters in the list by
component. The setting is now saved in the preferences for this component. There
is a Preference option to set the initial default for this option for all applicable
component editors. You may also choose to not group parameters for the currently
selected component. For more information on these Preference options, refer to the
Isight User’s Guide.
4. Click the Condition button; then, select your desired condition from the list that
appears.
5. Select Parameter or Constant from the button below the Condition button; then,
based on your selection, either select a parameter using the corresponding
drop-down list, or enter a constant in the adjacent text box.
6. Click the button to add your selection to the Conditions area in the center
of the editor.
Note: You can delete a condition from the Conditions area by selecting the
condition and clicking the button.
7. Repeat step 3 through step 6 for each condition you want to add to the Conditions
area.
8. In the Conditions area, click the AND/OR column (the first column) for any
condition except the first one listed; then, select AND or OR from the menu that
appears based on the type of condition you are creating. By default, once a second
condition is added to the list, the AND operator is selected.
9. Click in the open and close parenthesis columns to add single or double
parentheses that group AND and OR clauses.
10. Use the Up button and Down button to rearrange the conditions,
if necessary.
11. Click the Maximum number of iterations check box; then, set the maximum
number of iterations in the corresponding text box. In some cases the
conditional expression may never be satisfied; limiting the number of iterations
guarantees that the loop eventually stops.
12. Select OK to close the editor, save your changes, and return to the Design
Gateway.
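The distinction from a While loop is the order of run and test: a Do Until loop runs the subflow first and stops once the condition becomes true, so the subflow always executes at least once. A minimal sketch of these semantics (plain Python, illustrative only):

```python
def do_until(subflow, condition, max_iterations=100):
    """Run subflow, then repeat until condition() is true
    (or the iteration cap from step 11 is reached)."""
    runs = 0
    while True:
        subflow()          # the subflow always runs at least once
        runs += 1
        if condition() or runs >= max_iterations:
            return runs

state = {"N": 0}
runs = do_until(lambda: state.update(N=state["N"] + 3),
                lambda: state["N"] >= 10)
print(runs, state["N"])  # 4 runs; N ends at 12
```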
When the iterations are run sequentially, each iteration sees the value of in/out
parameters as they are updated by previous iterations. When the iterations are run in
parallel, all iterations see the value the in/out parameter had when the Loop component
started, ignoring any changes made by other iterations. For example, assume a For
Loop that increments parameter i from 1 to 5, and has an in/out parameter n initially set
to 0. The subflow consists of a calculator that evaluates the expression: n = n+i.
If the loop is run sequentially, n will take on the values 1, 3, 6, 10, and 15 at the end of
each iteration. The final value of n is 15. If the iterations are run in parallel, each
subflow sees the input value of n as 0, so the output values are 1, 2, 3, 4, and 5, and the
final value of n is 5.
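The example above can be reproduced with a small simulation (plain Python, illustrating only the semantics described, not Isight internals):

```python
def run_loop(n_initial, i_values, parallel):
    """Simulate the subflow calculator n = n + i for each iteration value i.
    Sequential runs see n as updated by earlier iterations; parallel runs
    all see the value n had when the Loop component started."""
    n = n_initial
    outputs = []
    for i in i_values:
        seen = n_initial if parallel else n
        n = seen + i          # the calculator expression: n = n + i
        outputs.append(n)
    return outputs, n         # the final n is taken from the last run

print(run_loop(0, [1, 2, 3, 4, 5], parallel=False))  # ([1, 3, 6, 10, 15], 15)
print(run_loop(0, [1, 2, 3, 4, 5], parallel=True))   # ([1, 2, 3, 4, 5], 5)
```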
The component does not allow the selection of parameter types other than the
selected parameter's type for expression formation or value substitution. For
example, a For Each loop whose loop parameter is of type Real does not allow a
parameter of type Integer to be selected as one of its values; only another
parameter of type Real can be selected.
Proper loop execution requires that the parameters (iteration parameter in For and
For Each loops, and condition parameters in While and Do Until loops) are
properly mapped between the parent loop and its subflows. After creating a new
loop component, make sure that the mappings are created correctly.
In order for While and Do Until loops to function properly, all parameters that are
used in the conditions and that are expected to change their values during iterations
must be of mode Input/Output. In this case their value will be preserved from one
iteration to another by the loop component. For example, when creating a While
loop with the condition (N < 10) and a calculator in the subflow with the
expression N = N+1, you must ensure that parameter N is of mode Input/Output in
both the loop component and the calculator component. Otherwise, parameter N's
value will be reset to its initial value at every execution of the calculator, and the
loop will run indefinitely or until it reaches the maximum number of iterations.
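The effect of the parameter mode can also be simulated: with mode Input/Output the value of N carries across iterations, while with mode Input it is reset to its initial value on every execution (a sketch of the behavior described above, not Isight code):

```python
def run_while_loop(mode, initial_n=0, max_iterations=20):
    """Condition: N < 10.  Subflow calculator: N = N + 1."""
    n = initial_n
    iterations = 0
    while n < 10 and iterations < max_iterations:
        if mode == "Input":
            n = initial_n      # value reset at every execution of the calculator
        n = n + 1              # the calculator expression
        iterations += 1
    return n, iterations

print(run_while_loop("Input/Output"))  # (10, 10): N reaches 10 and the loop stops
print(run_while_loop("Input"))         # (1, 20): N never advances; the cap stops it
```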
For all loops except For Array, the values of all output or in/out parameters after
the loop finishes are taken from the last iteration (highest run number, not the last
iteration to finish). Only the output array(s) of a For Array loop allow the values of
other iterations to be mapped to subsequent components.
The API for the Loop component exposes the loop conditions; each condition
provides the following fields:
condition[i]->parameterName
condition[i]->operator
condition[i]->operand
condition[i]->operandType
“Opening the Editor and Using the General Tab,” on this page
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide.
The editor is divided into three tabs: General, Random Variables, and Responses.
The General tab offers setup options, and options related to the sampling
technique. The Random Variables tab allows for the selection and configuration of
the random variables. The Responses tab allows for the selection and configuration
of the response parameters.
2. Click the Sampling Technique button; then, select the technique you want to use
from the drop-down list that appears. The sampling technique’s options appear
directly below the Sampling Technique button, and information about the
technique appears in the Sampling Technique Description area on the right side of
the editor. Sampling techniques in the Monte Carlo component are implemented as
plug-ins; you can extend them by creating new plug-ins for new sampling
techniques. For more information, refer to the Isight Development Guide.
The sampling technique plug-ins currently available in Isight are the following:
Simple Random Sampling. This method is the default selection, and is the
most commonly used sampling method. Sample points for simulation are
generated randomly and independently.
Use a fixed seed. The seed can be fixed by clicking this check box and
specifying the seed manually in the corresponding text box. If this check box is
not activated, the seed is determined randomly.
Note: A fixed seed can also be set and used for the entire model. If a fixed seed
is set for the model and also defined separately in Monte Carlo, then the one in
Monte Carlo will be used. If a fixed seed is set for the model and this option is
not selected, then the sequence of random numbers will be based on the
model’s random seed (and thus is still reproducible). For information on
setting a fixed seed, refer to the Isight User’s Guide.
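The effect of a fixed seed can be illustrated with Python's standard-library generator (not the generator Isight uses): the same seed always reproduces the same sequence of sample points.

```python
import random

def sample_points(n, seed=None):
    # With a fixed seed, every run draws the identical sequence of
    # sample points; with seed=None the sequence differs run to run.
    rng = random.Random(seed)
    return [rng.uniform(0.0, 1.0) for _ in range(n)]

a = sample_points(5, seed=42)
b = sample_points(5, seed=42)
print(a == b)  # True: runs with the same fixed seed are reproducible
```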
4. Set the following options in the Execution Options area, as desired:
Execute Monte Carlo sample points in parallel. Select this option to execute
the sample points defined by the Monte Carlo simulation in parallel. Note that
the convergence check interval may limit parallel execution to batches of the
size specified for that interval.
After execution, reset to mean value point and run. With this option
selected (default), after execution of all Monte Carlo simulation points, the
random variables will be set to their mean values and the Monte Carlo subflow
will be executed one additional time. In this case, the Monte Carlo parameters
(inputs and outputs) will be left at the values calculated for this “mean value
point”. If this option is not selected, the Monte Carlo parameters after
execution will be left at the values associated with the last (random) Monte
Carlo simulation point.
Note: You can map these settings to parameters. For more information, see
“Mapping Options and Attributes to Parameters,” on page 94.
5. (optional) Click the Advanced Options... button; then, set either of the following
options:
Execute subflow only once. If selected, the subflow executes only one time.
This is useful in models that need to turn the driver logic on/off parametrically.
This option is also helpful in debugging the process.
Note: You can map this setting to a parameter. For more information, see
“Mapping Options and Attributes to Parameters,” on page 94.
Allow the following to be turned off by parameters. This option allows you
to deactivate (turn off) the selected types of design parameters (random
variables, responses) using input parameters to the Monte Carlo component.
If this option is selected, Boolean input parameters are created in an Active
Random Variables/Active Responses aggregate under the Mapped Options
and Attributes aggregate parameter. The parameters are selected by default;
any parameters that are not selected at runtime are not used during execution.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the parameters you want to use as random variables by clicking the check
box that corresponds to the parameter. To select all parameters, click the
button at the bottom of the tab. If no parameters are selected, you will be prompted
to add all parameters as random variables. To deselect all the parameters, click the
button. Once you select a random variable, its name is displayed in the
Distribution Information area, and the rest of the tab is activated.
4. Click the Correlation Matrix button if you want to use random variable
correlation to sample the random variable distributions. This feature induces the
required correlations on the given sample of random variables while preserving the
individual distributions. The Linear Correlation Matrix for Random Variables
dialog box appears.
5. Type the desired values (-1.0 to 1.0) in the white text boxes in the dialog box.
Note: To completely remove a number already present in a text box, press the
BACKSPACE key on your keyboard.
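What a correlation entry specifies can be illustrated for two standard normal variables: given a target correlation rho, the combination y = rho*x + sqrt(1 - rho^2)*z produces samples whose sample correlation with x is approximately rho. This stdlib-only sketch shows the idea; Isight's method additionally preserves the individual (possibly non-normal) distributions, which this sketch does not:

```python
import math
import random

def correlated_normals(n, rho, seed=1):
    """Draw n pairs of standard normals with correlation about rho."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        z = rng.gauss(0.0, 1.0)   # independent noise term
        xs.append(x)
        ys.append(rho * x + math.sqrt(1.0 - rho * rho) * z)
    return xs, ys

def sample_corr(xs, ys):
    """Pearson sample correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs, ys = correlated_normals(20000, rho=0.8)
print(round(sample_corr(xs, ys), 1))  # about 0.8
```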
7. Set any of the following options, some of which vary based on your distribution
selection:
Click the Distribution button to set the probability distribution option for the
random variable. Like sampling techniques, random variable distributions are
implemented as plug-ins used by the Monte Carlo component; you can extend
them by creating new plug-ins for new distributions.
Note: You can use the button to edit the information for multiple random
variables. For more information, see “Editing Attributes for Multiple Parameters,”
on page 100.
Note: You can map any setting to a parameter. For more information, see
“Mapping Options and Attributes to Parameters,” on page 94.
8. Review the preview graphs on the right side of the tab. These graphs are
automatically updated as you change the selected random variable's
distribution properties. A legend below the graph explains the color coding. The
graphs display the following information:
Probability Density. This graph shows the actual shape of the selected
distribution with regard to the probability density function.
Cumulative Distribution. This graph shows the actual shape of the selected
distribution with regard to the cumulative distribution function.
9. Set the Update random variable mean values to current parameter values
before execution option. This option, at the bottom of the tab, automatically
updates the mean values of all random variables to the current parameter values
in this component before the Monte Carlo component executes. By default this
option is deactivated and the configured settings are retained. If you want the
settings to change automatically to the current point when the Monte Carlo
component is executed, select this check box. This option is useful if the Monte
Carlo component is executed after another component, and parameter values are
taken from the previous component.
10. Proceed to “Using the Responses Tab,” on this page.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the output parameters you want to use as responses by clicking the
corresponding check boxes. To select all parameters, click the Check button at the
bottom of the tab. If no parameters are selected, you will be prompted to add all
parameters as responses. To deselect all the parameters, click the Uncheck button.
Lower Limit. If a response value is specified in the Lower Limit column for a
response, the probability of response values greater than that lower limit will
be calculated and reported after all simulations are complete.
Upper Limit. If a response value is specified in the Upper Limit column for a
response, the probability of response values less than that upper limit will be
calculated and reported after all simulations are complete.
Note: If both Lower and Upper limits are specified, the Total probability
between the limits is also reported. This response probability can be used to
characterize the reliability of that response with respect to remaining between
the limits. The total probability of failure with respect to the specified limits
can be determined by subtracting the total probability from one.
Percentile. If a percentile value (a value between 0 and 1) is specified in the
Percentile column for a response, the response value corresponding to that
percentile of the resulting response distribution will be reported after all
simulations are complete.
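The reported quantities can be sketched from a list of simulated response values (illustrative post-processing only; the function and key names are hypothetical):

```python
def response_statistics(samples, lower=None, upper=None, percentile=None):
    """Estimate the limit probabilities and percentile value
    from a list of simulated response values."""
    n = len(samples)
    stats = {}
    if lower is not None:
        stats["P(response > lower)"] = sum(1 for s in samples if s > lower) / n
    if upper is not None:
        stats["P(response < upper)"] = sum(1 for s in samples if s < upper) / n
    if lower is not None and upper is not None:
        # Total probability of staying between the limits; one minus this
        # is the probability of failure with respect to the limits.
        stats["P(between)"] = sum(1 for s in samples if lower < s < upper) / n
    if percentile is not None:
        # Response value at the requested percentile (a value between 0 and 1).
        ordered = sorted(samples)
        index = min(n - 1, int(percentile * n))
        stats["percentile value"] = ordered[index]
    return stats

samples = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(response_statistics(samples, lower=2, upper=9, percentile=0.5))
```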
Note: When you move your mouse pointer over or click a specific column, the
graph that represents that value is outlined on the right side of the tab. The graph
not only serves as a visual aid but also provides text information about the
individual settings.
Note: You can also set response options using the Edit... button at the bottom of
the editor. For more information, see “Editing Attributes for Multiple Parameters,”
on page 100.
5. (optional) Map any of these response attributes to parameters. For more
information, see “Mapping Options and Attributes to Parameters” on this page.
6. Click OK to close the editor and save your changes. Click Apply to save your
changes, but keep the editor open.
4. Right-click the value; then, select the Map this value to a parameter option.
The Select a Parameter dialog box appears, prompting you to enter the name of
the parameter to which you want to map.
5. Type a name for the parameter (by default, the technique option name is used);
then, click OK. Once mapped, the icon appears next to the technique option’s
value. You can click this icon to view or change the parameter name. You can also
right-click on the setting again to remove the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
6. Click Apply to save your changes; then, proceed to step 18.
7. Right-click any of the following execution options in the bottom left corner of the
editor:
Execute subflow only once (you need to click the Advanced Options... button
to gain access to this option)
8. Select the Map this value to a parameter option. The Select a Parameter dialog
box appears prompting you to enter the name of a parameter to which you want to
map.
9. Type a name for the parameter (by default, the execution option name is used);
then, click OK. Once mapped, the icon appears next to the execution option.
You can click this button to view or change the parameter name. You can also
right-click on the setting again to remove the mapping.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
10. Click Apply to save your changes; then, proceed to step 18.
12. Right-click in the Standard Deviation text box; then, click Map this value to a
parameter. The Select a Parameter dialog box appears prompting you to enter the
name of a parameter to which you want to map.
13. Type a name for the parameter (by default, the Standard Deviation option name is
used); then, click OK. Once mapped, the icon appears next to the tuning
parameter’s value. You can click this icon to view or change the parameter name.
You can also right-click on the setting again to remove the parameter name.
14. Click Apply to save your changes; then, proceed to step 18.
If you want to apply the mapping to only the selected parameter, select the
Map <Attribute_Name> to a parameter for selected option.
If you want to apply the mapping to all the parameters, select the Map
<Attribute_Name> to a parameter for all option.
The icon appears next to the tuning parameter’s value. You can click this icon
to view or change the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as member(s) of a special aggregate parameter called Mapped
Attributes and Options.
Note: You can remove these mappings at any time. Simply right-click the
appropriate parameter attribute; then, select the Remove mapping of
<Attribute_Name> to a parameter for selected or Remove mapping of
<Attribute_Name> to a parameter for all option, depending on how you
originally mapped the attribute.
Note: If an entire array is selected as a random variable/response (as opposed to
individual elements), a corresponding array will be created in the Mapped
Attributes and Options parameter with elements that will map to the
18. Click OK to close the component editor and return to the Design Gateway.
20. Locate the new aggregate parameter called Mapped Attributes and Options;
then, click the icon to expand the parameter. The new mappings appear.
To select all parameters, click the button (Check button on the Responses
tab) at the bottom of the tab. To deselect all the parameters, click the
button (Uncheck button on the Response tab).
2. Click the button (Edit... button on the Response tab) at the bottom of the
component editor.
The Edit dialog box appears. In the following example, a parameter on the
Random Variables tab is being edited.
3. Update the listed values, as desired. Only options with defined values appear on
this dialog box.
4. Click OK. The values are updated for all the parameters that were selected.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select
Monte Carlo.
3. Set the Default Sampling Technique preference option. This option determines
which sampling technique will automatically appear every time the component
editor for a newly inserted component is first opened.
4. Click OK to save your changes and close the Preferences dialog box.
The API for the Monte Carlo component consists of the following method:
get("MonteCarloPlan"). This returns a MonteCarloPlan object from the
com.engineous.sdk.designdriver.plan package. From that entry point you have access
to numerous methods for setting/configuring the sampling technique, adding/editing
random variables and responses, and configuring execution options. For more
information on configuring the Monte Carlo component, refer to the javadocs for the
MonteCarloPlan interface in your <Isight_install_directory>/javadocs directory.
Note: Some of the algorithms listed above may not be included in your installation,
depending on your license.
The list of available optimization algorithms includes the best algorithms from several
major classes of optimization algorithms, such as gradient-based numeric techniques,
direct search techniques, and exploratory techniques. This approach ensures that most
of the needs of the optimization design engineer are covered by this Optimization
component.
For detailed information on each of these techniques, as well as their parameters, see
the remaining topics in this section.
“Opening the Editor and Using the General Tab,” on page 105
[*] - A feasible design is always considered better than any infeasible design. As a
result, feasibility codes 2 and 3 end up being used only until the first feasible design is
evaluated.
The feasibility code values 4, 5, and 6 are not currently used. The code values used in
Isight have been chosen to coincide with the feasibility code values used in the earlier
product iSIGHT; within that product, those code values are used only by a particular
optimization technique which has not (as yet) been re-implemented for Isight.
Assigned feasibility codes are meaningful only for designs which have been
successfully evaluated. If execution of the Optimization component’s workflow failed
for some design, the value of the “Feasibility” member for that design should be
disregarded (most likely it will have the value 0, denoting an unusable design).
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide. The Component Editor dialog box appears.
Note: If you are connected to a Fiper ACS, and a message appears about some
optimization techniques not being published, you’ll need to publish to the ACS
Library if you plan on using these listed techniques. Execute the publishall
file located in the <Isight_install_directory>\bin\win32 directory (on UNIX/Linux,
this file is located in the <Isight_install_directory>/bin directory) from a
Command Prompt (Windows) or terminal window (UNIX/Linux). You need to
have Fiper administrator privileges to execute this file. When prompted, connect to
the ACS you are currently using with Isight and the optimization techniques are
published. You’ll have to close and re-open the Optimization component before
the newly published techniques become available.
The dialog box is divided into four tabs: General, Design Variables, Constraints,
and Objectives. The General tab is used for selecting the optimization technique
and configuring execution options (see step 2). The three remaining tabs are used
for specifying parameters as design variables, constraints, and objectives, as well
as setting their corresponding attributes.
2. Click the Optimization Technique button; then, select the technique you want to
use from the drop-down list that appears. The technique’s tuning parameters
appear directly below the Technique button, and information about the technique
appears in the Technique Description area on the right side of the editor.
Note: You can set the default selection for the optimization technique using the
component preferences as described in “Setting Optimization Component
Preferences,” on page 152.
3. Set the following execution options, if available, in the Execution Options area:
Execute in parallel. This option controls how the design points will be
executed during optimization. (This option is not available for all techniques.)
If selected, design points will be executed in parallel, in small batches. The
size of the batch depends on the number of selected Design Variables for
numerical techniques or the size of the population for exploratory techniques.
Restore optimum design point after execution. If this option is selected,
optimization will set parameters to the best design point after execution. If this
option is not selected, the optimization will keep all parameters at their last
values (last execution of the subflow) after execution.
Use automatic variable scaling. If this option is selected, the Scale Factor
column does not appear on the Design Variables page of the dialog, since the
values of scale factors will be automatically calculated.
Note: You must define all upper and lower bounds (or allowed values) for all
design variables when this option is selected. If some design variables do not
have bounds, you will not be able to use this option.
You can set default selection for this option using the component preferences
as described in “Setting Optimization Component Preferences,” on page 152.
Note: You can map these settings to parameters. For more information, see
“Mapping Options and Attributes to Parameters,” on page 118.
4. (optional) Click the Advanced Options... button on the Execution tab. The
Advanced Options dialog box appears with the Execution tab in front.
Execute subflow only once. If selected, the subflow executes only one time,
and constraints, objectives, and penalty values are calculated. This is useful in
models that need to turn the driver logic on/off parametrically. This option is
also helpful in debugging the process.
Note: You can map this setting to a parameter. For more information, see
“Mapping Options and Attributes to Parameters,” on page 118.
Allow the following items to be turned off by parameter. This option allows
you to disable some design parameters at runtime by using parameters. When
this option is selected, a set of special “switch” variables are created. By
changing the values of these special variables, you can disable their
corresponding design parameters.
7. Select any of the following termination criteria. If you select more than one
criterion, optimization will terminate whenever any of the criteria are met.
The number of subflow runs reaches. Enter the number of subflow runs that
should complete before optimization terminates.
The elapsed execution time for this component reaches. Enter the amount
of time that should elapse before optimization terminates. Time can be entered
in seconds, minutes, or hours.
The date and time reaches. Enter the date and time when optimization should
terminate. Enter the date and time in the following format:
6/12/07 11:20 AM
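As an illustration, this timestamp format (month/day/two-digit year, 12-hour clock with AM/PM) corresponds to the following strptime pattern in Python; the snippet is illustrative and not part of Isight:

```python
from datetime import datetime

# Parse the termination timestamp format shown above:
# month/day/two-digit year, 12-hour clock with an AM/PM marker.
deadline = datetime.strptime("6/12/07 11:20 AM", "%m/%d/%y %I:%M %p")
```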
8. Click the Parameter Termination tab. The contents of the tab appear.
9. Select the Terminate optimization when the following parameter criteria are
satisfied check box, if desired. The following options are available:
Criteria will be tested after the following number of initial subflow runs.
Enter the number of subflow runs that should complete before the criteria are
tested.
10. Select a parameter; then, enter the upper and/or lower threshold. When the
parameter crosses a threshold, optimization terminates. Click the Check button to
select all parameters. Click the Uncheck button to deselect all the parameters and
to clear the Terminate optimization when the following parameter criteria are
satisfied check box.
12. Review the information in the Optimization Technique Description area. For
additional information on the technique, see “Optimization Reference
Information,” on page 575.
13. Set the tuning parameter values for your technique in the Optimization Technique
Options area.
Note: You can map tuning parameter settings to parameters. For more information,
see “Mapping Options and Attributes to Parameters,” on page 118.
For more information on these options, see one of the following sections:
The list of parameters on this tab includes all parameters of modes Input and
In/Out from the Optimization component, and also parameters of the same modes
from the subflow components, if they are not mapped to any parameters of the
Optimization component.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Perform any of the following actions, which vary based on your model design:
You can also select all of the listed variables (including array elements) using
the Check button at the bottom of the tab.
Specify the lower bound for the variables in the corresponding column. If you
do not specify lower bound values, the default value -1e15 will be used. This
will result in a very large range for the design variables and may reduce the
chances of finding a good design. It is recommended that you specify lower
bounds for all variables. This setting is required if you are using the automatic
scaling component preference. For more information, see “Setting
Optimization Component Preferences,” on page 152.
Set the scale factor for the variable in the corresponding column. Scale factors
are used to bring variable values to the same order of magnitude to improve the
efficiency of the optimizers.
Note: This column does not appear if you use the automatic scaling component
preference. For more information, see “Setting Optimization Component
Preferences,” on page 152.
(Multi-Island Genetic Algorithm technique only) Set the value in the Gene
Size column. This value controls the number of bits N in all genes used for
encoding the value of each variable. Every bit of the gene can change its value
between 0 and 1. The total number of possible combinations in every gene is
then 2^N. This number of combinations determines the minimum change in
the value of any design variable during all genetic operations: take the
allowed range of values for a design variable, and divide it by the total number
of combinations. To increase the minimum change in design variable values
(i.e., to decrease the number of possible bit combinations when the allowed
range of design variable values is small), decrease the gene size.
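The resolution arithmetic above can be sketched as follows; the function name is illustrative, not part of Isight:

```python
def min_design_variable_change(lower, upper, gene_size_bits):
    """Minimum change in a design variable during genetic operations:
    the allowed range divided by the number of bit combinations (2**N)
    in the gene that encodes the variable."""
    combinations = 2 ** gene_size_bits
    return (upper - lower) / combinations

# A variable allowed to range over [0.0, 10.0], encoded with a 5-bit
# gene, changes in steps no smaller than 10 / 2**5 = 0.3125.
step = min_design_variable_change(0.0, 10.0, 5)
```

Decreasing the gene size halves the number of combinations per removed bit, which doubles this minimum step.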
Note: You can also set variable options using the Edit... button at the bottom of the
editor. For more information, see “Editing Attributes for Multiple Parameters,” on
page 123.
4. (optional) If desired, map any of these variable attributes to parameters. For more
information, see “Mapping Options and Attributes to Parameters,” on page 118.
1. Click the Constraints tab. The constraints for your problem are displayed.
The list of parameters on this tab includes all parameters of mode Output from the
Optimization component, and also parameters of mode Output from the subflow
components, if they are not mapped to any parameters in the Optimization
component. When any subflow parameter is selected as a constraint, a
corresponding parameter is created in the Optimization component.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Perform any of the following actions, which vary based on your model design:
Set the lower and upper bound of a constraint in the corresponding columns.
Note: You can also set constraint options using the Edit... button at the bottom of
the editor. For more information, see “Editing Attributes for Multiple Parameters,”
on page 123.
4. (optional) If desired, map any of these constraint attributes to parameters. For more
information, see “Mapping Options and Attributes to Parameters,” on page 118.
where LB is the lower bound value, UB is the upper bound value, T is the target
value, Parm is the parameter value, W is the weight factor, and S is the scale factor.
The list of parameters on this tab includes all parameters of mode Output from the
Optimization component, and also parameters of the same modes from the subflow
components, if they are not mapped to any parameters in the Optimization
component. Also, design variables (selected on the Variables tab) are included in
the list of parameters.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Perform any of the following actions, which vary based on your model design:
where OBJi is the contribution to the objective function of the i-th objective
component (parameter), Wi is the corresponding weight factor, and Si is the
corresponding scale factor.
If the direction of a specific objective parameter is “minimize”, its contribution
to the objective function equals the parameter value itself:
OBJi = Parmi
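The per-objective contribution can be sketched as below. Only the minimize case (OBJi = Parmi) is stated explicitly in this section; the maximize and target forms used in this sketch are common conventions and should be treated as assumptions, not Isight's documented formulas:

```python
def objective_contribution(parm, direction, target=None):
    """Contribution OBJ_i of one objective parameter.
    minimize -> the parameter value itself (as stated in the text);
    maximize -> the negated value (assumption);
    target   -> squared deviation from the target (assumption)."""
    if direction == "minimize":
        return parm
    if direction == "maximize":
        return -parm
    if direction == "target":
        return (parm - target) ** 2
    raise ValueError(f"unknown direction: {direction}")

# Minimizing a parameter currently at 3.5 contributes 3.5.
obj = objective_contribution(3.5, "minimize")
```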
Click the Direction column to specify whether you want to minimize
the objective, maximize the objective, or set a target for the objective. A
drop-down list appears, allowing you to alter the specification.
If you selected target in the Direction column, set the objective’s target in the
corresponding column.
Set the objective’s scale and weight factors in the corresponding columns.
Note: You can also set objective options using the Edit... button at the bottom of
the editor. For more information, see “Editing Attributes for Multiple Parameters,”
on page 123.
4. (optional) If desired, map any of these objective attributes to parameters. For more
information, see “Mapping Options and Attributes to Parameters” on this page.
5. Click OK to close the editor and save your changes. Click Apply to save your
changes, but keep the editor open.
3. Click the value in the Value column for the tuning parameter whose value you
want to map. The value is highlighted.
4. Right-click the value; then, select the Map this value to a parameter option. The
Select a Parameter dialog box appears prompting you to enter the name of a
parameter to which you want to map.
5. Type a name for the parameter (by default, the technique option name is used);
then, click OK. Once mapped, the icon appears next to the technique option’s
value. You can click this icon to view or change the parameter name. You can also
right-click on the setting again in order to remove the mapping.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter called Mapped Attributes and
Options.
7. Right-click any of the following execution options in the bottom left corner of the
editor:
Execute in parallel
Restore optimum design point after execution
Re-execute optimum design point
Use automatic variable scaling
Execute subflow only once (you need to click the Advanced Options... button
to gain access to this option)
8. Select the Map this value to a parameter option. The Select a Parameter dialog
box appears prompting you to enter the name of a parameter to which you want to
map.
9. Type a name for the parameter (by default, the tuning parameter name is used);
then, click OK. Once mapped, a button appears next to the execution option.
You can click this button to view or change the parameter name. You can also
right-click on the setting again to remove the mapping.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter called Mapped Attributes and
Options.
10. Click Apply to save your changes; then, proceed to step 15.
11. Determine if you want to map a variable, constraint, or objective attribute; then,
click the appropriate tab on the component editor.
Note: If there is no value defined for the attribute, the menu may not appear. You
must first create the attribute by typing a value and pressing the ENTER key.
If you want to apply the mapping to only the selected parameter, select the
Map <Attribute_Name> to a parameter for selected option.
If you want to apply the mapping to all the parameters, select the Map
<Attribute_Name> to a parameter for all option.
The icon appears next to the tuning parameter’s value. You can click this icon
to view or change the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as member(s) of a special aggregate parameter called Mapped
Attributes and Options.
Note: You can remove these mappings at any time. Simply right-click the
appropriate parameter attribute; then, select the Remove mapping of
<Attribute_Name> to a parameter for selected or Remove mapping of
<Attribute_Name> to a parameter for all option, depending on how you
originally mapped the attribute.
14. Click OK to close the component editor and return to the Design Gateway.
16. Locate the new aggregate parameter called Mapped Attributes and Options;
then, click the icon to expand the parameter. The new parameters appear.
1. Select the parameters you want to edit on either the Variables, Constraints, or
Objectives tab.
To select all parameters on a particular tab, click the Check button at the bottom of
the tab. To deselect all the parameters, click the Uncheck button.
2. Click the Edit... button at the bottom of the component editor. The Edit dialog box
appears. In the following example, a parameter on the Variables tab is being
edited.
3. Update the listed values, as desired. Only options with defined values appear on
this dialog box.
4. Click OK. The values are updated for all the parameters that were selected.
Initial Size (multiple of 4). This parameter controls the number of solutions
generated initially. It specifies the size of the starting population. In general, the
larger the size of the starting population, the better the coverage of the search
space. The initial population is generated randomly (or assigned from an external
source) and is controlled by this parameter. The default value is 40. Other possible
values are ≥ 4.
Population Size (multiple of 4). This parameter controls the size of the parent
population created at each iteration. Unlike other evolutionary algorithms, this
population is created based on the search history of the algorithm. The population
size must be less than or equal to the initial size. The number of new solutions
created at each iteration is half of the population size. It is recommended that you
not change the default setting for this parameter. A small overall value is
recommended. The default value is 40. Other possible values are ≥ 4.
Archive Size Limit. This parameter controls the amount of search history
information held by the algorithm (the larger the archive size, the better the
simulation results). However, the complexity of the algorithm varies as N² log(N),
where N is the archive size. Therefore, increasing the archive size will increase the
execution time for identical numbers of function evaluations. It is recommended
that you not change the value of this parameter. The archive size must be less than
or equal to the number of function evaluations and greater than or equal to the size
of the initial population. The default value is 500.
Pareto Size Limit. This parameter defines the upper limit on the number of
solutions desired at the end of the simulation run. The actual number of solutions
reported need not be equal to the Pareto Size Limit (it will depend on the problem
and the number of solutions that were found by the algorithm). You can safely
increase the Pareto Size Limit to the Archive Size Limit without any loss in
solution quality or increase in computation time. The default value is 100.
Crossover Probability. This parameter controls the probability with which parent
solutions should be recombined to generate the offspring solutions. This parameter
should be set between 0.5 and 1.0. In general, a high probability of crossover
(0.9~1.0) is recommended. The default value is 0.9.
Mutation Probability. This parameter controls the probability with which an
offspring solution should be mutated. The mutation has the effect of slightly
perturbing the offspring solution. The recommended value for this parameter is
1/n, where n is the number of design variables. By default, the algorithm uses the
1/n value when the Use optimal mutation probability option (described below) is
enabled.
This parameter is provided for cases where a high mutation rate is deliberately
desired.
Use optimal mutation probability. This parameter controls whether or not the
optimal mutation probability is used. By default, this option is enabled. It is
recommended that you leave this option enabled. If this option is enabled, any
custom value specified in the Mutation Probability option (described above) is
ignored.
Note: This setting is used in conjunction with the Crossover Distribution Index
setting to increase the accuracy of your solutions.
Initialization Mode. The initialization mode is used to specify how the initial
(starting) population (set of solutions) should be generated. AMGA incorporates
three different methods for generating the initial population. It can be generated
randomly, seeded from a starting solution, or read from a user-specified file. The
default and recommended value is Random. All three behavior options are
described below.
Random. This option causes the algorithm to generate the initial population
randomly. This method is based on Latin Hypercube sampling coupled with
unbiased Knuth Shuffling. It generates an almost uniform distribution of points
inside of the search space.
Starting Solution. This option causes the algorithm to generate a point cloud
around the starting point. This cloud’s density reduces exponentially as you
move away from the point. The probability density drops to zero at the
variable boundaries. The starting solution is always present as the first member
of the population.
Starting Population. This option allows you to specify an input file from
which to read the initial population. The Pareto file output generated by the
algorithm can be used as the input file for subsequent simulations. A
user-generated file that contains all the design variables can also be used as the
input file. The file’s column headers are used to identify the variables. All of
the design points read by the algorithm are evaluated. The initialization file
must contain at least one solution. If the number of solutions in the
initialization file is less than the population size, the remaining solutions are
generated randomly. If the number of solutions in the initialization file is more
than the population size, extra solutions at the end of the file are discarded (not
read).
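The Random initialization mode described above (Latin Hypercube sampling coupled with an unbiased shuffle) can be sketched generically as follows. This is an illustration, not Isight's implementation; Python's random.shuffle (a Fisher-Yates shuffle) stands in for the Knuth shuffle mentioned in the text:

```python
import random

def latin_hypercube(n_points, bounds, rng=None):
    """Generate n_points samples. Each dimension is split into
    n_points equal strata; one sample is drawn per stratum, and the
    strata are shuffled independently per dimension, so the points
    cover the search space almost uniformly."""
    rng = rng or random.Random(0)
    samples = [[0.0] * len(bounds) for _ in range(n_points)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_points))
        rng.shuffle(strata)  # Fisher-Yates (Knuth) shuffle
        width = (hi - lo) / n_points
        for i, s in enumerate(strata):
            samples[i][d] = lo + (s + rng.random()) * width
    return samples

points = latin_hypercube(8, [(0.0, 1.0), (-5.0, 5.0)])
```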
Initialization Filename. This parameter is used only when the initialization mode
is set to the Starting Population option. The specified file name is read during the
initialization process. The location of the file is validated and checked only during
runtime. The following explains the requirements of the initialization file:
The first line must contain parameter names, separated by a space or tab; the
remaining lines must contain data values.
Each line must have the same number of values as the number of parameter names
in the header line. Only the input values are used from the initialization file; it is
not necessary to include output values. All design points read from the
initialization file are evaluated.
Only the required number of data points will be used from the initialization file.
This number equals the size of the population configured in the technique options
for your component.
If your data file contains more designs than necessary, then make sure that the
desired data points are located at the beginning of the file. If the number of
solutions in the initialization file is more than the population size, extra solutions at
the end of the file are discarded (not read).
The rest of the design points will be generated randomly if the data file does not
contain enough data points to fill the initial population.
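The file-handling rules above can be sketched as a generic parser. The function follows the rules stated in this section (header of parameter names, equal-length data rows, extras at the end discarded, shortfall filled randomly), but it is illustrative, not Isight's actual reader:

```python
import random

def read_initial_population(path, population_size, bounds, rng=None):
    """Read an initialization file. The first line holds parameter
    names separated by spaces or tabs; each remaining line holds one
    solution. Extra solutions at the end of the file are discarded;
    if the file holds fewer solutions than the population size, the
    remainder is generated randomly within the variable bounds."""
    rng = rng or random.Random(0)
    with open(path) as f:
        names = f.readline().split()
        population = []
        for line in f:
            values = line.split()
            if not values:
                continue  # skip blank lines
            if len(values) != len(names):
                raise ValueError("row length does not match header")
            population.append([float(v) for v in values])
            if len(population) == population_size:
                break  # extra solutions at the end are not read
    while len(population) < population_size:
        # fill the rest of the initial population randomly
        population.append([rng.uniform(lo, hi) for lo, hi in bounds])
    return names, population
```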
Diversity Option. AMGA uses two different diversity preservation operators with
varying computational complexity and solution quality. The two operators are
Crowding and ENNS (Efficient Nearest Neighbor Search). ENNS is
computationally more expensive and provides better solution quality. The
Crowding distance computation is quicker, but the solution quality is poor when
more than two objectives are present. Use the Crowding option when there are two
objectives; use the ENNS option when there are more than two. The default value
is Crowding.
Use fixed random seed. If this option is selected, the random number generator
used by the optimization algorithm is seeded using the value specified in the
corresponding text box. All executions of the Optimization component will use
exactly the same sequence of random numbers and, therefore, will produce exactly
the same design points. This arrangement is useful for debugging the optimization
process when it is necessary to reproduce the same sequence of design points.
If this option is not selected, the random number generator is seeded by using the
clock time at the moment of execution.
Relative Rate of Parameter Quenching. Parameter quenching is a process of rapid
reduction of the parameter temperatures; it overrides the slow annealing process
and turns it into a fast quenching process. Using quenching considerably reduces
the acceptance probability, reducing the chances of finding a global optimum and
greatly increasing the chances of convergence to a local
optimum. Increasing the value up from 1.0 will activate the rapid parameter
temperature reduction. Reducing the value down from 1.0 will greatly extend the
time required to reduce parameter temperature for convergence. Use quenching
only if you want to considerably speed up or slow down the convergence of the
algorithm. The type of value is real. The default value is 1.0. Other possible values
are > 0.
Relative Rate of Cost Quenching. Cost quenching is a process of rapid reduction
of the cost temperatures and in effect overrides the slow annealing process and
turns it into a fast quenching process. Using quenching considerably reduces the
acceptance probability, reducing the chances of finding a global optimum and
greatly increasing the chances of convergence to a local optimum. Increasing the
value up from 1.0 will activate the rapid cost temperature reduction. Reducing the
value down from 1.0 will greatly extend the time required to reduce cost
temperature for convergence. Use quenching only if you want to considerably
speed up or slow down the convergence of the algorithm. The type of value is real.
The default value is 1.0. Other possible values are > 0.
Max Number of Failed Designs. Maximum number of consecutive design
analysis failures before the algorithm terminates. Due to the random nature of the
algorithm, it is possible to generate designs that cannot be handled by the analysis
code(s). Such occasional failures are ignored by the ASA algorithm. If the failures
become persistent, the algorithm will stop executing. The type of value is integer.
The default value is 5. Other possible values are ≥ 1.
Reanneal Cost Function. When the algorithm comes to a stagnation point, it may
be beneficial to re-start the annealing process using the best design point
found so far. If this option is set to yes, the ASA algorithm will employ several
criteria to determine when to reanneal the cost function.
Min Ratio of Accepted Designs for Reannealing. When the ratio of the number
of accepted designs to the number of generated designs reaches this value,
reannealing of parameter and/or cost function temperatures will be performed, if
allowed by the previous options. The type of value is real. The default value is
1.0E-6. Other possible values are ≥ 0.
Penalty Multiplier. This parameter is used to increase or decrease the effect of the
total constraint violations on the measure of the design goodness. The default value
is 1000.0.
Failed Run Penalty Value. This parameter represents the value of the Penalty
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Failed Run Objective Value. This parameter represents the value of the Objective
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Use fixed random seed. If this option is selected, the random number generator
used by the optimization algorithm is seeded using the value specified in the
corresponding text box. All executions of the Optimization component will use
exactly the same sequence of random numbers and, therefore, will produce exactly
the same design points. This arrangement is useful for debugging the optimization
process when it is necessary to reproduce the same sequence of design points.
If this option is not selected, the random number generator is seeded by using the
clock time at the moment of execution.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Initial Simplex Size. This is the size of the starting simplex as a fraction of your
domain. It should be a real value greater than zero and less than or equal to one.
The algorithm has a higher probability of finding a global optimum when started
with a larger initial simplex: because the simplex spans a larger fraction of the
design space, the chances of getting trapped in a local minimum are smaller. The
default value is 0.1, which is equivalent to 10% of the domain.
Max Iterations. This parameter sets the maximum number of design iterations
that you want the optimizer to run. The type of value is integer. The default value
is 40.
Max Iterations. This parameter sets the maximum number of iterations you want
the optimizer to run. An iteration, in this technique, includes the following actions.
Starting with the current base point, each design variable is perturbed up or down
(cost: one or two runs per variable), and any improvements are saved. The
accumulated change is used to determine the next base point. The default value is
10.
Max Evaluations. This parameter sets the maximum number of evaluations. The
default is 100.
Relative Step Size. This parameter determines the initial step size during the
design perturbations as a fraction of the parameter value (i.e., if a design variable
has a starting value of 1.0 and Relative Step Size is 0.5, the initial perturbation will
be 0.5). The default value is 0.5.
Step Size Reduction Factor. This parameter sets the step size reduction factor. It
should be set to a value between 0.0 and 1.0. Larger values give greater probability
of convergence on highly nonlinear functions, at a cost of more function
evaluations. Smaller values reduce the number of evaluations (and the program
running time), but increase the risk of nonconvergence. The default value is 0.5.
Termination Step Size. This parameter sets the termination step size. When the
algorithm begins to make less and less progress on each iteration, it checks this
parameter. If the step size is below Termination Step Size, the optimization
terminates, and returns the current best estimate of the optimum. Larger values of
Termination Step Size (for example, 1.0E-4) have a quicker running time, but a
less accurate estimate of the optimum. Smaller values of Termination Step Size
(for example, 1.0E-7) have a longer running time, but a more accurate estimate of
the optimum. The default value is 1.0E-6.
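The iteration scheme described above (perturb each variable up and down, keep improvements, shrink the step until it falls below the termination step size) can be sketched as a generic pattern search. This is an illustrative minimizer written for this discussion, not the optimizer shipped with Isight:

```python
def pattern_search(f, x0, rel_step=0.5, reduction=0.5,
                   termination_step=1e-6, max_iterations=10):
    """Minimize f starting at x0. Each iteration perturbs every
    variable up/down by its current step and keeps improvements;
    when no perturbation improves, steps are multiplied by the
    reduction factor until the largest falls below termination_step."""
    x = list(x0)
    # initial step is a fraction of each starting value (absolute
    # step when the starting value is zero)
    steps = [rel_step * abs(v) if v != 0 else rel_step for v in x]
    best = f(x)
    for _ in range(max_iterations):
        improved = False
        for i in range(len(x)):
            for delta in (steps[i], -steps[i]):
                trial = list(x)
                trial[i] += delta
                val = f(trial)
                if val < best:  # save the improvement
                    x, best = trial, val
                    improved = True
                    break
        if not improved:
            steps = [s * reduction for s in steps]
            if max(steps) < termination_step:
                break
    return x, best

x, fx = pattern_search(lambda v: (v[0] - 1) ** 2 + v[1] ** 2,
                       [0.0, 0.5], max_iterations=200)
```

Larger reduction factors shrink the step more slowly, improving convergence on highly nonlinear functions at the cost of more evaluations, as the Step Size Reduction Factor description notes.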
Penalty Multiplier. This parameter is used to increase or decrease the effect of the
total constraint violations on the measure of the design goodness. The default value
is 1000.0.
Max Failed Runs. This parameter is used to set the maximum number of failed
subflow evaluations that can be tolerated by the optimization technique. If the
number of failed runs exceeds this value, the optimization component will halt
execution.
Failed Run Penalty Value. This parameter represents the value of the Penalty
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Failed Run Objective Value. This parameter represents the value of the Objective
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Max Iterations. This parameter sets the maximum number of design iterations
you want the optimizer to run. The type of value is integer. The default value is 10.
Other possible values are ≥ 1.
Rel Step Size. This parameter sets the value of the relative gradient step size for
LSGRG2 when calculating gradients by finite differencing. The absolute step
value is calculated by LSGRG2 as follows:
dx = (1+x)*GradientStepSize
where x is the current value of a design variable. Note that, for small values of x,
GradientStepSize becomes the absolute step size (when x = 0); for large values of
x, it becomes the relative step size (when x >> 1). In general, the best value for
GradientStepSize is sqrt(eps), where eps is the relative error in the computed
function values (simcode outputs); for example, if function values are accurate to
full double precision (eps = 1e-16), GradientStepSize should be about 1e-8. The
type of value is real. The default value is 0.0010. Other possible values are > 0.
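The step formula can be illustrated with a short sketch (Python is used here purely for illustration; it is not part of Isight or LSGRG2):

```python
def gradient_step(x, grad_step_size=0.001):
    """Absolute finite-difference step for a design variable with value x,
    following dx = (1 + x) * GradientStepSize."""
    return (1 + x) * grad_step_size

# At x = 0 the step equals GradientStepSize (an absolute step).
print(gradient_step(0.0))     # 0.001
# For x >> 1 the step is roughly x * GradientStepSize (a relative step).
print(gradient_step(1000.0))  # approximately 1.001
```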
Convergence Iterations. This parameter sets the number of iterations used in the
convergence check (see definition of Convergence Epsilon above). The type of
value is integer. The default value is 3. Other possible values are ≥ 1.
Binding Constraint Epsilon. This parameter sets the value of the threshold for
binding constraints. If a constraint is within this epsilon of its bound, it is assumed
to be binding. This parameter may have a strong effect on the speed of the
optimization convergence. Increasing it can sometimes speed convergence (by
requiring fewer Newton iterations during a one-dimensional search), while
decreasing it occasionally yields more accurate solution, or gets optimization
moving if the algorithm gets “stuck.” Values larger than 1e-2 should be treated
cautiously, as should values smaller than 1e-6. The type of the value is real. The
default value is 1.0E-4. Other possible values are > 0.
Phase 1 Objective Ratio. This parameter sets the ratio of the true objective value
to the sum of constraint violations to be used as the objective function during the
so-called Phase 1 of optimization. LSGRG2 uses the Generalized Reduced
Gradient method, which is designed to work in the feasible domain. If the initial
design is not feasible, the first step is to obtain a feasible point from which
feasibility is maintained thereafter. This is known as Phase 1 of optimization with
LSGRG2. The Phase 1 objective function is the sum of the constraint violations
plus, optionally, a fraction of the true objective. This optimization phase terminates
either with a message that the problem is infeasible or with a feasible solution. If
an infeasibility message is produced, the program may have become stuck at a
local minimum of the Phase 1 objective (or too large a part of the true objective
was incorporated), and the problem may actually have feasible solutions. The
suggested remedy, if you suspect that this is so, is to choose a different starting
point (or reduce the proportion of the true objective) and try again. The default
value is 1.0. Other possible values are > 0.
Max Failed Runs. This parameter is used to set the maximum number of failed
subflow evaluations that can be tolerated by the optimization technique. If the
number of failed runs exceeds this value, the optimization component will halt
execution.
Failed Run Penalty Value. This parameter represents the value of the Penalty
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Failed Run Objective Value. This parameter represents the value of the Objective
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Max Number of Iterations. An iteration consists of two phases. In the first phase,
a plausible search direction is computed from the gradient of the objective function
and constraints at the current design point. In the second phase, new designs are
evaluated along the selected direction (cost: one run per design) until no
improvements are found, or until a constraint is violated. The two phases are
repeated until the specified convergence requirements are met. This option
controls how many of these pairs of phases will take place. The type of value is
integer. The default value is 40.
Relative Gradient Step. This parameter sets the relative finite difference step size
to be used by the optimizer, when calculating gradients using finite difference
techniques. The default value is 0.01 (1 percent).
Min Abs Gradient Step. This parameter sets the smallest (minimum) absolute
value of the finite difference step when calculating gradients. It prevents a step
from being too small when a parameter value is near zero. The default value is
0.001.
Failed Run Penalty Value. This parameter represents the value of the Penalty
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Failed Run Objective Value. This parameter represents the value of the Objective
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Convergence Epsilon. This parameter sets the convergence criterion for MOST.
When the necessary Kuhn-Tucker optimality conditions are satisfied to within
Convergence Epsilon, optimization is terminated. The default value is 1.0E-4.
Rel Step Size. This parameter sets the relative finite difference Step size for the
creation of the linear model. The type of value is real. The default value is 1.0E-4.
Other possible values are > 0.0.
Min Abs Step Size. This parameter sets the minimum absolute finite difference
Step for the creation of the linear model. The type of value is real. The default
value is 1.0E-4. Other possible values are > 0.0.
Max Confirmation Runs. This parameter specifies the number of iterations that
must satisfy the objective convergence criterion before optimization is terminated.
The default is 5.
Acceptable Obj Change. This parameter specifies the acceptable value for
objective change for optimization termination. The default is 1.0E-10.
Failed Run Penalty Value. This parameter represents the value of the Penalty
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Failed Run Objective Value. This parameter represents the value of the Objective
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Elite Size. Number of best individuals carried over from the parent generation to
the child generation in each subpopulation. The type of value is integer. The
default value is 1. Other possible values are ≥ 0.
A tournament size of 1 means that the child generation is selected randomly: first,
one design is selected randomly to form the tournament; then the best (and only)
individual is selected from the tournament. The maximum tournament size is equal
to the size of the subpopulation; in this case, the child generation will consist
entirely of duplicates of the best individual from the parent generation. The type of
value is real. The default value is 0.5. Other possible values are ≥ 0 and ≤ 1.0.
Penalty Base. Multi-Island Genetic Algorithm evaluates the goodness of a design
point using the combined value of the objective function and penalty function.
When calculating the penalty function of the design, Penalty Base can be used for
all designs that violate at least one constraint. This option allows you to better
differentiate feasible designs with slightly higher objective function from
infeasible designs with a slightly lower objective function.
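As a sketch of how such a penalized measure of goodness might be assembled (the exact formula MIGA uses is not given here, so the function below and its flat-base-plus-weighted-sum form are assumptions for illustration):

```python
def penalized_objective(objective, violations, base=0.0, multiplier=1000.0):
    """Hypothetical penalized-objective sketch: a flat Penalty Base is added
    whenever any constraint is violated, plus a weighted sum of the violations."""
    total_violation = sum(v for v in violations if v > 0)
    if total_violation > 0:
        return objective + base + multiplier * total_violation
    return objective

# A feasible design keeps its raw objective...
print(penalized_objective(10.0, [0.0, -0.2]))          # 10.0
# ...while an infeasible design with a slightly better raw objective
# is ranked clearly worse once the base penalty is applied.
print(penalized_objective(9.8, [0.05, 0.0], base=50))  # approximately 109.8
```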
Max Failed Runs. This parameter is used to set the maximum number of failed
subflow evaluations that can be tolerated by the optimization technique. If the
number of failed runs exceeds this value, the optimization component will halt
execution.
Failed Run Penalty Value. This parameter represents the value of the Penalty
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Failed Run Objective Value. This parameter represents the value of the Objective
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Default Variable Bound (Abs Val). This value is used as the upper and lower
bound for all variables without bounds; for the lower bound, the value is
multiplied by -1 (made negative). It is important for MIGA to have bounds for all
variables because the algorithm works by dividing the range of each variable into a
very large number of steps.
Use fixed random seed. If this option is selected, the random number generator
used by the optimization algorithm is seeded using the value specified in the
corresponding text box. All executions of the Optimization component will use
exactly the same sequence of random numbers and, therefore, will produce exactly
the same design points. This arrangement is useful for debugging the optimization
process when it is necessary to reproduce the same sequence of design points.
If this option is not selected, the random number generator is seeded by using the
clock time at the moment of execution.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Crossover Type. This parameter controls the type of crossover operation:
one-point crossover or two-point crossover. The default value is 1. The other
possible value is 2.
Crossover Rate. This parameter controls the probability of crossover operation for
each individual in every generation during execution of NCGA algorithm. The
type of value is real. The default value is 1.0. Other possible values are > 0 and ≤
1.0.
Use Optimal Mutation Rate. This parameter controls whether or not NCGA will
use the optimum mutation rate value calculated internally. The default value is
false (no). Other possible value is true (yes).
Mutation Rate. This parameter specifies the probability of mutation for each
individual. The type of value is real. The default value is 0.01. Other possible
values are > 0 and ≤ 1.0.
Gene Size. This parameter controls the size of the gene used to represent each
individual. The type of value is integer. The default value is 20. Other possible
values are ≥ 1 and ≤ 63.
Use initialization file. This parameter controls whether or not the algorithm will
use a data file for the initial generation. The default value is false (no). Other
possible value is true (yes).
Initialization File. This parameter specifies the name of the data file to be used for
initial generation. The following explains the requirements of the initialization file:
The first line must contain parameter names, separated by a space or tab; the
remaining lines must contain data values.
Each line must have the same number of values as the number of parameter names
in the header line. Only the input values are used from the initialization file; it is
not necessary to include output values. All design points read from the
initialization file will be sent for evaluation by the Optimization component, as if
they were randomly generated points.
Only the required number of data points will be used from the initialization file.
The NCGA technique always includes the starting point sent into the component
(or set manually in the Parameters table); the remaining initial population will be
read by NCGA from the initialization file, if configured to do so. This means that
NCGA will read one data point fewer than the size of the initial population.
If your data file contains more designs than necessary, then make sure that the
desired data points are located at the beginning of the file. If the number of
solutions in the initialization file is more than the population size, extra solutions at
the end of the file are discarded (not read).
The rest of the design points will be generated randomly if the data file does not
contain enough data points to fill the initial population.
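The file-format rules above can be exercised with a small sketch (the helper name and dict-based return are ours, not part of Isight):

```python
def read_init_points(text, population_size):
    """Parse an initialization file: the first line holds parameter names,
    separated by whitespace; each remaining line holds data values and must
    match the header width. Only the first population_size rows are kept;
    extra rows at the end are discarded."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    names = lines[0].split()
    points = []
    for ln in lines[1:population_size + 1]:
        values = ln.split()
        if len(values) != len(names):
            raise ValueError("row width does not match header")
        points.append(dict(zip(names, map(float, values))))
    return points

sample = "x1 x2\n0.1 0.2\n0.3 0.4\n0.5 0.6\n"
print(len(read_init_points(sample, 2)))  # 2 -- the third row is discarded
```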
Use fixed random seed. If this option is selected, the random number generator
used by the optimization algorithm is seeded using the value specified in the
corresponding text box. All executions of the Optimization component will use
exactly the same sequence of random numbers and, therefore, will produce exactly
the same design points. This arrangement is useful for debugging the optimization
process when it is necessary to reproduce the same sequence of design points.
If this option is not selected, the random number generator is seeded by using the
clock time at the moment of execution.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Max Iterations. This parameter sets the maximum number of design iterations
you want the optimizer to run. The type of value is integer. The default value is 10.
Other possible values are ≥ 1.
Termination Accuracy. This parameter sets the termination criterion for NLPQL.
The stopping algorithm of NLPQL uses several alternative convergence checks,
with the main convergence parameter based on the Karush-Kuhn-Tucker necessary
optimality condition and the complementary slackness. Termination accuracy is
applied in such a way that the scale of the objective and constraint parameters has
little or no effect on the convergence check.
The accuracy of the gradients calculation must be considered when selecting the
value of Termination Accuracy. If the simcode outputs are accurate up to 8-10
digits, and the calculated gradients have at least 7 accurate digits, then the
recommended value for Termination Accuracy is 1.0E-7. If the gradient accuracy
is lower, the value of Termination Accuracy must be increased, to between
1.0E-5 and 1.0E-4.
The type of value is real. The default value is 1.0E-6. Other possible values are > 0
and ≤ 0.1.
Rel Step Size. This parameter sets the relative finite difference Step size for the
creation of the linear model. The type of value is real. The default value is 0.0010
(0.1 percent). Other possible values are > 0.0.
Min Abs Step Size. This parameter sets the minimum absolute finite difference
Step for the creation of the linear model. The type of value is real. The default
value is 1.0E-4. Other possible values are > 0.0.
Use Central Differences. This parameter allows you to determine whether or not
Isight will use the central difference method for calculating output derivatives.
When this option is not selected, the forward difference method is used. Using this
option will increase the accuracy of the gradient calculations at the expense of
double the number of design point evaluations.
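The forward/central trade-off is generic finite-difference behavior and can be seen in a short sketch (illustrative only, not Isight code):

```python
def forward_diff(f, x, h=1e-4):
    # One extra evaluation per variable beyond the baseline f(x); O(h) error.
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-4):
    # Two evaluations per variable: twice the cost, O(h^2) error.
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 3  # true derivative at x = 1 is 3
print(abs(forward_diff(f, 1.0) - 3))  # error on the order of 1e-4
print(abs(central_diff(f, 1.0) - 3))  # error on the order of 1e-8
```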
Max Failed Runs. This parameter is used to set the maximum number of failed
subflow evaluations that can be tolerated by the optimization technique. If the
number of failed runs exceeds this value, the optimization component will halt
execution.
Failed Run Penalty Value. This parameter represents the value of the Penalty
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Failed Run Objective Value. This parameter represents the value of the Objective
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Save Technique Log. Most optimization techniques create a log file of
information/messages as they run. This information can be useful for determining
why an optimizer took the path that it did or why it converged. Some of these log
files can get extremely large, so they are not stored with the run results by default.
Select this option if you want to store the log with the run results (as a file
parameter) for later viewing.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Population Size (multiple of 4). This parameter controls the number of solutions
generated at each iteration (generation). The initial population is generated
randomly (or assigned from an external source) and is controlled by this parameter.
The minimum allowed value is 4 for this tuning parameter. There is no restriction
on the maximum allowed value for this parameter.
Crossover Probability. This parameter controls the probability with which parent
solutions are recombined to generate offspring solutions. This parameter should be
in the range 0.5 to 1.0; in general, a high crossover probability is recommended.
solution is obtained, but the solution lacks accuracy, a value as large as 100.0 may
be used.
Initialization Mode. The initialization mode can be used to specify how the initial
(starting) population (set of solutions) should be generated. The NSGA-II
algorithm incorporates three different methods for generating the initial
population:
randomly
seeded from a starting solution
read from a user-specified file
Random. This choice causes the algorithm to generate the initial population
randomly. This method is based on Latin-Hypercube sampling coupled with
unbiased Knuth shuffling. It generates an almost uniform distribution of points
inside the search space.
Starting Population. If this option is selected, you must specify an input file
from which to read the initial population. The Pareto file output by the
algorithm can be used as the input file for a subsequent simulation. A file that
contains all the design variables can also be given as the input file. The column
headers identify the variables. All the design points read by the algorithm are
evaluated. The initialization file must contain at least one solution. If the
number of solutions in the initialization files is less than the population size,
the remaining solutions are generated randomly. If the number of solutions in
the initialization file is more than the population size, extra solutions at the end
of the file are discarded (not read).
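The Random mode described above (Latin-Hypercube sampling with shuffling) can be sketched as follows; this is a generic illustration, not the actual NSGA-II implementation:

```python
import random

def latin_hypercube(n_points, n_vars, seed=42):
    """Divide each variable's [0, 1) range into n_points strata, draw one
    sample per stratum, then shuffle the strata independently per variable
    so the coordinates are paired without bias."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        col = [(i + rng.random()) / n_points for i in range(n_points)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))  # n_points tuples of n_vars coordinates

pts = latin_hypercube(5, 2)
# Exactly one sample falls in each of the 5 strata of each variable.
print(sorted(int(p[0] * 5) for p in pts))  # [0, 1, 2, 3, 4]
```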
Initialization Filename. This parameter is used only when the initialization mode
is Starting Population. The filename specified is read during the initialization
process. The location of the file is validated and checked only during runtime. The
following explains the requirements of the initialization file:
The first line must contain parameter names, separated by a space or tab; the
remaining lines must contain data values.
Each line must have the same number of values as the number of parameter names
in the header line. Only the input values are used from the initialization file; it is
not necessary to include output values. All design points read from the
initialization file will be sent for evaluation by the Optimization component, as if
they were randomly generated points.
Only the required number of data points will be used from the initialization file.
This number equals the size of the population configured in the technique options
for your component.
If your data file contains more designs than necessary, then make sure that the
desired data points are located at the beginning of the file. If the number of
solutions in the initialization file is more than the population size, extra solutions at
the end of the file are discarded (not read).
The rest of the design points will be generated randomly if the data file does not
contain enough data points to fill the initial population.
Use fixed random seed. If this option is selected, the random number generator
used by the optimization algorithm is seeded using the value specified in the
Random seed value text box. All executions of the Optimization component will
use exactly the same sequence of random numbers and, therefore, will produce
exactly the same design points. This arrangement is useful for debugging the
optimization process when it is necessary to reproduce the same sequence of
design points.
If this option is not selected, the random number generator is seeded by using the
clock time at the moment of execution.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Maximum allowable job time (hr). This parameter represents the time that
Pointer has to complete a job. Pointer will use all the time you give it, even if it
finds the global optimum early on. The longer it is stuck without finding an
improvement, the more radical changes it will try. Accepts a real input greater than
0.0. The default value is 1.0.
Average analysis time (sec). This parameter is the average wall clock time it takes
to run a single simulation, including all Isight-related overhead (for file parsing,
etc.). Pointer takes the ratio of allowable job time to average simulation time to
estimate how many simulations it will be able to run and adjusts its search strategy
accordingly. Accepts a real input greater than 0.0. The default value is 1.0.
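The simulation-count estimate these two parameters feed into is simple arithmetic (a sketch; Pointer's internal bookkeeping may differ):

```python
def estimated_simulations(job_time_hr, avg_analysis_time_sec):
    """Roughly how many simulations fit in the allotted job time."""
    return int(job_time_hr * 3600.0 / avg_analysis_time_sec)

# With the defaults (1.0 hr job, 1.0 s per analysis), Pointer can plan
# on roughly 3600 evaluations.
print(estimated_simulations(1.0, 1.0))   # 3600
print(estimated_simulations(2.0, 30.0))  # 240
```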
Topography type. The default value is nonlinear. The following options are
available:
linear. This option assumes that objectives and constraints are linear
combinations of the design variables.
smooth. Indicates that the function is differentiable everywhere and contains
no discrete steps, but could still contain multiple local minima.
rough. Indicates that the function is not necessarily smooth, but only contains
small scale discontinuities or noise.
discontinuous. Indicates that the design space could contain large scale
discrete steps, and points where the function is not differentiable.
nonlinear. The only assumption made is that objectives and constraints are not
linear combinations of the design variables (i.e., the problem is not a linear
one).
unknown. No assumptions have been made about the nature of the design
space.
Failed Run Penalty Value. This parameter represents the value of the Penalty
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Failed Run Objective Value. This parameter represents the value of the Objective
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Save Technique Log. Most optimization techniques create a log file of
information/messages as they run. This information can be useful for determining
why an optimizer took the path that it did or why it converged. Some of these log
files can get extremely large, so they are not stored with the run results by default.
Select this option if you want to store the log with the run results (as a file
parameter) for later viewing.
Use fixed random seed. If this option is selected, the random number generator
used by the optimization algorithm is seeded using the value specified in the
corresponding text box. All executions of the Optimization component will use
exactly the same sequence of random numbers and, therefore, will produce exactly
the same design points. This arrangement is useful for debugging the optimization
process when it is necessary to reproduce the same sequence of design points.
If this option is not selected, the random number generator is seeded by using the
clock time at the moment of execution.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
Power. This is the parameter used to control convergence of the algorithm. This
should be a real value greater than zero and less than one. The default value is 0.4.
Constraint Critical Ratio. This parameter controls how satisfied constraints are
handled. A value of 1.0 means that the associated dimension for any satisfied
constraint is reduced according to the standard stress ratio equation. A value of 0.5
means that the dimension of a satisfied constraint will not be reduced until
(actual/allowable) is less than 0.5. A value of 0.0 means that the dimension of a
satisfied constraint will never be reduced.
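As an illustration of this behavior (the technique's exact resizing equation is not given here, so the update rule below, a standard stress-ratio step damped by Power, is an assumption):

```python
def resize(dimension, actual, allowable, power=0.4, critical_ratio=1.0):
    """Hypothetical stress-ratio update: scale a dimension toward the fully
    stressed state, but hold it fixed while a satisfied constraint's ratio
    sits between critical_ratio and 1.0."""
    ratio = actual / allowable
    if critical_ratio <= ratio < 1.0:
        return dimension               # satisfied, inside the hold band
    return dimension * ratio ** power  # grow if violated, shrink if very slack

# With critical_ratio = 0.5, a mildly satisfied constraint (ratio 0.8) is held...
print(resize(2.0, 0.8, 1.0, critical_ratio=0.5))  # 2.0
# ...while a violated one (ratio 1.5) grows the dimension.
print(resize(2.0, 1.5, 1.0))  # approximately 2.352
```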
Max Failed Runs. This parameter is used to set the maximum number of failed
subflow evaluations that can be tolerated by the optimization technique. If the
number of failed runs exceeds this value, the optimization component will halt
execution.
Failed Run Penalty Value. This parameter represents the value of the Penalty
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Failed Run Objective Value. This parameter represents the value of the Objective
parameter that is used for all failed subflow runs. The default value is 1.0E30.
Return to step 1 on page 111 for information on using the other tabs on the
Optimization component editor.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select
Optimization.
The following preference options are available with the Optimization component:
The API for the Optimization component consists of the following method:
get("OptimizationPlan"). This returns an OptimizationPlan object from the
com.engineous.sdk.designdriver.plan package. From that entry point you have access
to numerous methods for setting/configuring the technique, adding/editing design
variables, constraints, and responses, and configuring execution options. For more
information on configuring the Optimization component, refer to the javadocs for the
OptimizationPlan interface in your <Isight_install_directory>/javadocs directory.
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide. The Component Editor dialog box appears.
The editor is divided into four tabs: General, Design Variables, Random Variables,
and Responses. The General tab offers execution options and a description of SDI.
The Design Variables tab allows for the selection and configuration of the design
variables. The Random Variables tab allows for the selection and configuration of
the random variables. The Responses tab allows for the selection and configuration
of the response parameters.
Analyze Design. This mode simply performs a Monte Carlo simulation around
the current design point (current parameter values), varying only random
variables.
Note: The Design Variables tab is not displayed when the Analyze Design
option is selected.
3. Set the following execution options, if available:
Number of SDI Steps. Specify the number of new designs chosen and new
sample sets executed. Each step produces a new “best” design point unless no
improvement (no points closer to the target) greater than the Termination
Threshold Distance is identified. The default is five steps.
Number of Samples per Step. Specify the number of Monte Carlo samples to
be performed at each SDI iteration. The samples are taken across the
distributions of the random variables and local range of the design variables.
Since the goal is to find a design closer to the targets, not to calculate response
statistics, a large sample set is not necessary. The default is 16 samples.
Execute sample points in parallel. Select this option if you want the sample
points for each SDI step (or the complete sample set in Analyze Design mode) to
be executed in parallel.
Note: The number of CPUs available may limit the number of sample points
that are actually executed in parallel.
Use a fixed seed. The seed can be fixed by clicking this check box and
specifying the seed manually in the corresponding text box. If this check box is
not activated, the seed is determined randomly.
Note: You can map the values of these settings (with the exception of Use a fixed
seed) to a parameter. For more information, see “Mapping Options and Attributes
to Parameters,” on page 165.
4. (optional) Click the Advanced Options... button; then, set either of the following
options:
Execute subflow only once. If selected, the subflow executes only one time.
This is useful in models that need to turn the driver logic on/off parametrically.
This option is also helpful in debugging the process.
Note: You can map this setting to a parameter. For more information, see
“Mapping Options and Attributes to Parameters,” on page 165.
Allow the following to be turned off by parameters. This option allows you
to de-activate (turn off) the selected types of design parameters (design
variables, random variables, responses) using input parameters to the SDI
component. If this option is selected, boolean input parameters will be created
in an Active Design Variables/Active Random Variables/Active Responses
aggregate under the Mapped Options and Attributes aggregate parameter. The
parameters are selected by default. If any of the parameters are not selected at
runtime, then those design parameters will not be used during execution.
If you selected the Improve Design option, proceed to “Using the Design
Variables Tab,” on page 157.
If you selected the Analyze Design option, proceed to “Using the Random
Variables Tab,” on page 159.
1. Click the Design Variables tab. The contents of the tab appear.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the parameters you want to use as design variables by clicking the check
box that corresponds to the parameter. To select all parameters, click the Check
button at the bottom of the tab. To deselect all the parameters, click the Uncheck
button.
4. Specify the lower and upper bounds and initial value for each design variable.
Unlike optimization, SDI requires lower and upper bounds for each design
variable. The sample ranges at each step for SDI are determined by the range of the
design variable, the number of steps, and the current value of the design variable.
The design variable bounds will first default to the problem formulation bounds
defined for the selected parameter on the Design Gateway Formulation tab. If no
problem formulation bounds are defined, the bounds will default to +/- 10% of the
current parameter value if the current parameter value is nonzero. If there are no
problem formulation bounds and the current parameter value is 0, the bounds will
be 0.0 and 1.0. For more information on problem formulation, refer to the Isight
User’s Guide.
Note: You can also set variable options using the Edit... button at the bottom of the
editor. For more information, see “Editing Attributes for Multiple Parameters,” on
page 170.
5. (optional) If desired, map the lower and upper bound variable attributes to
parameters. For more information, see “Mapping Options and Attributes to
Parameters,” on page 165.
1. Click the Random Variables tab. The contents of the tab appear.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the parameters you want to use as random variables by clicking the check
box that corresponds to the parameter. To select all parameters, click the Check
button at the bottom of the tab. To deselect all the parameters, click the Uncheck
button.
Once you select a random variable, its name is displayed in the Distribution
Information area, and the rest of the tab is activated.
4. Click the Correlation Matrix button if you want to use random variable
correlation to sample the random variable distributions. This feature induces the
required correlations on the given sample of random variables while preserving the
individual distributions.
The Linear Correlation Matrix for Random Variables dialog box appears.
5. Type the desired values (-1.0 to 1.0) in the white text boxes in the dialog box.
Note: To completely remove a number already present in a text box, press the
BACKSPACE key on your keyboard.
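One standard way to induce a target correlation while preserving each variable's marginal distribution is a Gaussian copula: correlate standard normals, then map them back through the normal CDF. Isight's exact algorithm is not documented here, so the sketch below is only a minimal illustration of the idea for a pair of uniform variables.

```python
import random
import statistics
from math import sqrt

def correlated_uniform_pairs(rho, n, seed=0):
    """Gaussian-copula sketch: draw n pairs of uniform(0, 1) samples with an
    induced correlation near rho, each marginal preserved (illustrative only;
    Isight's correlation-induction algorithm may differ)."""
    rng = random.Random(seed)
    nd = statistics.NormalDist()
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        y2 = rho * z1 + sqrt(1.0 - rho * rho) * z2  # correlate the normals
        pairs.append((nd.cdf(z1), nd.cdf(y2)))      # back to uniform(0, 1)
    return pairs
```

Each output remains uniformly distributed on (0, 1), but the sample correlation tracks the requested value.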
7. Set any of the following options, some of which vary based on your distribution
selection:
Click the Distribution button to set the probability distribution option for the
random variable. Like sampling techniques, random variable distributions are
implemented as “plug-ins” used by the SDI component. They are extendable
by creating new “plug-ins” for new distributions.
Note: You can also set variable options using the Edit... button at the bottom of
the editor. For more information, see “Editing Attributes for Multiple Parameters,”
on page 170.
Note: You can map any setting to a parameter. For more information, see
“Mapping Options and Attributes to Parameters,” on page 165.
8. Review the preview graphs on the right side of the tab. These graphs are
automatically updated based on changes made to the selected random variable’s
distribution properties. A legend below the graph explains the color coding. The
graphs display the following information:
Probability Density. This graph shows the actual shape of the selected
distribution with regard to the probability density function.
Cumulative Distribution. This graph shows the actual shape of the selected
distribution with regard to the cumulative distribution function.
9. Set the Update random variable mean values to current parameter values
before execution option. This option, at the bottom of the tab, allows for
automatic updating of mean values of all random variables to the current parameter
values in this component, prior to executing the SDI component. The default is to
have this option deactivated, and to retain settings. If you want to automatically
change the settings to the current point when the SDI component is executed, click
this check button to activate it. This option is useful if the SDI component is
executed after another component, and parameter values are taken from the
previous component.
10. Proceed to “Using the Responses Tab” on this page.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the output parameters you want to use as responses by clicking the
corresponding check boxes. To select all parameters, click the Check button at the
bottom of the tab. To deselect all the parameters, click the Uncheck button.
Target. Specify the desired goal target value for all responses that define
performance of the system. The SDI process will attempt to move the design
towards the specified targets.
Note: The target is not needed when the execution mode is “Analyze Design”.
Additionally, for highly correlated responses, a target should only be defined
for one response in the correlated set so as not to overdetermine the problem.
Target Type. Select the type of the response goal for all responses with a
Target value defined (Target Type is required when a target value is entered).
The Target Type determines how the distance from the target is calculated and
how the SDI iterations proceed.
Three options are available for Target Type:
• Hard (= target). Select this target type if the goal is to keep the response as
close as possible to the specified target.
• Soft (<= target). Select this target type if the goal is to get the response
value equal to or below the target value. In this case, if the response value
goes below the target, the response will be removed from the distance
calculation; the response will not be moved back towards the target if it
goes below the target.
• Soft (>= target). Select this target type if the goal is to get the response
value equal to or above the target value. In this case, if the response value
goes above the target, the response will be removed from the distance
calculation; the response will not be moved back towards the target if it
goes above the target.
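The three target types above can be summarized as one per-response distance rule. The function below is an illustrative sketch, not Isight's exact distance metric; the string labels for the target types are assumptions.

```python
def target_distance(value, target, target_type):
    """One response's contribution to the distance from target under the
    three Target Type rules above (illustrative; not Isight's exact metric)."""
    if target_type == "hard":        # Hard (= target): stay at the target
        return abs(value - target)
    if target_type == "soft_le":     # Soft (<= target): below target is free
        return max(0.0, value - target)
    if target_type == "soft_ge":     # Soft (>= target): above target is free
        return max(0.0, target - value)
    raise ValueError("unknown target type: %s" % target_type)
```

Note how the soft types contribute zero distance once the response crosses to the acceptable side of the target, matching the removal-from-distance behavior described above.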
Lower Limit. If a response value is specified in the Lower Limit column for a
response, it will be used to assess penalty on the calculation of the distance
from the target. For all design points with response value below the lower
limit, the distance will be penalized, effectively removing the design point
from consideration as a new best design. Also, the probability of response
values greater than the lower limit will be calculated and reported after all
simulations are complete.
Upper Limit. If a response value is specified in the Upper Limit column for a
response, it will be used to assess penalty on the calculation of the distance
from the target. For all design points with response value above the upper
limit, the distance will be penalized, effectively removing the design point
from consideration as a new best design. Also, the probability of response
values less than the upper limit will be calculated and reported after all
simulations are complete.
Note: If both Lower and Upper limits are specified, then the Total probability
between the limits is also reported.
Percentile. If a percentile value (a value between 0 and 1) is specified in the
Percentile column for a response, the response value corresponding to that
percentile of the resulting response distribution will be reported after all
simulations are complete.
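The reported percentile value corresponds to a point on the empirical response distribution built from the simulation results. A minimal nearest-rank sketch (the exact interpolation scheme Isight uses is not specified here):

```python
def empirical_percentile(samples, p):
    """Response value at percentile p (between 0 and 1) of the simulated
    response distribution — a simple nearest-rank sketch."""
    ordered = sorted(samples)
    index = round(p * (len(ordered) - 1))   # nearest rank in the sorted data
    return ordered[index]
```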
Note: You can also set response options using the Edit... button at the bottom of
the editor. For more information, see “Editing Attributes for Multiple Parameters,”
on page 170.
5. (optional) If desired, map any of the response attributes to parameters (with the
exception of Target Type). For more information, see “Mapping Options and
Attributes to Parameters” on this page.
6. Click OK to close the editor and save your changes. Click Apply to save your
changes, but keep the editor open.
3. Click the value in the Value text box for the execution option whose value you
want to map. The value is highlighted.
Note: You can also map the Execute subflow only once option. Click on
Advanced Options... to access this option.
4. Right-click the value; then, select the Map this value to a parameter option. The
Select a Parameter dialog box appears prompting you to enter the name of a
parameter to which you want to map.
5. Type a name for the parameter (by default, the execution option name is used);
then, click OK. Once mapped, the icon appears next to the execution option.
You can click this button to view or change the parameter name. You can also
right-click on the setting again to remove the mapping.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
6. Click Apply to save your changes; then, proceed to step 14.
8. Right-click in the Standard Deviation text box; then, click Map this value to a
parameter.
Note: Standard Deviation is the only random value attribute that can be mapped.
The Select a Parameter dialog box appears prompting you to enter the name of a
parameter to which you want to map.
9. Type a name for the parameter (by default, the Standard Deviation attribute name
is used); then, click OK. Once mapped, the icon appears next to the tuning
parameter’s value. You can click this icon to view or change the parameter name.
You can also right-click on the setting again to remove the parameter name.
10. Click Apply to save your changes; then, proceed to step 14.
11. Determine if you want to map a Design Variable or a Response attribute; then,
click the appropriate tab on the component editor. In the example below, a Design
Variable is being mapped.
If you want to apply the mapping to only the selected parameter, select the
Map <Attribute_Name> to a parameter for selected option.
If you want to apply the mapping to all the parameters, select the Map
<Attribute_Name> to a parameter for all option.
The icon appears next to the variable’s value. You can click this icon to view or
change the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a member(s) of a special aggregate parameter called Mapped
Attributes and Options.
Note: You can remove these mappings at any time. Simply right-click the
appropriate parameter attribute; then, select the Remove mapping of
<Attribute_Name> to a parameter for selected or Remove mapping of
<Attribute_Name> to a parameter for all option, depending on how you
originally mapped the attribute.
Note: If an entire array is selected as a random variable/response (as opposed to
individual elements), a corresponding array will be created in the Mapped
Attributes and Options parameter with elements that will map to the
corresponding elements of the selected array.
14. Click OK to close the component editor and return to the Design Gateway.
1. Select the parameter(s) you want to edit on the Design Variables, Random
Variables or Responses tab.
To select all parameters, click the Check button at the bottom of the tab. To
deselect all the parameters, click the Uncheck button.
2. Click the Edit... button at the bottom of the component editor. The Edit
dialog box appears. In the following example, a parameter on the Random
Variables tab is being edited.
3. Update the listed values, as desired. Only options with defined values appear on
this dialog box.
4. Click OK. The values are updated for all the parameters that were selected.
“Opening the Editor and Using the General Tab,” on page 173
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide.
The Component Editor dialog box appears.
The editor is divided into four or five tabs, depending on the analysis type selected.
If a Reliability Technique or Monte Carlo Sampling technique type is used, the
editor contains four tabs: General, Random Variables, Responses, and Options.
The General tab offers setup options, and analysis type and technique options. The
Random Variables tab allows you to select and configure the random variables.
The Responses tab allows you to select and configure the response parameters. The
Options tab allows you to select the Six Sigma outputs to be included in the Six
Sigma Results output parameter.
If the Design of Experiments analysis type is used for the probabilistic analysis, the
Random Variables tab is replaced by a Factors tab and a Design Matrix tab is
added. The Factors tab allows you to select and configure the DOE factors. The
Design Matrix tab displays the matrix resulting from the DOE technique selected
and the selected/configured factors.
2. Click the Six Sigma Analysis or Six Sigma Optimization radio button.
3. Click the Analysis Type button; then, select the analysis technique type you want
to use from the drop-down list that appears. The Six Sigma component can use
Reliability techniques, Monte Carlo Sampling techniques, and Design of
Experiments techniques for probabilistic analysis. The available techniques in the
Technique drop-down list change based on the Analysis Type selected. The tabs in
the editor may also change to reflect the selected Analysis Type.
Note: You can set default behavior for this option using the component preferences
as described in “Setting Six Sigma Component Preferences,” on page 210.
4. Click the Technique button; then, select the technique you want to use from the
drop-down list that appears. The technique's options appear in the Technique
Options area (below the Analysis Type button), and information about the
technique appears in the Technique Description area on the right side of the editor.
Techniques used in the Six Sigma component are implemented as “plug-ins”. As
such, they are extendable by creating new plug-ins for a given analysis type. For
more information, refer to the Isight Development Guide.
Note: You can set default behavior for this option using the component preferences
as described in “Setting Six Sigma Component Preferences,” on page 210.
The Reliability technique plug-ins currently available in Isight are the following:
FORM (First Order Reliability Method). The first order reliability method is
an MPP (Most Probable Point) search method. It uses an optimization strategy
to find the closest point on each constraint to the current design, the mean
value point, or MVP. FORM uses a first order analysis to determine the
probability of failure at the MPP.
Mean Value Method. This method is the default selection and can be the most
efficient method. The mean value method is based on a first or second order
Taylor's expansion to estimate response mean and standard deviation.
For more detailed information on these sampling techniques, see “Six Sigma
Reference Information,” on page 534.
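For the first-order case, the Mean Value Method's estimates reduce to a simple formula: the response mean is the function value at the variable means, and the response variance is the sum of squared gradient-times-standard-deviation terms (assuming independent random variables). The sketch below illustrates only this first-order estimate; the second-order correction is omitted.

```python
from math import sqrt

def mean_value_estimate(f_at_mean, gradients, std_devs):
    """First-order Taylor (mean value) estimate of response mean and standard
    deviation for independent random variables (illustrative sketch)."""
    mean = f_at_mean                       # first order: mean = f(mu)
    variance = sum((g * s) ** 2 for g, s in zip(gradients, std_devs))
    return mean, sqrt(variance)
```

For example, gradients (3, 4) with unit standard deviations give an estimated response standard deviation of 5.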
The Monte Carlo Sampling technique plug-ins currently available in Isight are the
following:
For more detailed information on these sampling techniques, see “Monte Carlo
Reference Information,” on page 515.
The Design of Experiments technique plug-ins currently available in Isight are the
following (although all techniques listed below may not be included in your
installation):
Central Composite Design
Data File
Full Factorial
Latin Hypercube
Optimal Latin Hypercube
Orthogonal Array
Parameter Study
For more detailed information on these DOE techniques, see “DOE Reference
Information,” on page 508.
Note: Techniques can be added to the Six Sigma component by publishing new
technique plug-ins to the Library. For more information, refer to the Isight
Development Guide.
5. Set the following options in the Technique Options area, which vary based on your
selected analysis type and technique:
• Taylor Series Order. Select First Order or Second Order analysis. The
first order analysis will require evaluation of the mean value point and one
finite difference step evaluation for each random variable gradient (n+1
total design evaluations, where n is the number of random variables). The
second order analysis will require 2n+1 evaluations (pure second order
terms included only, no mixed second order terms).
• Finite Difference Step Size. This parameter sets the relative finite
difference step size to be used by the reliability method when calculating
gradients. The default value is 0.01 (1 percent).
• Min Finite Difference Step Size. This parameter represents the minimum
absolute finite difference step size to be used by the reliability analysis.
• Set the options for your technique as desired in the Technique Options area.
For more information on these options, see “Setting Technique-Specific
Options,” on page 58.
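The evaluation counts and step-size rules above can be expressed directly. These helper functions are illustrative sketches; the minimum-step default value is an assumption, not a documented Isight default.

```python
def evaluation_count(n_random_vars, taylor_order):
    """Design evaluations required by the Taylor-series analysis: n+1 for
    first order, 2n+1 for second order (pure terms only), per the text above."""
    if taylor_order == 1:
        return n_random_vars + 1
    if taylor_order == 2:
        return 2 * n_random_vars + 1
    raise ValueError("taylor_order must be 1 or 2")

def finite_difference_step(x, rel_step=0.01, min_step=1e-6):
    """Relative finite-difference step with an absolute floor, mirroring the
    Finite Difference Step Size options (min_step default is an assumption)."""
    return max(abs(x) * rel_step, min_step)
```

With five random variables, a first-order analysis needs 6 evaluations and a second-order analysis needs 11.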
6. Set the following options in the Execution Options area, which vary based on your
selected analysis type:
parameters after execution will be left at the values associated with the last
(random) Monte Carlo simulation point.
• Convergence Check Interval. This integer value specifies the frequency
at which the convergence of the Monte Carlo response statistics (low order
- mean and standard deviation) are checked. The default setting requires the
convergence of response statistics to be checked after every 25 sample
points. Decreasing this value may result in premature convergence: because
the sample points are random, or in random order/combinations, sufficient
sample points are needed to determine whether the response statistics are
accurately estimated.
• Convergence Tolerance. This value is the Monte Carlo termination
criterion, which specifies the fractional change in response mean and
standard deviation that will allow the simulation to be terminated before
reaching the specified number of simulations. For example, if you want the
simulation process to stop if the change in the response statistics goes
below 5 percent, enter a value of 0.05 in the corresponding text box.
Convergence is checked after each set of simulations defined by the
Convergence Check Interval (by default, after each set of 25 simulations).
The default Convergence Tolerance is 0.001.
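The termination test described above compares the fractional change in the low-order statistics between consecutive check intervals. A minimal sketch of that check (illustrative; Isight's exact convergence logic is not documented here):

```python
def statistics_converged(prev_mean, prev_std, mean, std, tol=0.001):
    """Fractional-change termination test applied after each Convergence
    Check Interval worth of samples (illustrative sketch)."""
    def frac_change(new, old):
        return abs(new - old) / abs(old) if old != 0 else abs(new - old)
    return frac_change(mean, prev_mean) < tol and frac_change(std, prev_std) < tol
```

With the default tolerance of 0.001, a mean moving from 100.0 to 100.05 (0.05% change) passes the test, while a move to 101.0 (1% change) does not.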
Design of Experiments:
7. (optional) Click the Advanced Options... button; then, set the following options,
depending on the analysis type selected:
• Execute subflow only once. If selected, the subflow executes only one
time. This is useful in models that need to turn the driver logic on/off
parametrically. This option is also helpful in debugging the process.
Note: You can map this setting to a parameter. For more information, see
Mapping Options and Attributes to Parameters.
• Allow the following to be turned off by parameters. This option allows
you to de-activate (turn off) the selected types of design parameters
(random variables, responses) using input parameters to the Six Sigma
components. If this option is selected, boolean input parameters will be
created in an Active Random Variables/Active Responses aggregate under
the Mapped Options and Attributes aggregate parameter. The parameters
are selected by default. If any of the parameters are not selected at runtime,
then those design parameters will not be used during execution.
Design of Experiments Analysis Type:
• Execute designs in random order. Select this option if you want the
design points defined by the design matrix to be executed in random order.
Note: For computer simulations, which are deterministic, the order of
execution should have no bearing on the results. This option is provided
primarily for cases in which the design matrix is going to be exported for
use in physical experiments where randomness can alter the results through
the influence of noise (unknown) effects.
• Execute subflow only once. If selected, the subflow executes only one
time. This is useful in models that need to turn the driver logic on/off
parametrically. This option is also helpful in debugging the process.
• Read response values from file (don't execute subflow). Select this
option if you do not want the DOE component to actually execute an Isight
subflow for each design point, but to instead read the response values for
each design point from a specified file. You can locate the file using the
Browse... button. The format of the file must be a first row with parameter
names and each subsequent row containing the values for the parameters
for that design point. This feature is useful for when you have a complete
data set (input and output values) already compiled and simply want to use
DOE as a post-processing tool. This capability is more directly accessible
using the Data File technique and selecting the File contains responses
option. For more information on this technique, see “Setting
Technique-Specific Options,” on page 58.
• Allow the following to be turned off by parameters. This option allows
you to de-activate (turn off) the selected types of design parameters
(factors, responses) using input parameters to the DOE components. If this
option is selected, boolean input parameters will be created in an Active
Factors/Active Responses aggregate under the Mapped Options and
Attributes aggregate parameter. The parameters are selected by default. If
any of the parameters are not selected at runtime, then those design
parameters will not be used during execution.
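The response-file layout described under "Read response values from file" (first row of parameter names, one row of values per design point) can be parsed with a few lines. The delimiter is an assumption — the text does not specify whether columns are whitespace- or comma-separated — so this sketch assumes whitespace.

```python
def read_design_points(text):
    """Parse the response-file layout described above: first row holds the
    parameter names, each later row one design point's values (sketch;
    whitespace-delimited columns are an assumption)."""
    rows = [line.split() for line in text.strip().splitlines()]
    names, value_rows = rows[0], rows[1:]
    return [dict(zip(names, map(float, row))) for row in value_rows]
```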
8. Proceed to one of the following sections, based on the analysis type you are using:
Monte Carlo Sampling: “Using the Random Variables Tab,” on page 180
1. For Reliability or Monte Carlo Sampling analysis type, click the Random
Variables tab (for Design of Experiments, proceed to step 1 on page 185). The
contents of the tab appear.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the parameters you want to use as random variables by clicking the check
box that corresponds to the parameter. To select all parameters, click the Check
button at the bottom of the tab. If no parameters are selected, you will be prompted
to add all parameters as random variables. To deselect all the parameters, click the
Uncheck button.
Once you select a random variable, its name is displayed in the Distribution
Information area, and the rest of the tab is activated.
4. Click the Correlation Matrix button if you want to use random variable
correlation to sample the random variable distributions.
Note: This button is only active if you have selected Monte Carlo Sampling from
the General tab on the Six Sigma editor.
This feature induces the required correlations on the given sample of random
variables while preserving the individual distributions.
The Linear Correlation Matrix for Random Variables dialog box appears.
5. Type the desired values (-1.0 to 1.0) in the white text boxes in the dialog box.
Note: To completely remove a number already present in a text box, press the
BACKSPACE key on your keyboard.
6. Set any of the following options, some of which vary based on your distribution
selection:
Click the Distribution button to set the probability distribution option for the
random variable. Like sampling techniques, random variable distributions are
implemented as plug-ins used by the Six Sigma component. They are
extendable by creating new plug-ins for new distributions.
• Triangular
• Uniform
• Weibull
Note: For more information on these distribution options, see “Understanding
Distribution Types,” on page 519.
Mean. This distribution parameter represents the measure of central tendency
of a random variable. Its default setting is the current value of the parameter.
Standard Deviation. This distribution parameter represents the measure of
dispersion of a random variable. Its default setting is 10% of the mean value. If
the mean value is 0, the default standard deviation is 1.0.
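The Mean and Standard Deviation defaulting rules above amount to a short computation. The function below is an illustrative sketch, not Isight's API:

```python
def default_distribution_params(current_value):
    """Default Mean and Standard Deviation as described above: mean is the
    current parameter value; std is 10% of the mean, or 1.0 for a zero mean."""
    mean = current_value
    std_dev = 0.1 * abs(mean) if mean != 0 else 1.0
    return mean, std_dev
```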
Note: You can use the Edit... button to edit the information for multiple
random variables. For more information, see “Editing Attributes for Multiple
Parameters,” on page 208.
Note: You can map any setting to a parameter. For more information, see
“Mapping Options and Attributes to Parameters,” on page 203.
7. Review the preview graphs on the right side of the tab. These graphs are
automatically updated based on changes made to the selected random variable’s
distribution properties. A legend below the graph explains the color coding. The
graphs display the following information:
Probability Density. This graph shows the actual shape of the selected
distribution with regard to the probability density function.
Cumulative Distribution. This graph shows the actual shape of the selected
distribution with regard to the cumulative distribution function.
8. Set the Update random variable mean values to current parameter values
before execution option. This option, at the bottom of the tab, allows for
automatic updating of mean values of all random variables to the current parameter
values in this component, prior to executing the Six Sigma component. If you want
to automatically change the settings to the current point when the Six Sigma
component is executed, click this check button to activate it. This option is useful
if the Six Sigma component is executed after another component, and parameter
values are taken from the previous component.
1. For Design of Experiments analysis type, click the Factors tab (for Reliability or
Monte Carlo Sampling, proceed to step 1 on page 181). The contents of the tab
appear.
The tab presents a table of available input parameters to select as factors to study,
along with columns for attributes to specify for each factor. The attributes to
specify are determined by the DOE technique being used.
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the parameters you want to define as factors to study by clicking the check
box that corresponds to the parameter. To add all the selected parameters as
factors, click the Check button at the bottom of the tab. If no parameters are
selected, you will be prompted to add all parameters as factors. To deselect all the
parameters, click the Uncheck button. Note that arrays and aggregates themselves
cannot be selected. You must open them and select scalar members.
4. Change any of the default values, which are provided for all attributes for a
selected factor, as desired.
Note: Depending on the technique, a change in one of the columns might
automatically update any other columns that are related.
Although each technique defines the appropriate set of attributes for a factor, the
following set of attributes is common among many of the techniques.
# Levels. This attribute is the number of levels at which you wish to study the
factors. A change to this attribute will automatically re-adjust the Levels for
this factor.
Lower/Upper. These attributes are lower/upper levels for the factor. A change
to one of these attributes will automatically calculate new levels evenly
distributed between the Lower and Upper values.
Baseline. This attribute is the value to be used for converting Levels into
values when the Relation is “%” or “diff”. This attribute is set to the current
parameter value by default.
Alpha. The Lower and Upper levels specify the two levels at which a 2-level
full-factorial study is performed. The center point is also studied. The Alpha
option is a ratio defining two other points (also known as “star points”) at
which to study the given factor. For example, Alpha set to 0.25 indicates the
factor is to be studied at points 1/4 of the way from the baseline to the lower
and upper levels. Alpha set to 1.6 indicates the factor is to be studied 60%
beyond the lower and upper levels. For example, Lower = 5, Upper = 20,
Baseline = 10, and Alpha = 0.25 results in Levels = {5, 8.75, 10, 12.5, 20}.
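The star-point computation above can be reproduced in a few lines. This is an illustrative sketch of the described behavior, not Isight's implementation:

```python
def star_point_levels(lower, upper, baseline, alpha):
    """Levels studied for one factor given the Alpha ratio, reproducing the
    worked example above (illustrative sketch)."""
    star_low = baseline - alpha * (baseline - lower)    # toward the lower level
    star_high = baseline + alpha * (upper - baseline)   # toward the upper level
    return sorted({lower, star_low, baseline, star_high, upper})
```

With Lower = 5, Upper = 20, Baseline = 10, and Alpha = 0.25 this yields {5, 8.75, 10, 12.5, 20}; Alpha values above 1 place the star points beyond the lower and upper levels.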
5. Select the Update factor baselines to current values when executing option at
the bottom of the tab if you want the baseline values updated to the current
parameter values before executing. If Isight previously modified a parameter, this
option allows for automatic updating of the baseline of all factors to the current
parameter values in Isight, prior to executing the DOE technique, which will
re-adjust the values to be studied appropriately. The default is to have this option
deactivated, retaining user-defined settings. This option is useful if the DOE
technique is executed after another element of the workflow that might change the
parameter values (for example, after an Optimization component), so that the DOE
study can be performed around the new design point.
Note: You can also set factor attributes for one or multiple factors using the Edit...
button at the bottom of the editor. For more information, see “Editing Attributes
for Multiple Parameters,” on page 208.
1. For Design of Experiments analysis type, click the Design Matrix tab. The
contents of the tab appear.
This tab displays the design matrix generated for the technique selected on the
General tab and the factors selected and attributes specified on the Factors tab.
Note: If an array was selected as a factor, only a single item will be displayed in
the design matrix for the entire array. At runtime, the individual elements of the
array will be assigned to columns of the design matrix, as necessary.
2. (Optimal Latin Hypercube technique only). Click the Generate button to create
your design matrix. Matrix generation must be triggered manually for this
technique because the process can take a significant amount of time.
The Design Matrix Generation Status dialog box appears, showing you how the
matrix generation is progressing.
Note: If you do not manually generate the design matrix in the component editor, it
will automatically be generated as part of the execution.
4. (Orthogonal Arrays technique only) Click the Options... button to display any
options available for generating the design matrix. When using Orthogonal Arrays,
the factors are automatically assigned to columns in a way that minimizes
confounding with interaction effects. Preference is given to factors in the order
they were defined.
5. Select how you want the design matrix displayed using the Show button. You can
display the design matrix as values (the actual values to be set at execution) or
levels (the level number as generated by the technique algorithm).
6. Perform any of the following actions, as desired, to manipulate the design matrix:
Deactivate a single design point. Click any row number to deactivate that
design point so that it will not be executed (the icon appears). You can
also click the button at the top of the design matrix, or right-click the
design matrix and select the Skip selected design point(s) option from the
menu that appears.
Deactivate all design points. To deactivate all design points in the design
matrix, highlight all of the design points and click the button. You can
also right-click the design matrix and select Skip all design points from the
menu that appears.
Note: Deactivating design points is useful when it is known that a specific set
of values in the design matrix represents a design that cannot be evaluated for
whatever reason.
Activate a design point. To activate a design point that has been set to be
skipped, click the icon that represents the design point. The row number
reappears. You can also click the button at the top of the design
matrix, or right-click the design point and select Activate selected design
point(s) from the menu that appears.
Activate all design points. To activate all of the design points in the design
matrix, highlight all of the design points and click the button. You can
also right-click the design matrix and select Activate all design points from the
menu that appears.
Copy the design matrix. This option copies the design matrix to the clipboard
so that it can be pasted elsewhere (for example, in a text file). This option can
2. Select the output parameters you want to use as responses by clicking the
corresponding check boxes. To select all parameters, click the Check button at the
bottom of the tab. If no parameters are selected, you will be prompted to add all
parameters as responses. To deselect all the parameters, click the Uncheck button.
Lower Limit. If a response value is specified in the Lower Limit column for a
response, the probability of response values greater than that lower limit will
be calculated and reported after all simulations are complete.
Upper Limit. If a response value is specified in the Upper Limit column for a
response, the probability of response values less than that upper limit will be
calculated and reported after all simulations are complete.
Note: If both Lower and Upper limits are specified, the Total probability between
the limits is also reported. This response probability can be used to characterize the
reliability of that response with respect to remaining between the limits. The total
probability of failure with respect to the specified limits can be determined by
subtracting the total probability from one.
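For a concrete sense of how these reported probabilities relate, here is a minimal sketch; the sample values and limits are invented, and Isight computes these quantities from the actual simulation results:

```python
# Illustration of how the reported probabilities relate for one response.
# The sample values and limits are invented; Isight computes these
# quantities from the actual simulation results.
samples = [4.8, 5.1, 5.5, 6.0, 6.2, 6.9, 7.4, 8.1, 8.8, 9.5]
lower, upper = 5.0, 9.0

n = len(samples)
p_above_lower = sum(y > lower for y in samples) / n      # reported for Lower Limit
p_below_upper = sum(y < upper for y in samples) / n      # reported for Upper Limit
p_between = sum(lower < y < upper for y in samples) / n  # Total probability
p_failure = 1.0 - p_between                              # probability of violating a limit

print(p_above_lower, p_below_upper, p_between)  # 0.9 0.9 0.8
```

Here one sample falls below the lower limit and one above the upper limit, so the total probability between the limits is 0.8 and the probability of failure is 0.2.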
Note: When you move your mouse pointer over or click a specific column, the
graph that represents that value is outlined on the right side of the tab. The graph
not only gives you a visual aid, but provides text information about the individual
settings.
Note: You can also set response options using the Edit... button at the bottom of
the editor. For more information, see “Editing Attributes for Multiple Parameters,”
on page 208.
4. (optional) If desired, map any of these response attributes to parameters. For more
information, see “Mapping Options and Attributes to Parameters,” on page 203.
If you are using Six Sigma Analysis, proceed to “Using the Options Tab” on
this page.
If you are using Six Sigma Optimization, proceed to “Using the Optimization
Tab,” on page 194.
The calculated results that can be included in a Six Sigma Results output aggregate
parameter are displayed on this tab.
2. Select the Six Sigma outputs to be included in the Six Sigma Results output
aggregate parameter if desired. Selecting results to be included in this parameter is
useful for mapping from these values, or for selection as output in a process
component placed above the Six Sigma component.
For example, the Six Sigma component can be placed under the Optimization
component to formulate a process to improve the quality level (robust and/or
reliability optimization), as shown below.
3. (optional) If desired, map any of these objective attributes to parameters. For more
information, see “Mapping Options and Attributes to Parameters,” on page 203.
4. Click OK to close the editor and save your changes. Click Apply to save your
changes, but keep the editor open.
2. Verify that the General sub-tab is selected; then, select the technique you want to
use from the Optimization Technique drop-down list. The technique’s tuning
parameters appear directly below the Technique list, and information about the
technique appears in the Technique Description area on the right side of the editor.
4. Set the tuning parameter values for your technique in the Optimization Technique
Options area.
Note: You can map tuning parameter settings to parameters. For more information,
see “Mapping Options and Attributes to Parameters,” on page 203.
For more information on these options, see one of the following sections:
5. Set the following execution options, if available, in the Execution Options area:
Execute in parallel. This option controls how the design points will be
executed during optimization. (This option is not available for all techniques.)
If selected, design points will be executed in parallel, in small batches. The
size of the batch depends on the number of selected Design Variables for
numerical techniques or the size of the population for exploratory techniques.
Note: You must define all upper and lower bounds (or allowed values) for all
design variables when this option is selected. If some design variables do not
have bounds, you will not be able to use this option.
6. (optional) Click the Advanced Options... button; then, set the following options:
Execute subflow only once. If selected, the subflow executes one time, and
constraints, objectives, and penalty values are calculated. This is useful in
models that need to turn the driver logic on/off parametrically. This option is
also helpful in debugging the process.
Note: You can map this setting to a parameter. For more information, see
“Editing Attributes for Multiple Parameters,” on page 208.
Allow the following items to be turned off by parameter. This option allows
you to disable some design parameters at runtime by using parameters. When
this option is selected, a set of special “switch” variables is created. By
changing the values of these special variables, you can disable their
corresponding design parameters.
The list of parameters on this tab includes all parameters of modes Input and
In/Out from the Six Sigma component, and also parameters of the same modes
from the subflow components, if they are not mapped to any parameters of the Six
Sigma component.
8. Perform any of the following actions, which vary based on your model design:
You can also select all of the listed variables (including array elements) using
the Check button at the bottom of the tab.
Specify the lower bound for the variables in the corresponding column. This
setting is required if you are using the automatic scaling component
preference. For more information, see “Setting Optimization Component
Preferences,” on page 152.
Note: This column is not available for Multi-Island Genetic Algorithm, since
the initial design point is not used by this algorithm.
Warning: The initial values of the variables will be overridden at execution
time if the Six Sigma component is a child component of another
process component (for example, a Task component). Therefore, to change the
initial values of variables, you must change the corresponding parameter
values in the parent (Task) component.
Warning: Changing initial values of variables essentially changes their values
in the main Design Gateway window. These changes are immediate and
cannot be reversed by clicking Cancel on the Six Sigma component editor.
Specify the upper bound for the variables in the corresponding column. This
setting is required if you are using the automatic scaling component
preference. For more information, see “Setting Optimization Component
Preferences,” on page 152.
Set the scale factor for the variable in the corresponding column. Scale factors
are used to bring variable values to the same order of magnitude to improve the
efficiency of the optimizers.
Note: This column does not appear if you use the automatic scaling component
preference. For more information, see “Setting Optimization Component
Preferences,” on page 152.
(Multi-Island Genetic Algorithm technique only) Set the value in the Gene
Size column. This value controls the number of bits N in all genes used for
encoding the value of each variable. Every bit of the gene can change its value
between 0 and 1. The total number of possible combinations in every gene is
then 2^N. This number of combinations determines the minimum change in
the value of a design variable during genetic operations: take the
allowed range of values for the design variable and divide it by the total
number of combinations. To increase the minimum change in design variable
values (that is, to decrease the number of possible bit combinations when the
allowed range of design variable values is small), decrease the gene size.
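The gene-size arithmetic above can be sketched as follows (example numbers only; the gene size and variable range are invented):

```python
# The gene-size arithmetic described above, with invented numbers.
gene_size = 8                                 # N bits per gene
combinations = 2 ** gene_size                 # 2^N possible bit patterns

lower, upper = 0.0, 10.0                      # allowed range of a design variable
min_change = (upper - lower) / combinations   # smallest step the GA can make

# A smaller gene gives fewer combinations and therefore coarser steps:
coarse = (upper - lower) / 2 ** 4

print(combinations, min_change, coarse)  # 256 0.0390625 0.625
```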
Note: You can also set variable options using the Edit... button at the bottom of the
editor. For more information, see “Editing Attributes for Multiple Parameters,” on
page 208.
9. (optional) If desired, map any of these variable attributes to parameters. For more
information, see “Mapping Options and Attributes to Parameters,” on page 203.
10. Click the Constraints tab. The constraints for your problem are displayed.
The list of parameters on this tab includes all parameters of mode Output from the
Six Sigma component, and also parameters of mode Output from the subflow
components, if they are not mapped to any parameters in the Six Sigma
component. When any subflow parameter is selected as a constraint, a
corresponding parameter is created in the Six Sigma component.
11. Perform any of the following options, which vary based on your model design:
Set the lower and upper bound of a constraint in the corresponding columns.
Set the constraint’s target in the corresponding column.
Set the constraint’s scale and weight factors in the corresponding columns.
Scale factors are used to bring constraint values to the same order of
magnitude to improve the efficiency of the optimizers. Weight factors are used
to change the importance of various constraints.
Note: You can also set constraint options using the Edit... button at the bottom of
the editor. For more information, see “Editing Attributes for Multiple Parameters,”
on page 208.
12. (optional) If desired, map any of these constraint attributes to parameters. For more
information, see “Mapping Options and Attributes to Parameters,” on page 203.
where LB is the lower bound value, UB is the upper bound value, T is the target
value, Parm is the parameter value, W is the weight factor, and S is the scale factor.
13. Click the Objectives tab. The contents of the tab appear.
The list of parameters on this tab includes all parameters of mode Output from the
Six Sigma component, and also parameters of the same modes from the subflow
components, if they are not mapped to any parameters in the Six Sigma
component. Also, design variables (selected on the Variables tab) are included in
the list of parameters.
14. Perform any of the following actions, which vary based on your model design:
Set the objective’s scale and weight factors in the corresponding columns.
Note: You can also set objective options using the Edit... button at the bottom of
the editor. For more information, see “Editing Attributes for Multiple Parameters,”
on page 208.
15. (optional) If desired, map any of these objective attributes to parameters. For more
information, see “Mapping Options and Attributes to Parameters,” on this page.
16. Click OK to close the editor and save your changes. Click Apply to save your
changes, but keep the editor open.
Note: When Six Sigma Optimization is chosen and the OK or Apply button is
clicked for the first time, the model is modified by embedding the Six Sigma
component under an Optimization component. All optimization criteria will be
copied to the Optimization component and all selected parameters will be mapped
up. The optimization criteria can still be changed in the Six Sigma editor.
3. Click the value in the text box for the technique option whose value you want to
map. The value is highlighted.
4. Right-click the value; then, select the Map this value to a parameter option. The
Select a Parameter dialog box appears prompting you to enter the name of a
parameter to which you want to map.
5. Type a name for the parameter (by default, the technique option name is used);
then, click OK. Once mapped, the icon appears next to the technique option's
value. You can click this icon to view or change the parameter name. You can also
right-click on the setting again to remove the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
6. Click Apply to save your changes; then, proceed to step 18.
7. Right-click any of the execution options in the bottom left corner of the editor to
map that execution option to a parameter. The available execution options depend
on the Analysis Type selected. Click the Advanced Options... button to map
additional parameters.
8. Select the Map this value to a parameter option. The Select a Parameter dialog
box appears prompting you to enter the name of a parameter to which you want to
map.
9. Type a name for the parameter (by default, the execution option name is used);
then, click OK. Once mapped, the icon appears next to the execution option.
You can click this button to view or change the parameter name. You can also
right-click on the setting again to remove the mapping.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
10. Click Apply to save your changes; then, proceed to step 18.
13. Type a name for the parameter (by default, the Standard Deviation option name is
used); then, click OK. Once mapped, the icon appears next to the tuning
parameter's value. You can click this icon to view or change the parameter name.
You can also right-click on the setting again to remove the parameter name.
14. Click Apply to save your changes; then, proceed to step 18.
If you want to apply the mapping to only the selected parameter, select the
Map <Attribute_Name> to a parameter for selected option.
If you want to apply the mapping to all the parameters, select the Map
<Attribute_Name> to a parameter for all option.
The icon appears next to the tuning parameter's value. You can click this icon
to view or change the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a member(s) of a special aggregate parameter called Mapped
Attributes and Options.
Note: You can remove these mappings at any time. Simply right-click the
appropriate parameter attribute; then, select the Remove mapping of
<Attribute_Name> to a parameter for selected or Remove mapping of
<Attribute_Name> to a parameter for all option, depending on how you
originally mapped the attribute.
18. Click OK to close the component editor and return to the Design Gateway.
20. Locate the new aggregate parameter called Mapped Attributes and Options;
then, click the icon to expand the parameter. The new mappings appear.
To select all parameters, click the button (Check button on the Responses
tab) at the bottom of the tab. To deselect all the parameters, click the
button (Uncheck button on the Responses tab).
2. Click the button (Edit... button on the Responses tab) at the bottom of the
component editor. The Edit dialog box appears. In the following example, a
parameter on the Random Variables tab is being edited.
3. Update the listed values, as desired. Only options with defined values appear on
this dialog box.
4. Click OK. The values are updated for all the parameters that were selected.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select Six
Sigma.
The following preference options are available with the Six Sigma component:
Default Six Sigma Analysis Type. Determines which analysis type will
automatically appear every time the component editor for a newly inserted
component is first opened.
3. Click OK to save your changes and close the Preferences dialog box.
The API for the Six Sigma component consists of the following method:
get("SixSigmaPlan"). This returns a SixSigmaPlan object from the
com.engineous.sdk.designdriver.plan package. From that entry point you have access
to numerous methods for setting/configuring the sampling technique, setting the run
mode, adding/editing random variables and responses, and configuring execution
options. For more information on configuring the Six Sigma component, refer to the
javadocs for the SixSigmaPlan interface in your <Isight_install_directory>/javadocs
directory.
“Opening the Editor and Using the P-Diagram Tab,” on this page
The Component Editor dialog box appears with the P-Diagram selected.
The signal factor and number of levels. The signal factor represents a
parameter set by the user when the product/process is being used (for example,
thermostat setting, accelerator pedal position, etc.). A signal factor is defined
only for dynamic systems. The number of levels for the signal factor is
displayed when a signal factor is selected.
The responses. When the responses are selected and defined on the Responses
tab of this editor, this information is added to the response block to complete
the P-Diagram.
2. Set the system type by selecting the Static System, Dynamic System, or
Dynamic-Standardized System radio button. Changing the system type will
determine whether or not the signal tab is displayed (dynamic and
dynamic-standardized systems only) and will change the response options on the
Responses tab. The system type also determines the post-processing approach.
3. Set the following option in the Execution Options area of the P-Diagram tab:
Execute design points in parallel. If selected, all of the design points defined
by the design matrices will be submitted for execution simultaneously. You
may need to clear (deselect) this option if components within the subflow have
license limitations or other requirements that mandate only one execution at a
time.
Note: You can map this setting to a parameter. For more information, see
“Mapping Options and Attributes to Parameters,” on page 228.
4. (optional) Click the Advanced Options... button. The Advanced Execution
Options dialog box appears.
Execute subflow only once. If selected, the subflow executes only one time.
This is useful in models that need to turn the driver logic on/off parametrically.
This option is also helpful in debugging the process.
Note: You can map this setting to a parameter. For more information, see
“Mapping Options and Attributes to Parameters,” on page 228.
Allow the following to be turned off by parameters. This option allows you
to deactivate (turn off) the selected types of design parameters (factors,
responses) using input parameters to the Taguchi components. If this option is
selected, boolean input parameters will be created in an Active Factors/Active
Responses aggregate under the Mapped Options and Attributes aggregate
parameter. The parameters are selected by default. If any of the parameters are
not selected at runtime, then those design parameters will not be used during
execution.
1. Click the tab for the type of factor you want to set: Control, Noise, or Signal.
2. (Control and Noise tabs) Click the Technique tab; then, select the technique
using the Technique button.
The technique’s options appear in the Technique Options area, and information
about the technique appears in the Technique Description area on the right side of
the tab.
Note: You can set default behavior for this option using the component preferences
as described in “Setting Taguchi Robust Design Component Preferences,” on
page 234.
The following Design of Experiments techniques are available in Isight (although
all techniques listed below may not be included in your installation):
Central Composite Design
Data File
Full Factorial
Latin Hypercube
Optimal Latin Hypercube
Orthogonal Arrays
Parameter Study
Note: DOE Techniques can be added to the Taguchi Robust Design component by
publishing new “DOE Technique” plug-ins to the Library. For more information,
refer to the Isight Development Guide.
3. (Control and Noise Factors only) Set the Technique Options. For more
information on using the options available on the Technique sub-tab for Taguchi
Robust Design Factor tab, see “Setting Technique-Specific Options,” on page 58.
Note: You can map these settings to a parameter. For more information, see
“Mapping Options and Attributes to Parameters,” on page 228.
4. Click Apply.
5. Click the Factor(s) tab. The contents of the tab appear.
The tab presents a table of available input parameters to select as factors to study,
along with columns for attributes to specify for each factor. The attributes to
specify are determined by the technique being used.
6. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
7. Select the parameters you want to use as factors by clicking the check box that
corresponds to the parameter. To select all parameters, click the Check button at
the bottom of the tab. If no parameters are selected, you will be prompted to add all
parameters as factors. To deselect all the parameters, click the Uncheck button.
Note: Since there can be only one signal factor, there are no Select All/Deselect
All buttons, and the factor check boxes are replaced with single select radio
buttons.
8. Change any of the default values, which are provided for all attributes for a
selected factor, as desired.
Lower/Upper. These attributes are lower/upper levels for the factor. A change
to one of these attributes will automatically calculate new levels evenly
distributed between the Lower and Upper values.
Baseline. This attribute is the value to be used for converting Levels into
values when the Relation is “%” or “diff”. This attribute is set to the current
parameter value by default.
Alpha. The Lower and Upper levels specify the two levels at which a 2-level
full-factorial study is performed. The center point is also studied. The Alpha
option is a ratio defining two other points (also known as “star points”) at
which to study the given factor. For example, Alpha set to 0.25 indicates the
factor is to be studied at points 1/4 of the way from the baseline to the lower
and upper levels. Alpha set to 1.6 indicates the factor is to be studied 60%
beyond the lower and upper levels. For example, Lower = 5, Upper = 20,
Baseline = 10, and Alpha = 0.25 results in Levels = {5, 8.75, 10, 12.5, 20}.
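The worked Alpha example above can be reproduced with this arithmetic: star points lie Alpha of the way from the baseline toward the lower and upper levels, and the lower and upper levels themselves remain part of the studied set.

```python
# The Alpha example above, in code: star points lie Alpha of the way from
# the baseline toward the lower and upper levels.
lower, upper, baseline, alpha = 5.0, 20.0, 10.0, 0.25

star_low = baseline - alpha * (baseline - lower)   # 10 - 0.25 * 5  = 8.75
star_high = baseline + alpha * (upper - baseline)  # 10 + 0.25 * 10 = 12.5

levels = sorted({lower, star_low, baseline, star_high, upper})
print(levels)  # [5.0, 8.75, 10.0, 12.5, 20.0]
```

An Alpha greater than 1 pushes the star points outside the Lower/Upper range, which is how the 1.6 case studies points 60% beyond the levels.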
If the Noise Matrix option is selected, noise factors are defined exactly as
described in step 8; the factor attributes available are based on the noise
technique selected.
If the 3 Noise Condition option is selected, the Technique and Matrix tabs under
the Noise tab are removed; no technique is selected for the noise factors and no
matrix is generated. In this case, three noise experiments are fully defined in the
noise factors table based on a specified negative noise condition, a reference noise
condition, and a positive noise condition. The noise condition refers to the overall
combined effect of the noise factors at each condition on the response values. The
negative and positive side noise conditions can be defined relative to the reference
condition using the Relation attribute, by setting this attribute to % or diff.
The Noise tab with the 3 Noise Condition setting selected is shown below.
10. Select the Update factor baselines to current values when executing option at
the bottom of the tab if you want the baseline values updated to the current
parameter values before executing. If Isight previously modified a parameter, this
option allows for automatic updating of the baseline of all factors to the current
parameter values in Isight, prior to executing the DOE technique, which will
re-adjust the values to be studied appropriately. The default is to have this option
deactivated, retaining user-defined settings. This option is useful if the DOE
technique is executed after another element of the workflow that might change the
parameter values (for example, after an Optimization component), so that the DOE
study can be performed around the new design point.
Note: You can also set factor attributes for one or multiple factors using the Edit...
button at the bottom of the editor. For more information, see “Editing Attributes
for Multiple Parameters,” on page 233.
Note: You can map these settings to parameters. For more information, see
“Mapping Options and Attributes to Parameters,” on page 228.
12. Click the Matrix tab. The contents of the tab appear.
This tab displays the design matrix generated for the technique selected and the
factors selected and attributes specified on the Factors tab.
Note: Since there can be only one signal factor, the signal factor matrix has only
one column, with each level of the signal factor listed exactly once.
Note: If an array was selected as a control or noise factor, only a single item will be
displayed in the design matrix for the entire array. At runtime, the individual
elements of the array will be assigned to columns of the design matrix, as
necessary.
13. (Optimal Latin Hypercube technique only) Click the Generate button to create
your design matrix. The ability to generate the matrix manually is provided
because this process can take a significant amount of time.
The Design Matrix Generation Status dialog box appears, showing you how the
matrix generation is progressing.
Note: If you do not manually generate the design matrix in the component editor, it
will automatically be generated as part of the execution.
14. (Optimal Latin Hypercube technique only) Examine the information displayed on
the Status dialog box; then, click the Close button once the matrix has been
created. If you click the Close button before the optimization of the design matrix
is complete, the resulting design matrix will be the last one generated during the
optimization process.
15. (Orthogonal Arrays technique only) Click the Options... button to display any
options available for generating the design matrix. When using Orthogonal Arrays,
the factors are automatically assigned to columns in a way that minimizes
confounding with interaction effects. Preference is given to factors in the order
they were defined.
16. Select how you want the design matrix displayed using the Show button. You can
display the design matrix as values (the actual values to be set at execution) or
levels (the level number as generated by the technique algorithm).
17. Perform any of the following actions, as desired, to manipulate the design matrix:
Deactivate a single design point. Click any row number to deactivate that
design point so that it will not be executed (the icon appears). You can
also click the button at the top of the design matrix, or right-click the
design matrix and select the Skip selected design point(s) option from the
menu that appears.
Deactivate all design points. To deactivate all design points in the design
matrix, highlight all of the design points and click the button. You can
also right-click the design matrix and select Skip all design points from the
menu that appears.
Note: Deactivating design points is useful when it is known that a specific set
of values in the design matrix represents a design that cannot be evaluated for
whatever reason.
Activate a design point. To activate a design point that has been set to be
skipped, click the icon that represents the design point. The row number
reappears. You can also click the button at the top of the design
matrix, or right-click the design point and select Activate selected design
point(s) from the menu that appears.
Activate all design points. To activate all of the design points in the design
matrix, highlight all of the design points and click the button. You can
also right-click the design matrix and select Activate all design points from
the menu that appears.
Copy the design matrix. This option copies the design matrix to the clipboard
so that it can be pasted elsewhere (for example, in a text file). This option can
19. (optional) Repeat step 1 through step 18 for any other factor types you want to set.
20. Click the P-Diagram tab. Note that the factors you selected now appear in the
P-Diagram.
Defining Responses
To define responses for your plan:
2. (optional) Right-click in the table, and then select Group Parameters. This action
sorts the parameters in the table by component. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You may also choose
to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
3. Select the parameters you want to use as responses by clicking the check box that
corresponds to the parameter. To select all parameters, click the Check button at
the bottom of the tab. If no parameters are selected, you will be prompted to add all
parameters as responses. To deselect all the parameters, click the Uncheck button.
When an output is selected for use, the remaining columns automatically display
any related information.
Note: You can map these settings to parameters. For more information, see
“Mapping Options and Attributes to Parameters,” on page 228.
(Static and Dynamic Systems only) Type. This column defines the response
type, which, in turn, determines what formulation is used for the S/N Ratio and
Loss Function calculations. To change the response type, click the current
setting in the cell.
The following options are available:
• Nominal is Best. For a static system, values as close as possible to a
specified target are preferred. For a dynamic system, the slope of the linear
relationship between the response and the signal levels should be as close
as possible to the specified target slope.
• Lower is Better. For a static system, values as low as possible are preferred.
For a dynamic system, the slope of the linear relationship between the
response and the signal levels should be as low as possible.
• Higher is Better. For a static system, values as high as possible are
preferred. For a dynamic system, the slope of the linear relationship between
the response and the signal levels should be as high as possible.
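The standard Taguchi S/N ratio formulations for static systems are sketched below; the exact formulation Isight uses for each response type may differ in detail.

```python
import math

# Standard Taguchi S/N ratio formulations for a static system (a sketch;
# the exact formulation Isight uses may differ in detail).
def sn_lower_is_better(ys):
    # Smaller-the-better: penalizes large response magnitudes
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_higher_is_better(ys):
    # Larger-the-better: penalizes small response magnitudes
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / len(ys))

def sn_nominal_is_best(ys):
    # Nominal-the-best: rewards low variability relative to the mean
    n = len(ys)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return 10.0 * math.log10(mean * mean / var)

ys = [9.8, 10.1, 10.0, 9.9, 10.2]
print(round(sn_nominal_is_best(ys), 1))  # about 36: tight scatter around the mean
```

In all three cases a larger S/N ratio indicates a more robust design.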
(Dynamic System only) Dynamic Quality Characteristic. For a dynamic
system, the following quality characteristics determine how the dynamic
response is analyzed for linearity with respect to the signal factor:
• Zero Point Proportional. The relationship between the signal factor and
response passes through zero (zero input leads to zero output). One good
example of this relationship is a weighing system to be designed and
calibrated. If no weight is on the scale, the measurement should read zero.
• Reference Point Proportional. The relationship between the signal factor
and the response passes through a reference point. This type is used when
the relationship does not pass through zero or when the signal values are far
from zero. This type can be used if the relationship can be calibrated to a
known standard, as with oven controls where one setting is calibrated to a
specific temperature.
• General Linear Equation. This type is used when the relationship
between the signal factor and response is not known to pass through zero
and no reference point is known, which is the most general case. The other
types should be used whenever possible. This type can be useful, however,
when signal levels are close together and far from zero, with no known
reference point. With this type, a linear fit to the data will be used to assess
the signal factor / response relationship.
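The three quality characteristics correspond to three ways of fitting a line to the signal/response data. A minimal sketch (illustrative only; Isight's internal post-processing may differ):

```python
# Illustrative line fits for the three quality characteristics (a sketch;
# Isight's internal post-processing may differ). M = signal levels,
# y = observed responses.
def zero_point_proportional(M, y):
    # y = beta * M: least-squares slope of a line forced through the origin
    return sum(m * v for m, v in zip(M, y)) / sum(m * m for m in M)

def reference_point_proportional(M, y, M0, y0):
    # y - y0 = beta * (M - M0): line forced through the reference point (M0, y0)
    return zero_point_proportional([m - M0 for m in M], [v - y0 for v in y])

def general_linear(M, y):
    # y = a + beta * M: ordinary least-squares fit, intercept unconstrained
    n = len(M)
    mbar, ybar = sum(M) / n, sum(y) / n
    beta = (sum((m - mbar) * (v - ybar) for m, v in zip(M, y))
            / sum((m - mbar) ** 2 for m in M))
    return ybar - beta * mbar, beta

M, y = [1.0, 2.0, 3.0], [2.1, 3.9, 6.0]
print(round(zero_point_proportional(M, y), 3))  # slope close to 2
```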
(Static and Dynamic Systems only) Target. This column displays information
pertaining to the desired response target value. The target column is used only
when working with a Nominal is Best response type. For a dynamic system, this
value defines the target slope of the linear relationship between the response
and the signal levels. To enter a target value, simply click in the appropriate
cell in the target column, and make any necessary changes.
(Static System only) Loss Constant. This column displays the loss constant,
which is a scale factor used to convert the loss function to a desired
measurement or unit (for example, $/part). If no existing data/analysis is
available to set this constant, the default of 1.0 can be used to assess relative
quality loss between design alternatives (to compare designs defined by the
control array). If domain information is available to set this constant, the value
can be changed by clicking on it, and making any edits at the cursor that
appears.
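The loss function scaled by this constant is, in the standard Taguchi formulation for a Nominal is Best response, the quadratic loss; a sketch with invented values:

```python
# Standard Taguchi quadratic loss for a Nominal is Best response (a sketch
# with invented values). The loss constant k scales squared deviation from
# the target into a cost measure such as $/part.
def quality_loss(y, target, k=1.0):
    return k * (y - target) ** 2

# With the default k = 1.0, losses are comparable only between alternatives:
relative_loss = quality_loss(10.4, 10.0)            # about 0.16, relative units
# With a calibrated k (here a hypothetical 2 $/unit^2), loss is a real cost:
dollars_per_part = quality_loss(10.4, 10.0, k=2.0)  # about 0.32 $/part
print(relative_loss, dollars_per_part)
```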
Note: You can also set variable options using the Edit... button at the bottom of the
editor. For more information, see “Editing Attributes for Multiple Parameters,” on
page 233.
4. Click Apply.
1. Click the Post Processing tab. The contents of the tab appear.
Best and worst factor levels are estimated based on main effect calculations. The
combination of best or worst levels may not be a combination that was included in
the control experiment and hence no metric (e.g., S/N ratio) value is available. The
metric value is automatically estimated, again based on main effect information.
Choosing one or both of these options will allow the estimated response metric
values to be verified with actual executions of the noise array and signal levels.
2. Select a response; then, in the Confirm drop-down list, choose one of the following:
Best Levels
Worst Levels
Best and Worst Levels
Note: You can also set options using the Edit... button at the bottom of the editor.
For more information, see “Editing Attributes for Multiple Parameters,” on
page 233.
S/N Ratio
(Static or Dynamic only) Sensitivity
(Static only) Mean
(Static only) Variance
(Static only) Loss Function
(Dynamic-Standardized only) Beta 1
(Dynamic-Standardized only) Beta 2
(Dynamic-Standardized only) Beta 3
Generic formulation
Taguchi formulation
This option determines how the S/N Ratio is calculated. For more information
regarding the standardized S/N Ratio calculation, see “Taguchi
Dynamic-Standardized Analysis,” on page 565.
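For the static Nominal is Best case, the textbook S/N ratio is 10*log10(mean^2/variance). The sketch below uses that standard formulation as an assumption; Isight's generic and Taguchi formulations may differ in detail, as the referenced analysis section explains.

```java
// Standard static "Nominal is Best" S/N ratio: SN = 10*log10(mean^2 / variance).
// This is the textbook Taguchi formulation, shown as an assumption; Isight's
// exact generic vs. Taguchi formulations are documented separately.
public class SignalToNoise {
    static double snNominalIsBest(double[] y) {
        double mean = 0.0;
        for (double v : y) mean += v;
        mean /= y.length;
        double var = 0.0;                 // sample variance (n - 1 divisor)
        for (double v : y) var += (v - mean) * (v - mean);
        var /= (y.length - 1);
        return 10.0 * Math.log10(mean * mean / var);
    }

    public static void main(String[] args) {
        // Responses of one control-array row across the noise experiments:
        double[] y = {9.9, 10.1, 10.0};
        System.out.println("S/N ratio (dB): " + snNominalIsBest(y));
    }
}
```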
5. Click the Run baseline design point (all noise/signal experiments will be
executed) check box to specify an evaluation of robustness at baseline design, if
necessary. After execution of the designed experiments, Isight executes the
noise/signal combinations at the baseline design, and performs Taguchi post
processing on that point to calculate the values of the Taguchi metrics for the
baseline design. These values are then used for comparison with metric values at
identified best/worst levels to calculate benefit/loss achieved through the
performed experimentation and post processing.
6. Click OK to close the editor and save your changes. Click Apply to save your
changes, but keep the editor open.
3. Right-click the Execute design points in parallel option in the upper left corner of
the editor to map that execution option to a parameter. Click the Advanced
Options... button to map the Execute subflow only once option.
4. Select the Map this value to a parameter option. The Select a Parameter dialog
box appears prompting you to enter the name of a parameter to which you want to
map.
5. Type a name for the parameter (by default, the execution option name is used);
then, click OK. Once mapped, the icon appears next to the execution option.
You can click this button to view or change the parameter name. You can also
right-click on the setting again to remove the mapping.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
7. Click the Control or Noise tab; then, click the Technique tab.
8. Click the value in the text box for the technique option whose value you want to
map. The value is highlighted.
9. Right-click the value; then, select Map this value to a parameter option. The
Select a Parameter dialog box appears prompting you to enter the name of a
parameter to which you want to map.
10. Type a name for the parameter (by default, the technique option name is used);
then, click OK. Once mapped, the icon appears next to the technique option's
value. You can click this icon to view or change the parameter name. You can also
right-click on the setting again to remove the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
11. Click Apply to save your changes; then, proceed to step 19.
12. Click the Control, Noise, or Signal tab; then, click the Factor(s) tab.
If you want to apply the mapping to only the selected parameter, select the
Map &lt;Attribute_Name&gt; to a parameter for selected option.
If you want to apply the mapping to all the parameters, select the Map
<Attribute_Name> to a parameter for all option.
The icon appears next to the tuning parameter's value. You can click this icon
to view or change the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a special aggregate parameter.
Note: You can remove these mappings at any time. Simply right-click the
appropriate parameter attribute; then, select the Remove mapping of
<Attribute_Name> to a parameter for selected or Remove mapping of
<Attribute_Name> to a parameter for all option, depending on how you
originally mapped the attribute.
15. Click Apply to save your changes; then, proceed to step 19.
If you want to apply the mapping to only the selected parameter, select the
Map &lt;Attribute_Name&gt; to a parameter for selected option.
If you want to apply the mapping to all the parameters, select the Map
<Attribute_Name> to a parameter for all option.
The icon appears next to the tuning parameter's value. You can click this icon
to view or change the parameter name.
The parameter(s) you have just created will appear in the Design Gateway
Parameters tab as a member(s) of a special aggregate parameter called Mapped
Attributes and Options.
Note: You can remove these mappings at any time. Simply right-click the
appropriate parameter attribute; then, select the Remove mapping of
<Attribute_Name> to a parameter for selected or Remove mapping of
<Attribute_Name> to a parameter for all option, depending on how you
originally mapped the attribute.
19. Click OK to close the component editor and return to the Design Gateway.
21. Locate the new aggregate parameter called Mapped Attributes and Options;
then, click the icon to expand the parameter. The new mappings appear.
To select all parameters, click the Check button at the bottom of the tab. To
deselect all the parameters, click the Uncheck button.
The Edit dialog box appears. In the following example, a parameter on the
Responses tab is being edited.
3. Update the listed values, as desired. Only options with defined values appear on
this dialog box.
4. Click OK. The values are updated for all the parameters that were selected.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select
Taguchi RD. The following preference options are available:
3. Click OK to save your changes and close the Preferences dialog box.
The API for the Taguchi component consists of the following method:
get("TaguchiPlan"). This returns a TaguchiPlan object from the
com.engineous.sdk.designdriver.plan package. From that entry point you have access
to numerous methods for setting/configuring the DOE control and noise plans,
adding/editing control and noise factors and responses, and configuring execution
options. For more information on configuring the Taguchi component, refer to the
javadocs for the TaguchiPlan interface in your <Isight_install_directory>/javadocs
directory.
For more information on using the Task Plan, refer to the Isight User’s Guide.
This chapter describes the usage of the Engineous-supplied activity components that
are included with a standard Isight installation. It is divided into the following topics:
Introduction
Components are an essential part of model construction. Numerous components have
been developed by Engineous and are included with Isight. These components are
accessed using the Design Gateway, but each has a different editor and options that can
be configured.
All Isight components can be divided into two types: process and activity.
For detailed information on each of these techniques, as well as their parameters, see
the remaining parts in this section.
“Selecting the Technique and Specifying the Data File” on this page
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide. The Component Editor dialog box appears.
The editor is divided into six tabs: Technique, Data File, Parameters, Technique
Options, Error Analysis Options, and View Data.
2. Verify that the Technique tab is selected; then, choose the technique you want to
use from the Approximation technique drop-down list. Information about the
technique appears in the Technique Description area.
Note: This component has default preferences which can be set based on your
needs. For more information, refer to the Isight User’s Guide.
3. (optional) Review the information in the Technique Description area.
4. Click the Data File tab. The contents of the tab appear.
The Data File type is Sampling Points by default. You may also use a previously
saved coefficients data file from another approximation with the same number of
input and output parameters. Proceed to one of the following sections:
5. Type the name and path of the data file directly into the corresponding text box, or
click the Browse... button and navigate to the file with data for approximation
construction.
The file must contain parameter names on the first line, and data points on each
line after that.
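The expected layout (parameter names on the first line, one data point per line) can be parsed as in this minimal sketch. The whitespace delimiter is an assumption; the guide does not state the exact separator used in the data file.

```java
import java.util.Arrays;

// Minimal parser for the data-file layout described above: parameter names on
// the first line, one data point per subsequent line. Whitespace delimiters
// are an assumption, not a documented Isight requirement.
public class DataFileParser {
    static String[] parseNames(String firstLine) {
        return firstLine.trim().split("\\s+");
    }

    static double[] parsePoint(String line) {
        String[] tokens = line.trim().split("\\s+");
        double[] values = new double[tokens.length];
        for (int i = 0; i < tokens.length; i++) values[i] = Double.parseDouble(tokens[i]);
        return values;
    }

    public static void main(String[] args) {
        String file = "x1 x2 y\n0.0 1.0 2.5\n0.5 1.5 3.75\n";
        String[] lines = file.split("\n");
        System.out.println(Arrays.toString(parseNames(lines[0])));
        for (int i = 1; i < lines.length; i++)
            System.out.println(Arrays.toString(parsePoint(lines[i])));
    }
}
```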
6. After entering the data file, determine how Isight will handle the file using one of
the following options:
Static file. When this option is selected, the data in the file never changes.
Isight will read it once and save in memory for future use. However, if the
contents of the file change, you can instruct Isight to re-read the file using the
Re-read File button.
Dynamic file. When this option is selected, the data in the file can change.
Isight will read it every time before initialization. This option is only available
in Isight standalone. It is not supported in a Fiper environment.
File parameter. When this option is selected, Isight will create a file
parameter in the component, which can be mapped to receive data from
another file parameter at the time of execution.
8. Select Coefficients Data from the Data File Type drop-down list. The options for
the Coefficients Data File appear.
9. Type the name and path of the data file directly into the corresponding text box, or
click the Browse... button and navigate to the file with data for approximation
construction.
Note: If you use a coefficients data file, the Technique Options tab is not available.
This tab allows you to set the list of input and output parameters of the
approximation by selecting them from the list of available parameters. If you
previously created parameters for this component using either the Design Gateway
or this editor, the parameters will appear in the two lists on this tab. You can add
new parameters to your component by scanning the first line of the data file.
2. Click the Scan... button. The Parameters in Data File dialog box appears.
All parameter names found on the first line of the data file are displayed in the
table on this dialog box. You can select any number of parameters to be used as
inputs or outputs. You do not have to select all of the parameters in the data file.
3. Determine if the parameters will be input or output parameters using either of the
following options, as desired:
4. Click OK. You are returned to the Parameters tab, and the selected parameters are
displayed as specified (input or output).
5. Verify that the correct parameters are selected for use with your approximation.
You can choose individual parameters, or use the Check button to use all of the
listed input or output parameters.
Proceed to “Setting Error Analysis Options,” on page 251 if you wish to set a
particular option (other than the default) for the error analysis.
Proceed to “Viewing Coefficients Data,” on page 252 if you wish to view the
approximation’s internal coefficient data.
1. Click the Technique Options tab. The contents of the tab appear:
2. Set the Smoothing Filter option. This option allows you to relax the requirement
that the RBF approximation pass through every single data point. Its primary
purpose is to smooth out noisy data. By not going through every point, Isight can
effectively smooth noisy functions and provide an approximation that may be
easier to optimize. The value specified by this option averages the output values of
points that are clustered in the normalized filter domain. For example, with a small
filter value, points in the range of 0.719&lt;x&lt;0.721 would be averaged together
(remember that the space is normalized over the range 0 to 1).
The maximum allowed value for the smoothing filter is 0.1. Mathematically, this
means you have a maximum of 10 clustered sample points across. (Through
research, it has been determined that at higher values it does not make sense to
perform an RBF.) With 10 clustered sample points, you can capture a maximum of
four local minima (as in a sine wave) across one dimension. With quartic RSM you
can capture three local minima across one dimension. Therefore, if you have to
smooth by more than 0.1, it is better to switch to the RSM technique.
Note that there is no theoretical basis to distinguish between signal and noise in the
data used for approximation. We recommend that you evaluate the resultant RBF
to determine if this option is appropriate for your application.
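One plausible reading of this clustering behavior, in a 1-D normalized [0,1] input space, is sketched below: sample points closer together than the filter value are merged into one point whose output is the average. This illustrates the idea only; Isight's actual RBF filter algorithm is not specified here.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of smoothing-filter clustering in a 1-D normalized [0,1] input space:
// sample points closer together than the filter value are merged into a single
// point whose output is the average of the cluster. This is an illustration of
// the clustering idea, not Isight's documented algorithm.
public class SmoothingFilter {
    static double[][] cluster(double[] x, double[] y, double filter) {
        List<double[]> merged = new ArrayList<>();
        int i = 0;
        while (i < x.length) {                 // x is assumed sorted ascending
            double sumX = x[i], sumY = y[i];
            int count = 1;
            while (i + count < x.length && x[i + count] - x[i] <= filter) {
                sumX += x[i + count];
                sumY += y[i + count];
                count++;
            }
            merged.add(new double[]{sumX / count, sumY / count});
            i += count;
        }
        return merged.toArray(new double[0][]);
    }

    public static void main(String[] args) {
        double[] x = {0.10, 0.719, 0.720, 0.721};
        double[] y = {1.0, 2.0, 4.0, 6.0};
        // With filter = 0.003, the three clustered points are averaged:
        for (double[] p : cluster(x, y, 0.003))
            System.out.println(p[0] + " -> " + p[1]);
    }
}
```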
4. Click the Technique Options tab. The contents of the tab appear.
5. Set the Polynomial Order option. This option controls the order of the
polynomials used by the Response Surface Model. The following options are
available:
Linear. This option makes all outputs linear functions with respect to inputs.
This option is recommended if you want to study first order (linear) effects of
the inputs on the outputs.
Quadratic. If this option is selected, the polynomials for all outputs will
contain linear terms, as well as all quadratic terms and all two-way interactions
of the inputs. This option is the recommended choice unless you know that the
outputs are either linear or highly non-linear with respect to the inputs.
Quadratic polynomials also behave well in optimization.
Cubic. If this option is selected, the polynomials for all outputs will contain, in
addition to all linear and quadratic terms, all pure cubic terms. No three-way
interactions are included. Cubic polynomials require more design points in the
data file. They are recommended only for highly non-linear output functions,
when it is known that quadratic polynomials do not provide an accurate
approximation of the outputs.
Quartic. If this option is selected, the polynomials for all outputs will contain,
in addition to all linear and quadratic terms, pure cubic terms and pure
fourth-order terms. No three-way or four-way interactions will be included. The same
recommendations that apply to cubic polynomials also apply to quartic ones.
In practice, it is rarely necessary to use quartic polynomials. Cubic and quartic
polynomials may inhibit the optimization process by creating numerous false
local minima.
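The number of polynomial coefficients (and hence the minimum number of design points) implied by each order can be counted as follows. These counts are derived from the term descriptions above, not quoted from Isight itself.

```java
// Term counts implied by the order descriptions above, for n input parameters:
// linear = constant + n linear terms; quadratic adds n pure squares plus all
// two-way interactions; cubic and quartic each add n pure higher-order terms
// (no three-way or four-way interactions). Derived from the text, not quoted
// from Isight documentation.
public class RsmTermCount {
    static int terms(String order, int n) {
        int linear = 1 + n;
        int quadratic = linear + n + n * (n - 1) / 2;
        switch (order) {
            case "linear":    return linear;
            case "quadratic": return quadratic;
            case "cubic":     return quadratic + n;      // pure cubic terms only
            case "quartic":   return quadratic + 2 * n;  // pure cubic + quartic
            default: throw new IllegalArgumentException(order);
        }
    }

    public static void main(String[] args) {
        for (String o : new String[]{"linear", "quadratic", "cubic", "quartic"})
            System.out.println(o + " terms for 3 inputs: " + terms(o, 3));
    }
}
```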
6. You can use Term Selection to remove some polynomial terms with low
significance, which can improve reliability for your approximation and reduce the
number of required design points. Perform one of the following actions:
To use Term Selection: Click the Use term selection to select the most
significant terms from the polynomial check box.
If you do not want to use term selection for your approximation, proceed to
“Setting Error Analysis Options,” on page 251.
7. Select one of the following options from the Term selection method drop-down
list:
Here y_i are the exact output values, ŷ_i are the approximate output values, and n
is the number of design points used for the RSM.
Stepwise(Efroymson). This method of polynomial term selection starts with
the constant and then adds polynomial terms one at a time so that the fitting
errors of the RSM are minimized at every step. A new term is added if the
following condition is satisfied:
After adding a new term, Isight will examine all selected terms and will delete
one or more terms for which the following condition is satisfied:
The latter two values can be controlled using the text boxes that appear under
the Term Selection Method menu when Stepwise (Efroymson) is selected:
• F-ratio to drop term. This is the maximum value of F-ratio to drop a
polynomial term from RSM.
• F-ratio to add term. This is the minimum value of F-ratio to add a new
polynomial term to RSM.
Two-At-A-Time Replacement. This method of polynomial term selection
starts with the constant and then adds polynomial terms one at a time so that
the fitting errors of the RSM are minimized at every step. After adding a new
polynomial term, Isight will consider all possible replacements for one or two of the
selected terms that can reduce the fitting errors further. The best replacement
combination is then found, the terms are replaced, and the next best term is
selected and added. The process is repeated at every step until the
maximum number of terms is selected. This method has a better chance of
finding the best approximation than the two previous methods, but it is more
expensive computationally.
Click the Specify number of selected terms check box; then, enter the
number you want selected in the corresponding text box.
2. Select one of the following from the Error Analysis Method drop-down list:
No Error Analysis. No error analysis will be performed on the current
approximation. Proceed to step 8.
Separate Datafile. This method compares exact and approximate output
values for each data point. Proceed to step 3.
Cross-Validation. This method selects a subset of points from the main data
set, removes each point one at a time, recalculates coefficients, and compares
exact and approximate output values at each removed point. Proceed to step 6.
3. Type the name and path of the data file directly into the corresponding text box, or
click the Browse... button and navigate to the file with data for approximation
construction. The file must contain parameter names on the first line, and data
points on each line after that.
4. After entering the data file, determine how Isight will handle the file using one of
the following options:
Static file. When this option is selected, the data in the file never changes.
Isight will read it once and save in memory for future use. However, if the
contents of the file change, you can instruct Isight to re-read the file using the
Re-read File button.
Dynamic file. When this option is selected, the data in the file can change.
Isight will read it every time before initialization. This option is only available
in Isight standalone. It is not supported in a Fiper environment.
5. Proceed to step 8.
6. In the text box, type the number of points from the total number of sampling points
you want to use for cross-validation error analysis.
7. Click the Use a fixed random seed for selecting points check box and specify a
seed value to use for the random generator when determining the set of sample
points to be executed. This option allows you to reproduce the approximation with
the same set of points later, if desired.
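The effect of a fixed seed can be sketched as follows: the same seed always selects the same subset of sample points, so the analysis can be reproduced later. The shuffle-based selection is an assumption; Isight's internal point-selection method is not specified.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch of how a fixed random seed makes cross-validation reproducible: the
// same seed always yields the same subset of sample points. The shuffle-based
// selection here is an assumption, not Isight's documented internals.
public class CrossValidationPoints {
    static List<Integer> selectPoints(int totalPoints, int subsetSize, long seed) {
        List<Integer> indices = new ArrayList<>();
        for (int i = 0; i < totalPoints; i++) indices.add(i);
        Collections.shuffle(indices, new Random(seed));
        return indices.subList(0, subsetSize);
    }

    public static void main(String[] args) {
        // Same seed, same subset -- the approximation can be reproduced later:
        System.out.println(selectPoints(20, 5, 42L));
        System.out.println(selectPoints(20, 5, 42L));
    }
}
```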
1. Click the View Data tab. The contents of the tab appear.
Note: You may see a message stating that changes have been made to your
configuration. Click Yes to save the changes.
If the approximation is not initialized upon opening this tab, your tab will appear as
shown above. If your approximation is initialized, internal data appears on the tab,
as shown after step 2 below.
Click the Export... button if you want to save the internal approximation data
to a file. Specify the name and location of the file using the Select File dialog
box that appears.
Click the Clear Data button if you want to reset your approximation (clear all
internal data from the approximation); then, click Yes to verify the data
removal. The approximation will become uninitialized, and the initialization
message and button reappear on the tab. Return to step 2 to initialize the
approximation again.
Click the Error Analysis... button to open the Approximation Error Analysis
tool. The Error Analysis button is only available if error analysis was
performed on the approximation. For more information on using this interface,
refer to the Isight User’s Guide.
Click the Visualize... button to open the Approximation Viewer tool. For more
information on using this interface, refer to the Isight User’s Guide.
4. Click OK to close the editor and save your changes. Click Apply to save your
changes, but keep the editor open.
Creating a Calculation
To use the Calculator component:
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide.
Parameters list. This area, on the bottom left half of the dialog box, contains a
list of available parameters. You can select parameters from this area to use in
your expressions. You can use the drop-down list at the top of the area to
determine what parameters are displayed in the list. All parameters are
displayed by default.
You can sort the parameters in the list by component. Right-click in the list,
and then select Group Parameters. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You can also
choose to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
Calculator buttons. These buttons, in the middle of the bottom half of the
dialog box, are arranged to resemble a standard calculator.
Functions list. This list, on the bottom right half of the dialog box, contains a
list of functions that may be helpful in defining your expressions. You can use
the drop-down list at the top of the area to determine what functions are
displayed in the list. Also, if you place your mouse pointer over a function,
information about the function is displayed in a pop-up tool tip.
Status bar. This item, at the bottom of the dialog box, just above the OK
button, displays messages associated with using the Calculator component.
Note: Array variables can also be used with statistical functions. For more
information, see “Using Array Parameters,” on page 259.
To insert a function call: Double-click a function in the Functions list to add it
to the Expression text box. You can also select the item and click the Add
button.
Note: You can add comments to your calculations by starting the line with the #
symbol or the // symbol. You can change selected single or multiple lines into
comments by pressing CTRL and the # key (hold SHIFT as well, since # is a shifted
character) or CTRL and the / key. The same action on lines that are already
commented removes the comment symbols from those lines.
4. Once the calculation is finalized, you can choose one of the following options, if
desired:
Click the Calculate Now button to execute the calculation and display the
result at the bottom of the dialog box, unless an error exists in the expression.
To view a more detailed result, click the Show Results button next to the
result or double-click the result itself to open the Calculation Results dialog
box. Click OK to close this dialog box.
Click the Clear button to remove all of the contents in the Expression text box;
then, click Yes to verify the action.
If an array parameter already exists, and you enter an array element that is outside the
bounds of the existing array parameter, you will be prompted to confirm the recreation
of the array parameter when OK or Apply is clicked.
You can only pass one array parameter, such as sum(dy). You can pass multiple
scalar variables, such as sum(s1,s2,s3,…).
Only arrays of numeric datatype (real, integer) are available for use in calculations.
Only the following statistics functions are supported when using arrays:
absSum - the sum of the absolute values of all elements in the array
absMax - the maximum of the absolute values of all elements in the array
absMin - the minimum of the absolute values of all elements in the array
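The three array statistics named above can be implemented directly from their descriptions, as in this sketch. Input arrays are assumed non-empty and numeric, matching the limitation that only real/integer arrays may be used in calculations.

```java
// The three array statistics listed above, implemented directly from their
// descriptions. Arrays are assumed non-empty and of numeric data type, per
// the stated limitation on array parameters in calculations.
public class ArrayStats {
    static double absSum(double[] a) {
        double s = 0.0;
        for (double v : a) s += Math.abs(v);
        return s;
    }
    static double absMax(double[] a) {
        double m = Math.abs(a[0]);
        for (double v : a) m = Math.max(m, Math.abs(v));
        return m;
    }
    static double absMin(double[] a) {
        double m = Math.abs(a[0]);
        for (double v : a) m = Math.min(m, Math.abs(v));
        return m;
    }

    public static void main(String[] args) {
        double[] dy = {-1.5, 2.0, -3.0};
        System.out.println(absSum(dy) + " " + absMax(dy) + " " + absMin(dy));
    }
}
```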
Understanding Limitations
The following limitations exist for variables used in Calculator component
expressions:
The variable name must start with a non-numeric character. For example, a
parameter named “3var” is invalid. Instead, the variable should be called “var3”
or just “variable.” Any Isight variable that does not conform to this format is not
displayed in the available list of parameters.
A variable that contains a space in its name must be surrounded by single quotes in
the expression. For example, a parameter called variable one would have to be
entered in the expression as 'variable one'.
Using array variables: For more information on the limitations associated with
array variables, see “Using Array Parameters,” on page 259.
The API for the Calculator component consists of the following methods:
getString("expression") and set("expression",
myExpression). These methods allow you to get/set the value of the expression
as a string. When setting, this overwrites the previous value. To add to the existing
expression use the add("calculation") method. This new expression is not
yet stored in the component; you must call the apply() method on the Calculator
API to store this expression permanently.
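The call pattern described above (set overwrites, add appends, and nothing is stored until apply) can be illustrated with the stand-in class below. The real Calculator API lives in the Isight SDK; everything here except the method names getString, set, add, and apply is a hypothetical illustration, not the actual SDK class.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in that mirrors the Calculator API call pattern described
// above: set() overwrites the expression, add() appends a calculation, and
// apply() is required before the expression is stored. Only the method names
// come from the guide; this class is not the real Isight SDK API.
public class CalculatorApiSketch {
    private String pending = "";
    private String stored = "";
    private final List<String> additions = new ArrayList<>();

    String getString(String key) { return stored; }

    void set(String key, String expression) {
        pending = expression;          // overwrites any previous value
        additions.clear();
    }

    void add(String calculation) {     // appends to the existing expression
        additions.add(calculation);
    }

    void apply() {                     // nothing is stored until apply()
        StringBuilder sb = new StringBuilder(pending);
        for (String a : additions) sb.append('\n').append(a);
        stored = sb.toString();
    }

    public static void main(String[] args) {
        CalculatorApiSketch calc = new CalculatorApiSketch();
        calc.set("expression", "y = x * 2");
        calc.add("z = y + 1");
        calc.apply();
        System.out.println(calc.getString("expression"));
    }
}
```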
Set COM object property. This operation allows you to set a COM object property
value from an Isight parameter value.
Get COM object property. This operation allows you to set an Isight parameter
value from a COM object property value.
COM object method / function execution. This operation allows you to execute
functions provided by the COM object and map the return values to Isight
parameters.
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide. The Component Editor dialog box appears.
2. Type the COM object name in the COM Object text box. You must know the
name of the object you are going to use. There is currently no way to browse the
set of registered COM objects for selection.
3. Proceed to one of the following sections based on the operation you want to
perform:
4. Verify that Get is selected from the Operation drop-down list. The Get editor
appears as shown below.
5. Specify the Property to get in the corresponding text box; then, select the Isight
parameter whose value you want to set from this property from the Parameter to
set return value to drop-down list or specify a new parameter using the
button. For more information on creating parameters, refer to the Isight User’s
Guide.
6. Click the button to add the new operation to the list in the Operations area.
7. Proceed to step 19.
10. Select the Isight Parameter to set the return value to from the corresponding
drop-down list.
Note: You can create a new parameter in which to store this value using the
button. For more information on creating parameters, refer to the Isight User’s
Guide.
If you select Constant: Specify the constant in the text box below the
drop-down list; then, click the button. The argument is added to the small
list in the Arguments area.
If you select Parameter: Select the parameter you want to use from the second
drop-down list, or create a new parameter using the button; then, click the
button. The argument is added to the small list in the Arguments area.
Use the buttons in the Arguments area to rearrange the order of the
arguments.
13. Click the button at the bottom of the left side of the editor. The specified
operation is added to the list in the Operations area.
14. Proceed to step 19.
15. Select Set from the Operation drop-down list. The Set editor appears as shown
below.
16. Specify the COM Property to set in the corresponding text box.
17. Specify whether you are setting the COM property from an Isight parameter value
or from a constant using the Value to set to drop-down list; then, select the
parameter from the drop-down list or type the value of the constant in the text box.
Note: You can create a new parameter in which to store this value using the
button. For more information on creating parameters, refer to the Isight User’s
Guide.
18. Click the button to add the new operation to the list in the Operations area.
19. (optional) Click Apply at any time to save your changes. Repeat the appropriate
previous steps to add more operations as desired.
20. Use the buttons in the Operations area to specify the order of the list of
operations.
Note: You can delete an operation by highlighting it and clicking the button
below the operations list.
The Data Exchanger can process multiple files at one time, allowing data to be copied
directly from one file to another, or to use a read-only file to look up reference data.
Data can be read from and written to the same file when necessary. General
programming commands written in Java can be inserted between the data exchanger
actions in order to perform a calculation, loop over multiple data items, and even to
recover from processing errors.
The Data Exchanger is most frequently used to prepare input files for external
programs and to extract data from program output files. It can be used by itself in an
Isight workflow, and is also included as part of the Simcode component. The Simcode
component consists of an input Data Exchanger to prepare input for the command, an
OS Command to execute the command, and an Output Data Exchanger to extract data
from the program's output files. One advantage of the Simcode component is that the
files are prepared, processed, and read in the same step, avoiding any need to transfer
files over the network during distributed processing.
Toolbar
The toolbar located at the top of the editor displays buttons for most of the actions
available from the right-click menus. Some actions can be accessed only from the
toolbar.
. Opens a Find dialog to search for a string in the data source (file).
. Marks a location in a General Text region. Searches for the currently selected
text, or prompts for a search string if no text is selected. For more information, see
“Creating Markers,” on page 317.
. Inserts a For Loop action. For more information, see “Using the For Loop
Editor,” on page 321.
. Edits full Java source of the actions. Opens a text area where the full Java text
of all the actions is displayed. You can edit this Java code, including
cut/copy/paste using the standard keyboard actions. When the edit dialog is closed,
the Java code is split into actions and re-executed.
The first entry in the list is always a comment that cannot be deleted or edited. Its
purpose is to give you something to select when you want to insert an action before
all the other actions.
The point where new actions will be inserted is indicated with a red line. New
statements are inserted between existing actions. Selecting an action in the list sets
the insertion point to be just after that action. Selecting parameters or highlights
does not change the insertion point.
The next action to be executed is indicated with a blue arrow in the right margin.
This arrow is only visible when single-stepping through the actions.
Right-clicking an action displays a menu that allows you to edit, cut, copy, or delete
the action(s). If you cut/copy an action, there will also be a Paste entry for re-inserting
the action(s). You can also cut/copy/paste actions with the standard keyboard actions:
CTRL-X, CTRL-C, and CTRL-V.
There is also a Run to this statement entry that is useful when debugging the actions.
Selecting this entry single-steps the list of actions until the current statement is at or
after the selected action.
Executing Actions
Typically, an action is executed as soon as it is inserted. Changing the value of a
parameter that is used in the Data Exchanger causes the whole list of actions to be
re-executed.
There are controls at the bottom of the Actions List for controlling execution. The
controls include the following:
. Stops execution. This button is only enabled when actions are being executed.
. Executes only the next action (the one with the blue arrow).
. Resets the program and sets the Next Action arrow to the top of the list. You
have to reset the actions before you can single-step.
It can be useful to reset the list of actions and step through them one at a time to see
what each action does. When single-stepping, read or calculation actions will
immediately update the value of the affected parameter. You can also step through a
Loop to see what happens each time through.
You must select text from the file and then a parameter in order to define a read or
write action. The action is executed as soon as it is created: the value of the parameter
is updated with the text read, or the file is updated with the parameter value written,
allowing you to see how the file and parameters will look when the Data Exchange is
run.
The text is highlighted in various colors to indicate how the text is to be processed. The
following color scheme is used:
Yellow: sub-sections of the current section. If you format part of a file as a table
and click outside the table, the table will be yellow and the rest of the file will be
white.
Gray: the area outside of the currently selected section. If you format part of a text
file as a table and then select the table, the table will be white and everything
outside the table will be gray.
Clicking a gray or yellow section navigates through the sections. Clicking in a green or
pink highlighted area selects the read or write action and the parameter. Right-clicking
in the data area displays a menu of actions as shown below.
The area selected determines the actions that are available. All actions except Load
Sample File and Edit Format are also available on the toolbar.
Load Sample File. Loads a new data file to test how the parsing instructions will
act on it.
Edit Section Details. Changes the details of how the section (or whole file) is
parsed. This is mostly used to change the delimiters that separate words.
Edit Selected Statement. Edits the currently selected statement. This is usually a
read or write statement. If no read/write is selected, then it can be the section.
Edit Format. Allows you to set a format to control how a number is printed.
Find (General text format only). Opens a find dialog to search for text.
Marker (General text format only). Opens a find dialog to search for a string, and
remembers where it was found. This marker can then be used as an anchor for
subsequent read/write operations.
Insert Read. Creates a read action. This action works in the same manner as the
read button on the command bar. If a write action is currently selected, this
action converts the write into a read action.
Insert Write. Creates a write action. This action works in the same manner as the
write button on the command bar. If a read action is currently selected, this
action converts the read into a write action.
New Section. Creates a new section from the currently selected text.
Delete Section. Deletes the currently selected section, or the whole file if the
current selection is the whole file.
Close Data Source. Removes the whole file, regardless of the section currently
selected.
The background of the parameter area changes to indicate the status of the parameter:
White: the box is empty or contains the name of an existing, unused parameter.
Yellow: the box contains the name of a new parameter that does not exist yet.
Green: the box contains an existing parameter that is being read from the file.
Pink: the box contains an existing parameter that is being written to the file.
Also, buttons are present that allow you to create a read or write operation.
The button for the default operation for this file is highlighted with a black border.
Read is the default operation for files that are being read (“Output parse”), and Write is
the default operation for files being written (“Input parse”). The current operation (read
or write) is indicated by the highlighted button.
The swipe area is displayed by default. You can hide it by clicking the down arrow
located below the Parameter Read/Write area.
Changes to text fields in the swipe area do not take effect until after you press the
ENTER key on your keyboard.
This list contains all parameters for the Data Exchanger, all parameters for the Simcode
component the Data Exchanger is part of, and all parameters from the parent and sibling
components that could be mapped to this component. Using a parameter from a parent
or sibling component automatically maps it to the data exchanger component when the
editor is closed.
Clicking the icon in the Op (operation) column is a shortcut to read or write that
parameter. The button cycles through the available options. You cannot delete the
parameter binding using the operation column alone; you must use the right-click menu.
Right-clicking a parameter opens the standard Parameter menu, with entries for editing
the parameter details and cut/copy/paste parameter options.
You can also use the List of Parameters to create new parameters (the button), add a
member to an aggregate parameter (the button), or delete a parameter (the
button).
You can change the name, mode, value, or data type of an existing parameter by
clicking in the appropriate cell of the List of Parameters table.
The List of Parameters behaves similarly to the Parameters tab on the main Design
Gateway interface. However, you cannot select multiple parameters on the Data
Exchanger editor. Additionally, parameters do not appear on the Parameters tab until
the OK or Apply button on the Data Exchanger editor is clicked.
Status Bar
The status bar at the bottom of the interface displays various messages about the status
of the Data Exchanger editor. General progress messages are in gray, and warning
messages, especially about invalid data during a read, are in yellow. Error messages
are in red (or pink).
Note: When the Full Java Code view is open and there is an error in the Java code, the
error message is displayed on the Status Line of the main editor interface. Because of
this, when you open the Java Code view, you should position it above the editor
interface so the status line is still visible.
Note: The Javadocs that are shipped with Isight are located in the javadocs folder in the
Isight installation directory.
This interface is accessed by clicking the New Section Format button in the
Parameter Read/Write area. It allows you to adjust the selection, and then apply a
formatter to the selection. For more information on this wizard, see “Changing the
Format of a Section of a Data Source,” on page 299.
Understanding Terminology
Two terms are used throughout this component description to describe actions taken by
the user of the component. Both of these terms are used when describing the
highlighting of data in the Data Source area. These terms are:
swipe. This action involves highlighting data by pressing your mouse button,
dragging the pointer across the desired data, and then releasing the button. The
behavior of swiping text depends on the current section format. For the General
Text format (the most common one), swiping across several columns selects
exactly those characters (fixed column, fixed length). Swiping down selects
multiple whole lines. For the Vector format, a swipe selects multiple elements of
the vector. For the Table format, a swipe may select all or part of a row, a column,
or a sub-table.
click. This action involves placing your mouse pointer on a particular location in
the Data Source area and clicking the button a set number of times (single-clicking,
double-clicking, and triple-clicking). For the General Text format, a single click
selects the current word or item, a double-click selects the current word, and a
triple-click selects the whole line. For all other formats, a click selects the current
item and double and triple-clicks have no additional effect.
Note: For the general text tool, selecting a word with a click and selecting
characters with a swipe results in very different behaviors. When a word is
selected, the data will be located at runtime by counting the number of words from
the beginning of the line, no matter how long those words may be. When
characters are selected, the characters at exactly those positions on the line will be
used; if something earlier on the line becomes longer, the data will no longer fit
within the swipe. It is not recommended that you swipe characters of text except
when reading or writing data to be processed by a FORTRAN formatted read or
write operation. The Swipe Details area tells you if the data was selected by word
or by characters.
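The difference between the two behaviors can be sketched in plain Java (the method names are illustrative, not Isight API):

```java
// Illustrative sketch (not Isight API) of word-based vs character-based
// selection, as described in the note above.
public class SwipeDemo {
    // Word selection: locate the data by counting whitespace-separated
    // words from the beginning of the line, however long they are.
    static String byWord(String line, int wordIndex) {
        return line.trim().split("\\s+")[wordIndex];
    }

    // Character selection: take exactly these column positions, as a
    // FORTRAN formatted read or write would.
    static String byColumns(String line, int start, int end) {
        return line.substring(start, end);
    }

    public static void main(String[] args) {
        String line = "STRESS   12.5  MAX";
        // Word 1 is found even if the spacing changes:
        System.out.println(byWord(line, 1));        // 12.5
        // Columns 10-13 only; breaks if anything earlier grows longer:
        System.out.println(byColumns(line, 9, 13)); // 12.5
    }
}
```

If a value earlier on the line grows (for example, "STRESSES" instead of "STRESS"), the word-based lookup still succeeds while the column-based one returns the wrong characters, which is the failure mode the note warns about.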
2. Click the large button in the center of the dialog box to begin the process of
defining a data source.
The Exchanger Wizard appears with the Select File screen displayed.
Note: The Exchanger Wizard is different for the input and output parses of the
Simcode component, even though the rest of the Data Exchanger editor is the same
for Data Exchanger and Simcode components.
3. Determine how the file is to be used. The following two options are available:
Read a File (proceed to step 4). This is used to read data from a file output by
an external program, and is sometimes called an Output Parse because an
output file is being read.
Write a File (proceed to step 7). Writes parameter values to a file that will be
used as Input to an external program. This is sometimes referred to as an Input
Parse.
4. Specify the sample file that will be used in the Sample file to use when designing
Data Exchange text box. You can type the name of this file directly, or navigate to
it using the Browse... button. This file must already exist.
5. Specify the name of the file to read at runtime in the second text box. Simply enter
the name of the file. If you browsed to the file in step 4, this text box is
automatically filled using the name of the specified file. Instead of typing a file
name, you can select an existing file parameter. The file specified by the parameter
will be used.
Note: If you type the name of a file, an Input file parameter is created that
references the named file. This file parameter can be mapped from an earlier
component in the workflow that produces the file.
6. Proceed to step 9.
7. (optional) Specify the template file to update. The Template file is optional; if no
template is given, the Data Source area will initially be blank. The whole file will
be created by writing parameter values.
Note: An Input file parameter is created for the Template file. This parameter has
the same name as the file being written, with “Tmpl” appended. While the
template file is usually fixed, it is possible to map another file parameter to the
Template file parameter, allowing the Template to vary at runtime.
8. Specify the file to write at runtime, or select an existing file parameter. If you
browsed to the file in step 7, this text box is automatically filled using the name of
the specified file.
Note: If a file name was typed, an Output file parameter is created. The parameter
name is the same as the name of the file being written, but with periods converted
to underscores. This file parameter can be mapped to subsequent components in
the workflow. For the Input Data Exchanger of a Simcode component, only one
file parameter is created, since the OS Command part of the simcode directly reads
the file from the working directory. In this case, the file parameter has the same
name as the sample file, and has mode input.
9. Set the Encoding option using the corresponding drop-down list. This option
allows you to explicitly specify the encoding the file parameter is to use when
converting between bytes and characters. In a Locale (a system setting that
includes the language, number formats, and character set in use) that uses
multi-byte characters (Japanese, Chinese, Korean), there is a default encoding used
to convert bytes into characters. Most text files will be written using this default
encoding, but sometimes it is necessary to specify the encoding explicitly. For
additional information
on encoding, refer to the Isight Development Guide.
Note: This setting is visible only if the Show File Type Encoding option on the
Files tab is selected on the Parameters preference dialog. For more information
about this option, refer to the Isight User’s Guide.
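Why the encoding matters can be illustrated in plain Java: the same bytes decode into different characters under different encodings. UTF-8 is used here purely as an example multi-byte encoding, not as a statement about Isight defaults:

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        // The same bytes decode differently under different encodings,
        // which is why an explicit Encoding option matters in a
        // multi-byte Locale. UTF-8 is an illustrative choice only.
        byte[] bytes = "\u5024".getBytes(StandardCharsets.UTF_8); // the kanji 値
        System.out.println(bytes.length);                              // 3 bytes
        System.out.println(new String(bytes, StandardCharsets.UTF_8)); // 値 (correct)
        // Decoding the same bytes with the wrong charset yields three
        // garbage characters instead of one:
        System.out.println(new String(bytes, StandardCharsets.ISO_8859_1).length()); // 3
    }
}
```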
10. (Write to File only) Set the Line Ending option. The available choices are as
follows:
Default. This setting leaves the line endings as they were found in the input
file. Isight does not attempt to change the line endings. If the file has to be
copied for some reason, local line endings are used. Since the Data Exchanger
always copies the file, this setting is similar to selecting Local.
Local. This setting writes the file with the local line endings for the machine
on which the model is running (the machine hosting the Isight Gateway or
Fiper Station): CR-LF on Windows and LF on Unix. The file will
always be copied in text mode and the line endings updated, even if the line
endings already appear to be correct.
Unix. This setting always separates lines with LF, even if the model is running
on Windows.
Windows. This setting always separates lines with CR-LF, even if the model
is running on Unix.
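The Unix and Windows options amount to rewriting every line separator in the file to the requested style. A minimal Java sketch (not Isight code):

```java
public class LineEndingDemo {
    // Rewrite whatever endings the text currently has to the requested
    // style, mirroring the Unix and Windows options described above.
    // "\r\n" must be matched before "\r" or "\n" alone.
    static String withEndings(String text, String eol) {
        return text.replaceAll("\r\n|\r|\n", eol);
    }

    public static void main(String[] args) {
        String mixed = "a\r\nb\nc"; // mixed CR-LF and LF endings
        System.out.println(withEndings(mixed, "\n").equals("a\nb\nc"));       // Unix: LF only
        System.out.println(withEndings(mixed, "\r\n").equals("a\r\nb\r\nc")); // Windows: CR-LF
    }
}
```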
11. Select the general layout of the text in the file. The following options are available:
General Text. This format is for text with no particular structure. Fields are
located by searching for words or phrases. For more information on using this
format option, see “Using the General Text Format Option,” on page 304.
Table. This format is for tables and lists of numbers. Fields are addressed by
row number (line) and column number. This option can be used with files in
any format as long as the line numbers never change; the number of entries on
each line does not have to be the same. The cells in the table may be separated
by delimiters (usually space or comma), or the table columns can be defined
by absolute character position (sometimes useful for reading packed
FORTRAN formatted data). The Table format allows whole columns, rows
and arrays to be read into Array parameters in one operation. For more
information on using this format option, see “Using the Table Format Option,”
on page 312.
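Reading a whole table column into an array can be sketched in plain Java (illustrative only, not Isight API; the delimiter handling is simplified to spaces and commas):

```java
import java.util.Arrays;

public class TableDemo {
    // Sketch (not Isight API): read one delimiter-separated column of a
    // table into an array, addressing cells by row and column number as
    // the Table format does.
    static double[] readColumn(String text, int col) {
        String[] rows = text.trim().split("\n");
        double[] values = new double[rows.length];
        for (int r = 0; r < rows.length; r++) {
            // Cells separated by spaces or commas; the number of entries
            // per row does not have to be the same.
            values[r] = Double.parseDouble(rows[r].trim().split("[,\\s]+")[col]);
        }
        return values;
    }

    public static void main(String[] args) {
        String table = "1.0 10.0\n2.0 20.0\n3.0 30.0";
        System.out.println(Arrays.toString(readColumn(table, 1))); // [10.0, 20.0, 30.0]
    }
}
```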
12. Select the option that is best suited to the file you are using; then, perform one of
the following options:
For General Text, Table, and Vector formats: Click Finish. The file is
displayed in the Data Source area as a new tab. Proceed to the section that
describes the usage of the format you selected (these sections are identified in
the previous step of this procedure).
For the Name Value format: Click Next. The Name/Value format requires
additional configuration, but allows all items to be automatically read into
similarly named parameters.
13. Select the delimiter that you want to use for your file.
Once a selection is made, the highlighted information in the Sample Text area is
updated, if necessary. The name fields are highlighted in orange, and the value
fields in green.
Note: The wizard attempts to select the best delimiter automatically. For this
reason, the highlighted screen often appears immediately after clicking the Next
button from the File Format screen (step 12).
Verify that only the data you want is included in the Value field. Sometimes it
is necessary to use a multi-character delimiter (such as “:=”) to make sure that
the right data is in the values.
Empty lines and lines that do not have the delimiter are ignored.
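The delimiter behavior described above can be sketched in plain Java (illustrative, not Isight code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NameValueDemo {
    // Sketch (not Isight API): split each line on the first occurrence of
    // the delimiter; empty lines and lines without the delimiter are
    // ignored, as described above.
    static Map<String, String> parse(String text, String delim) {
        Map<String, String> items = new LinkedHashMap<>();
        for (String line : text.split("\n")) {
            int i = line.indexOf(delim);
            if (line.trim().isEmpty() || i < 0) continue;
            items.put(line.substring(0, i).trim(),
                      line.substring(i + delim.length()).trim());
        }
        return items;
    }

    public static void main(String[] args) {
        // A multi-character delimiter such as ":=" keeps the right data in
        // the values even when a single character would split too early.
        String file = "width := 2.5\n\nno delimiter here\nheight := 7";
        System.out.println(parse(file, ":=")); // {width=2.5, height=7}
    }
}
```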
14. Click Next. The Map Item Names to Parameters screen appears. This screen
allows all of the entries in the Name/Value file to be automatically read or written
to/from parameters with the same name as the Name field of the item.
15. Edit the parameter information, as desired. The following options are available:
Select parameters that will be read from or written to using the Op column.
You can select or de-select individual parameters by clicking the Op column
for that parameter, or you can select all of the listed parameters using the Read
All or Write All buttons. You can also clear the selection of every parameter
using the Clear All button.
If the file is open for reading, the Op column toggles between Read and No
Operation (an empty box). If the file is open for writing, you have the option
to read or write the item, and the Op column toggles through the sequence
Write, Read, and No Operation.
When the Op column is empty (No Operation), no parameter is created and no
data is read or written. The name/value item is ignored (though you can
operate on it later using the main Data Exchanger editor).
Note: Changing the name of a parameter does not change the name of the
name/value field it will be read from or written to.
When writing, you can change the initial value of the parameter by editing the
Value column of the table.
Change the mode of a parameter (the available options are Input, Output,
In/Out, and Local).
Change the type setting of the parameter. The Wizard guesses at the data type
based on the data in the Template or Sample file. This is sometimes wrong: If
the sample file contains a number without a fractional part, the Wizard guesses
it is of type Integer, when it perhaps should be type Real.
If the value is not a single number or word, the Wizard assumes it is text data and
selects the data type String. Often such entries actually contain a number followed
by a comment, or a vector of numbers. Such items can be handled by creating a
General Text or Vector section on top of the value. For now, de-select the
name/value item by clicking in the Op column until it is blank. The read/write for
that item will have to be set up later using the main Data Exchanger editor. For
additional information on setting parameter information, refer to the Isight User’s
Guide.
16. Click Finish. The file is loaded into the Data Exchanger component.
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide.
The Component Editor dialog box appears, and the existing data exchanger
program is loaded. The program is executed, displaying the data sources in tabs in
the Data Source area. The code insertion point is left at the end of the program, the
last section referenced is selected in the Data Source area, and the Swipe Details
area (if open) shows the selection details for this section. The GUI is left as it was
after the last read or write was created. This controls the selected parameter, the
contents of the Swipe Details panel, and the selected File tab.
Note: If the program is no longer correct (usually because a parameter used by the
program has been deleted), the Java code view (described on page 303) will open,
and the first error will be highlighted in pink.
In general, the data and parameter selections are independent, and can be selected in
either order. See below for some special-case details, however. The read or write
button is not enabled until both a Parameter and some Data have been selected. There
are several shortcuts for creating Read or Write operations to speed up the process.
These are described in detail below.
The same parameter can be read or written more than once. While reading into a
parameter several times is rarely useful, writing the same value in multiple places is
often required. To write the same parameter value more than once:
The process for creating a Read or Write is slightly different depending on whether the
parameters are created before the Data Exchanger editor is opened (Top Down) or are
created by the Data Exchanger editor (Bottom Up).
Note: A file opened as read-only supports only read operations, not write operations. In
this case, the Write button is disabled on the editor, and it is not a selectable option in
the Op column in the Parameter List area. On the other hand, a file opened for writing
supports both read and write operations.
Important: The Parameter List will show all parameters that belong to the Data
Exchanger component itself, the Parent component of the Data Exchanger, and any
Sibling components that are before or after the Data Exchanger in the workflow. When
a parameter from the Parent or Sibling components is read or written, a parameter with
the same name is created in the Data Exchanger component and the appropriate
parameter mappings are created. The parameters and mappings are not created until the
OK or Apply button on the editor is clicked. If the Cancel button is clicked, no
parameters are created and all work done since the editor was opened is discarded. To
create a top-down data exchange:
1. Select some data in the data source viewer using one of the following methods:
Triple-clicking. The triple-click action selects the entire line that currently
contains the cursor.
Swiping. This option allows for the selection of an item or a range of data. The
selected item or range is displayed in the Swipe Details area at the bottom of
the editor.
2. (optional) Adjust the swipe details. For more information on using the swipe
details options, see one of the following sections:
3. Select the parameter to represent the data using one of the following methods:
Select a parameter from the Parameter drop-down list or type the name of an
existing parameter.
Click a parameter in the Parameter List area. The parameter name appears in
the Parameter text box.
Note: If the chosen parameter is already used in a read or write statement, the
selected data will be un-selected and the Data Source area will be scrolled to
show where the parameter is used. In this case, it is necessary to select a
parameter and then select the data.
Type the name of an existing parameter in the Parameter text box. You can
type the name of a scalar parameter, the name of an aggregate member, or the
name of an array element. For more information, see “The Parameter
Read/Write Area,” on page 274.
4. Apply the new read or write statement using one of the following methods:
Click the Read or Write button adjacent to the Parameter text box.
Right-click in the Data Source area; then, select Insert Read or Insert Write
from the menu that appears.
Click in the Parameter box and press the ENTER key on your keyboard.
Click the Op column in the parameter list until the desired option appears.
For a read-only data source the only option is read; for read/write data
sources the option toggles from write to read. The correct default operation is
always displayed initially.
Note: This procedure only works if the parameter has not been previously used in a
Read or Write operation. If the parameter has already been used, clicking on the
Op column will select the location originally read/written, and then attempt to
change a write to a read (a read is not changed). To create a second read/write for a
parameter already used once, you must:
Select the parameter from the Parameter List or the drop-down list in the
Parameter Read/Write area.
Return to step 1 to define more data source information using this shortcut
method.
This type of data exchange assumes that no parameters exist, and the parameters will
be created as the data exchange program is created. The Data Type and Default Value
of the parameter (as well as the size of an Array parameter) is determined by
examining the data that was selected.
1. Select some data in the data source viewer using one of the following methods:
Single-clicking on the Word (General Text format) or item (Table, Vector,
Name/Value formats).
Triple-clicking. The triple-click action selects the entire line that currently
contains the cursor.
Swiping. This option allows for the selection of an item or a range of data. The
selected item or range is displayed in the Swipe Details area at the bottom of
the editor (if it is displayed).
2. (optional) Adjust the swipe details. For more information on using the swipe
details options, see one of the following sections:
Type the name of the parameter in the Parameter text box. The parameter text box
will switch to a yellow background to indicate that this is the name of a new
parameter.
You cannot create an array by typing the name of an array element (for example,
x[5]). If you attempt to do so, a warning message will be displayed in the Status
Line. If you need to create an array, click the Create Parameter button
below the Parameter List and fill out the Create Parameter dialog.
Note: When using the Table or Vector tool, you can create an entire array by
selecting multiple numbers and typing a parameter name into the Parameter text
box.
Shortcut: Normally you would click in the Parameter text box before typing the
parameter name. This step is not necessary. Instead, after clicking in the Data
Source area, press the TAB key to move to the Parameter text box. Even pressing
the TAB key is optional; if you start typing after clicking or swiping in the Data
Source area, the cursor will automatically jump to the Parameter text box.
3. Apply the new read or write statement using one of the following methods:
Click the Read or Write button adjacent to the Parameter text box.
Press the ENTER key on your keyboard after typing a new name in the
Parameter text box. This option uses the default read/write operation for the
exchanger.
The parameter is created and selected, the read or write is created, the statement is
executed, and the data is highlighted. Note that only one mouse click (to select the
data) and one extra keystroke (the ENTER key) is required to create the parameter
and read or write it. Of course, this is in addition to typing the parameter name.
4. Adjust the Parameter settings. The parameter is created using default settings
based on the type of operation (read or write) and the selected data. The
fundamental rules are:
If the operation is a READ, the parameter has mode OUTPUT; if the operation
is a WRITE, the parameter has mode INPUT.
If one word, cell, or item is selected, the parameter is Scalar. If multiple values
are selected with the Table or Vector formats, an Array parameter is created.
The size of the array will exactly match the size of the selection. See “Using
the Table Format Option,” on page 312 below for more information.
The data type is selected to match the type of data selected. If the selection
includes only one block of digits (and leading or trailing spaces), the data type
is Integer. If the selection looks like a real number (for example, -99.99e-99),
the data type is Real. In all other cases the data type is String.
Note: Selecting data such as “12 34” will use the data type String, not an array
of 2 integers as you might hope.
The initial value of the parameter is the same as the selected data (with leading
and trailing spaces removed).
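These guessing rules can be sketched in plain Java (the regular expressions are illustrative approximations, not the exact rules Isight uses):

```java
public class TypeGuessDemo {
    // Sketch of the guessing rules above: a single block of digits is
    // Integer, something that looks like a real number is Real, and
    // everything else is String. Leading/trailing spaces are ignored.
    static String guessType(String selected) {
        String s = selected.trim();
        if (s.matches("[+-]?\\d+")) return "Integer";
        if (s.matches("[+-]?(\\d+\\.?\\d*|\\.\\d+)([eE][+-]?\\d+)?")) return "Real";
        return "String";
    }

    public static void main(String[] args) {
        System.out.println(guessType("  42 "));      // Integer (spaces ignored)
        System.out.println(guessType("-99.99e-99")); // Real
        System.out.println(guessType("12 34"));      // String, not two Integers
    }
}
```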
After creating the read or write, you can edit the parameter settings in the
Parameter List. The setting most likely to need changing is the Data Type, though
sometimes the Mode may need to be changed to INOUT.
To edit the parameter settings, click in the Name, Value, Mode, or Type column to
enable editing (for the Name or Value) or to show a drop-down list of choices
(for the Mode or Type).
To change the operation from a write to a read, click in the Op column of the
Parameter list, or click the Read button next to the Parameter name box.
Note: You can change the size of an array parameter (or make any other change to
any parameter) by right-clicking on the parameter and selecting Edit Selected
from the menu.
Note: You can also update a read/write statement, including setting the format and
selecting a different parameter, by using one of the following methods:
Right-click in the Actions or Data viewer; then, select Edit Selected Statement
1. Select a read or write operation by clicking on the parameter in the parameter list,
or by clicking on the highlighted data (pink or green) in the Data Source area.
Clicking on a Parameter scrolls the Data Source area so the first read or write of
the Parameter is visible. The data read or written (already highlighted in green or
pink) is then emphasized by a blue border.
Clicking on Read or Write data similarly highlights the data with a blue border.
The associated parameter is then selected in the Parameter List.
In either case, the details of the Data selection are displayed in the Swipe Details
area.
Click the Op column in the Parameter to toggle between read and write
operations. Again, this only works if the Data Source is open for Writing.
Right-click the Op column in the Parameter List; then, select Read Parameter
or Write Parameter from the menu that appears.
1. Select the Read or Write by clicking on the highlight in the Data Source area or by
clicking on the Parameter in the Parameter List.
2. Edit the Swipe details in the Swipe Details area. Any change to the swipe location
will immediately be displayed as a blue box in the Data Source area. The Read or
Write operation will not be updated until you press ENTER in the Swipe Details
area.
The most common reason to change the Swipe Details is to alter the search string used
to locate the data in the General Text format. For more information, see “Using the
General Text Format Option,” on page 304.
2. Right-click on the data highlight or action; then, select Edit Selected Statement
from the menu that appears. Depending on the type of statement selected, the Read
or Write Parameter dialog box appears.
The dialog box is divided into three tabs: Parameter, General Text Swipe, and Edit
Read/Write Format.
General Text Swipe. Allows you to edit the swipe information. The swipe
information is different for each section format. For more information on
editing the swipe information, see “Adjusting a Basic Swipe,” on page 306.
Edit Read/Write Format. Allows you to change how the data is formatted.
For more information, see “Formatting Numbers During a Write Operation,”
on page 298.
Right-click on the highlight in the Data Source area and select the last menu item
Remove Read/Write Instruction from the pop-up menu.
Select the read/write action in the Actions List, then click the button on the
toolbar.
Note: Deleting a parameter using the button below the Parameter List will also
remove all read/write instructions involving that parameter.
2. Right-click on the Highlight in the Data Source area; then, select Edit Format
from the pop-up menu. This will open the Edit Read/Write Format dialog box.
Note: Do not select the menu item Edit Section Details; its function is described
below. You can also access the Edit Read/Write Format dialog by double-clicking
the action to open the Read/Write editor, and then clicking the Format tab.
3. Select the type of format from the pull-down on the Edit Read/Write Format
dialog box. The following types of formats are supported:
C printf: Enters a C-language printf format string that contains one format
specifier (“%4d”, “%5.2f”, or “%12.5e”). Extra characters before or after the
format specifier are included in the output.
Boolean Format: Allows control over how Boolean values are printed.
Specify values for true and false separated by a comma. For example “T, F” or
“.TRUE., .FALSE.”.
Note: When a write format is set while writing an Array using the Vector or Table
format, the format is applied separately to each element of the array.
While it is possible to set a format on a Read operation, doing so almost never works as
desired and is strongly discouraged.
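The printf-style specifiers behave as in C; Java's String.format accepts the same conversions for these simple cases, so their effect can be previewed in plain Java (Locale.ROOT keeps "." as the decimal separator regardless of the machine's Locale):

```java
import java.util.Locale;

public class WriteFormatDemo {
    public static void main(String[] args) {
        // The three example specifiers from the text, with brackets added
        // to make the field widths visible:
        System.out.println("[" + String.format(Locale.ROOT, "%4d", 42) + "]");           // [  42]
        System.out.println("[" + String.format(Locale.ROOT, "%5.2f", 3.14159) + "]");    // [ 3.14]
        System.out.println("[" + String.format(Locale.ROOT, "%12.5e", 12345.678) + "]"); // [ 1.23457e+04]
        // A Boolean format such as ".TRUE., .FALSE." simply maps the two
        // states to the given strings:
        boolean converged = true;
        System.out.println(converged ? ".TRUE." : ".FALSE.");
    }
}
```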
To format a section:
1. Select the data to be formatted in the Data Source area. Usually this involves
selecting several lines. This can be done easily by dragging the mouse down the
page. You can also select multiple lines by clicking on the first line and
shift-clicking on the last line.
2. Click the New Section Format button in the Parameter Read/Write area to open
the section format wizard.
3. Select the new format from the Formats area. The General Text, Table, and Vector
formats do not have any additional options to set. The Name/Value option has
additional options that can be defined. For more information on these options, see
“Using the Name Value Format Option,” on page 311.
4. Click Finish. The wizard is closed, and a new format is colored white in the Data
Source area. You can now insert Read and Write statements using the new format.
5. Proceed to one of the following sections, based on the type of statement you are
creating in the section:
1. Select the data to be updated in the Data Source area, or click a bound parameter in
the Parameter List to select the corresponding data.
2. Right-click the Data Source area; then, select Edit Section details from the menu
that appears. The Edit Section Format dialog box appears. It is divided into three
tabs.
Note: This dialog box differs (the tab contents are altered) based on the current
format selected.
3. Edit the details used to construct the format using the first tab. This step is most
often used to change the delimiters between elements.
The options in the Grouping Characters area allow you to control which delimiters
cause a value containing punctuation to be treated as a single Vector element,
Table cell, or name/value Value. By default, all of the standard ASCII quote and
bracket characters are recognized, and the following string is treated as a vector of
length 3, where the 3rd element is 3 lines:
The Escape Character option removes any special meaning of the character that
follows it (such as a space, comma, quote, or bracket character). The escape
character itself (default is backslash (\)) is always removed from the value.
Normally a quote inside a quoted string is allowed, if the quote is preceded by the
escape character. Activating the Double quotes inside of quotes check box allows
the quote to be doubled (as it is in CSV format). With this check box activated, the
following text is a single string containing one double quote:
To remove all special processing and have all characters (except the separators
listed above) be treated as normal text, clear (uncheck) all of the check boxes in the
Grouping Characters area and delete the default escape character.
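The doubled-quote convention works as it does in CSV, and the escape character simply drops out of the value. A minimal Java sketch of both rules (an illustration only, not the component's actual parser):

```java
public class QuoteDemo {
    // CSV-style rule: strip the surrounding quotes, then collapse each
    // doubled quote ("") into a single quote (").
    static String unquote(String field) {
        String inner = field.substring(1, field.length() - 1);
        return inner.replace("\"\"", "\"");
    }

    // Escape-character rule: the escape character (backslash here) is
    // removed, and the character that follows it is kept literally.
    static String unescape(String s) {
        return s.replaceAll("\\\\(.)", "$1");
    }
}
```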
4. Click the Advanced tab; then, set the mode to either RANDOM or
SEQUENTIAL. The default is always RANDOM. Using SEQUENTIAL mode
on large files is more efficient, but may interfere with how the editor operates.
5. Click the third tab. If the top-level section (the whole file) is selected, this tab
lets you change the file. If a sub-section is selected, this tab lets you update the
boundaries of the section.
There are tabs at the top of the Data Source area for each data source. Selecting a tab
will switch the display to that data source. Selecting a tab also displays the section of
that data source that was most recently used.
Once the correct data source is selected, you can navigate by clicking on the data.
The current section is always highlighted in white. Any sub-sections are highlighted in
yellow. Read and Write statements are colored in green/pink. Any text outside the
current section (in a parent section) is highlighted in gray. Clicking a yellow highlight
will move the focus to the sub-section, turning the yellow area white, and the previous
white area gray. Clicking the gray area will navigate to the parent section, turning the
gray area white and the previous white area yellow.
1. Click the Java Source button at the end of the toolbar. The Java Source Code
dialog box appears.
The full Java source code is generated and displayed in this dialog box, and the
code can be edited directly.
2. Edit the code, as desired. You can copy and paste text between this dialog box and
an external Java development tool using the standard keyboard commands
(CTRL-X, CTRL-C, and CTRL-V).
3. Click OK to save your changes and close the dialog box. The Java code is checked
for errors while it is being saved. If there are any errors, the first error is
highlighted in pink, a description of the error is displayed on the status bar at the
bottom of the Data Exchanger editor, and the Java Source Code dialog will not
close. You must either correct the error, or click Cancel to discard all changes.
New basic swipe. This type of swipe involves highlighting a portion of text in your
file and then mapping it to a parameter. It does not involve the use of the Swipe
Details section of the Data Exchanger component. For more information, see
“Performing a Basic Swipe” on this page.
Adjust a basic swipe. This type of swipe requires the use of the Swipe Details area,
and involves only one line in your file (whether it is the whole line or only part of
the line). For more information, see “Adjusting a Basic Swipe,” on page 306.
Advanced swipe of a group of lines. This type of swipe requires the use of the
Swipe Details area, and involves using more than one line in your text file. For
more information, see “Performing an Advanced Swipe,” on page 308.
1. Click or swipe the desired data in the text file. For more information on the
differences between these two ways to select data, see “Understanding
Terminology,” on page 278.
2. Specify the parameter that will correspond to this information from the text file
using one of the following methods:
Use the Parameter drop-down list (to the right of the Parameter text box) to
select an existing parameter.
3. Click the Read or Write button (based upon the usage you specified for the file in
“Creating a New Data Exchanger Program,” on page 279). The parameter is added
to the Parameter List on the right side of the component editor.
1. Verify that the General Data Swipe area appears near the bottom of the editor. If it
does not, click the down arrow button located under the Parameters Read / Write area.
2. Click the parameter in the Parameter List that is bound to the swipe that you want
to update. The swipe information appears in the General Data Swipe area.
3. Select one of the following options from the first drop-down list:
Find. This option allows you to locate a specific string in the data. If you need
to locate a string, proceed to step 4.
4. Perform the following steps to locate data using the Find option:
Select String or Regexp (Regular Expression) from the drop-down list in the
center of the General Data Swipe area. Regular expressions are a way to
specify patterns that match similar strings. For example, the regular expression
‘a *b’ matches an ‘a’, zero or more spaces, and a ‘b’. A detailed description of
regular expressions can be found in the Java 1.5 Manual pages for class
java.util.regex.Pattern, in the Perl language manual, or in any of a number of
books on Regular Expressions.
Set the Offset Lines option. This option allows you to tell the component to
add lines before or after the location of the found data. Negative numbers
select a line before the matched line, while positive numbers select a line after
the matched line.
Add text to the Find text box. Once a text string is added to this text box, the
matching item in the text file is automatically highlighted.
You can adjust where the search starts with the From drop-down list. There
are three options:
• Start of File. Starts the search from the beginning of the file. This is the
default.
• Current. Starts the search from where the last read/write occurred. This
can be much more efficient when working near the end of a large file. Be
aware of where the statement will be inserted in the list of actions: the
“last read or write” is the last one before the new statement, which can be
an unexpected value if the insertion point is not at the end of the list of
actions.
• End of File. Starts the search backwards from the end of the file.
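As a sketch, the regular expression ‘a *b’ from the example above behaves as follows with java.util.regex.Pattern:

```java
import java.util.regex.Pattern;

public class RegexDemo {
    // 'a', zero or more spaces, then 'b'.
    static final Pattern P = Pattern.compile("a *b");

    // true if the pattern occurs anywhere in the input.
    static boolean found(String s) {
        return P.matcher(s).find();
    }
}
```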
5. Select Line from the drop-down list. The line number of the data appears. It is
automatically updated, based on the results of the Find operation. Normally, the
line number is relative to the start of the file, but you can specify a line number as
an offset from the current location or as a negative offset from the end of the file.
6. Verify that the correct line is highlighted. You can change the line number in the
Line text box, if necessary. You can also enter an expression or click the
expression button to open an expression editor. The matching item in the text
file is automatically highlighted.
7. Refine the selected data using the following options from the drop-down list at the
bottom of the General Data Swipe area:
Whole Line. This option highlights the entire line in the file that contains the
information specified.
Word #. This option highlights the specified word number in the text file. The
count is based on the number of words from the left margin.
Character. This option highlights the characters in the specified range, which
is specified in the corresponding text boxes.
3. Select Line or Find from the drop-down list, based on how you want to specify the
location where the swipe begins. Selecting Line allows you to specify a particular
line in the file, enter an expression, or open an expression editor. Selecting Find
allows you to specify a group of characters as the starting point.
4. Perform one of the following actions, based on the selection you made in step 3.
If you selected the Line option, enter the number of the line that you want to mark
the beginning of your swipe. You can also enter an expression or click the
expression button to open an expression editor.
If you selected Find, select String or RegExp (regular expression) from the
adjacent drop-down list; then, enter the text you wish to start from in the
corresponding text box. You can also enter an offset, which is the number of
lines before or after the search string to start the section.
5. Click the End subtab; then, use the options to set where your swipe will end. These
steps are similar to those described in step 3 and step 4; however, they are slightly
more complex. While a Start search starts from the beginning of the file, an End
search starts from the line selected by the Start tab. The Start line # option is relative
to the start of the file, while the End line # option is relative to the start line (an
offset of 0 yields a one-line section, and 1 yields a two-line section).
1. Select the Name or Item # from the drop-down list to determine how to locate the
item in your text file.
2. Select the item itself from the second drop-down list. The contents of this
drop-down list are determined by your selection in step 1.
3. Type a new parameter name in the Parameter text box, or select an existing
parameter from the drop-down list. If you create a new parameter, it is added to the
Parameter List on the right side of the dialog box. If you choose an existing
parameter, the parameter is highlighted in the list.
Only one item can be read/written at a time. You can also apply a section format to the
value of a Name/Value item. It is particularly useful to use the Vector format in order
to read an array from the value.
Note: It is not currently possible to write new entries into a Name/Value section. You
can only read or write the value of an existing name/value item.
The Table format is useful in that it can read/write a whole (one-dimensional or
two-dimensional) array at once. A new parameter is created having as many elements
as the swipe. To read/write an existing array, you must have a swipe exactly as large as
the array parameter or the parameter must be resizable. For more information on
resizable arrays, refer to the Isight User’s Guide.
The format can read/write a cell (scalar), a whole or partial row or column (a
one-dimensional array), or a sub-table (a two-dimensional array).
1. Type a parameter name in the Parameter text box, or select an existing parameter
from the drop-down list.
2. Set the table swipe coordinates. The first two text boxes represent the starting point
for the swipe, and the second two text boxes represent the ending point. Leave the
end empty to indicate that the swipe has a length of one in that dimension.
Leaving the column field blank in the start coordinates highlights the entire
referenced row. Similarly, leaving the start row field blank selects the entire
column.
Blank rows are not highlighted if selected using the table swipe coordinates.
3. Click the Read button or the Write button, based on the type of data
exchange you are performing.
4. Click Apply to save your changes.
If your selection covers more than one cell in a table and you type a new parameter
name, the parameter is created as an array as big as the swipe. You can select a cell of
the table by clicking in it. Triple-clicking will select a whole row. You can select a
row, column, or sub-table by dragging the mouse from one end to the other (for a
sub-table, drag from one corner to another).
Note: When selecting a column by dragging, what matters is which word on the line
you start and end the drag in, not what character position on the line. If the widths of
the columns vary a great deal, it may be difficult to end the drag in the same relative
column as it started in. When you release the mouse button after dragging a column,
the blue selection highlight will shrink to just the selected column(s). If too many
columns are selected, adjust the selection using the Table Swipe details panel.
4. Click the Fixed Columns tab. The contents of the tab appear.
5. Type the column boundaries in the text box; then, click OK. The numbers are the
character positions on which each column ends. For example, entering the
boundaries 5, 10, 15, and 20 creates four columns, each five characters wide.
The Vector format is similar to the Table format in that it can read/write a whole
(one-dimensional) array at once. In both formats, a new parameter is created having as many
elements as the swipe. To read/write an existing array, you must have a swipe exactly
as large as the array parameter, or the array must be resizable.
1. Type a parameter name in the Parameter text box, or select an existing parameter
from the drop-down list.
2. Enter the swipe start and end point. The points are highlighted as you enter the
information.
Each item is considered a cell, and the count is made from left to right, and from
top to bottom.
3. Click the Read button or the Write button, based on the type of data
exchange you are performing.
1. Verify that the correct data source is loaded in the Data Source area.
2. Click the Find button at the top of the editor. The Find dialog box appears.
3. Enter the text string you want to search for in the text box; then, click Next. The
string, if found in the data source, is highlighted in orange in the Data Source area.
Click Next to locate the next instance of the specified text string.
Click Close to close the Find dialog box and return to the Data Exchanger
editor.
The General Text format uses the search string as the Find target for swipes as long as
the swipe is close to the find string (currently within ten lines). To set the search target
for the general text tool to a string on the screen, select the text to use as the search
target; then, click the Find button, or right-click and select Find from the pop-up
menu.
Creating Markers
Markers are used to search for a string, and to remember where the string was found.
Then, the marker can be used as an anchor for subsequent read/write operations. There
are two ways to create a marker.
To create a marker:
1. Select a General Text section; then, click the button on the toolbar. The Marker
dialog box opens.
2. Enter the variable name, the section to search, a string to search for, and where to
begin the search.
3. Click OK.
or
1. Select some text on the line where you want the marker.
The marker is created immediately. It searches for the selected text in the selected
section, and uses the selected text (with punctuation changed to underscores) as the
marker name.
Using Markers
If you simply click on a word in a General Text section that is after a marker, the word
will be located at runtime as a certain number of lines after the marker.
You can also select a marker from the From drop-down list on the General Text Swipe
editor, when in Line mode (not in Find mode).
Note: If you are using markers, there is little reason to use the FIND mode of the
General Text swipe editor, since the Markers handle searching.
For. Creates a For loop. For more information, see “Using the For Loop
Editor,” on page 321.
While. Creates a While loop. An editor opens for specifying the condition to test
at the start of each loop. This is identical to the editor for an If statement.
Java. Adds arbitrary Java code as an action. When you click OK, the Java code
is checked for errors. Note that a single block of Java code could be split into
multiple actions.
Click the button on the toolbar to open the Calculation Editor. The Edit Calculation
dialog box appears.
The Calculator editor is similar to the Calculator component. For more information
about the Calculator component, see “Using the Calculator Component,” on page 255.
There are a few differences that include the following:
The Calculator component uses lists and buttons whereas the Calculation editor
uses menus to keep the dialog smaller.
A Data Exchanger calculation can contain multiple statements, but there must be a
semicolon after each statement (except the last). The Calculator component allows
one statement per line even without semicolons.
The Calculation Editor allows full Java statement syntax, including expressions as
subscripts of arrays, and calling methods on parameters (which are type
com.engineous.sdk.vars.Variable). This includes Java auto increment/decrement
operators (++ and --) and the compound assignment operators, such as ‘i += 1;’.
Subscripts of array parameters can use either Java notation ‘arr[i][j]’ or Fiper
syntax ‘arr[i,j]’.
In the Calculation editor, an array parameter can be resized either by calling the
‘array.setDimSize(size)’ method or with the ‘resize(array,size)’ function. It is not
currently possible to change the size of an array in the Calculator component.
Note: The same calculation editor dialog is used for IF and WHILE statements, and to
enter expressions for row and column numbers in the Vector and Table tools. The only
difference is that a calculation must contain assignment statements, whereas the other
uses require an expression that does not assign its value to a parameter.
Increment the value of an integer parameter ‘i’. All of the following are equivalent:
i++
i+=1
i=i+1
Set an element of array parameter ‘array’ to a complex expression. The subscript is
one less than the integer parameter ‘i’.
array[i-1] = array[i] * cos(x)
Change the size of array ‘outArray’ to be twice as large as the array ‘inArray’:
resize(outArray, size(inArray) * 2)
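Outside the Calculation editor, the first two examples are ordinary Java; a runnable sketch (the resize() function is Fiper-specific and has no direct equivalent here):

```java
public class CalcDemo {
    // i++, i += 1, and i = i + 1 all increment by one.
    static int incrementThreeWays(int start) {
        int a = start, b = start, c = start;
        a++;
        b += 1;
        c = c + 1;
        return (a == b && b == c) ? a : -1; // -1 would signal a disagreement
    }

    // array[i-1] = array[i] * cos(x), using Java's Math.cos.
    static double[] updateElement(double[] array, int i, double x) {
        array[i - 1] = array[i] * Math.cos(x);
        return array;
    }
}
```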
1. Click the button to open the For Loop editor. The Edit For Statement dialog
box appears.
2. Click the appropriate tab: Simple or Advanced. The tab selected controls the
statement. Switching tabs tries to convert from the simple form to the advanced
form (always works) or from advanced to simple (only works in a restricted subset
of cases).
Simple. The Simple tab allows you to edit a parameter, initial value, final
value, and increment (similar to the For Loop component). This is the basic
'for i = 1 to 10 by 1' type loop.
Advanced. The Advanced tab allows you to edit the initialization, condition,
and update expressions. This is the complete Java (or C or C++) 'for (i =
1, j = 10; xx[i]; i++, j--)'-style for loop.
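The two forms can be sketched in plain Java. In Java the condition must be a boolean, so this sketch substitutes a simple comparison for the xx[i] condition in the advanced example above:

```java
public class ForLoopDemo {
    // Simple form: for i = 1 to 10 by 1 (here, summing 1..10).
    static int sumSimple() {
        int sum = 0;
        for (int i = 1; i <= 10; i += 1) {
            sum += i;
        }
        return sum;
    }

    // Advanced form: comma-separated initialization and update expressions,
    // in the style of 'for (i = 1, j = 10; i < j; i++, j--)'.
    static int countAdvanced() {
        int steps = 0;
        for (int i = 1, j = 10; i < j; i++, j--) {
            steps++;
        }
        return steps;
    }
}
```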
3. Click the button at the end of each text box to open a calculation editor.
The editor can be used to build an expression for the corresponding text box based
on parameters, operators, and functions. A line of instructions at the top of the
calculation editor indicates the type of expression expected (for example,
assignment, integer value, logical condition).
Filtering Parameters
To alter the view of the parameters in the Parameter List:
1. Click the Filter... button at the bottom of the Parameter List. The Edit Parameter
Filtering Options dialog box appears.
2. Set the check boxes, as desired. Activating a check box causes the corresponding
parameters to be shown in the Parameter List.
3. Click OK. You are returned to the Data Exchanger editor, and your Parameter List
is updated.
1. Click the tab that corresponds to the data source that you want to delete. The
contents of the data source appear in the Data Source area.
Right-click anywhere in the Data Source area; then, select Delete Selection
from the menu that appears.
Keep in mind the following actions that will occur when you use the Delete/Close
options in the Data Exchanger component:
The button/Close Data Source menu option always closes the current data
source.
The button/Delete Statement menu option deletes a data source, section, or other
action, depending on the selection.
The Delete Section option (accessed by right-clicking in the Data Source area)
deletes the current section, or the data source if the current section is the data
source.
Deleting a section also deletes all read and write statements inside that section, and
closing a data source deletes all sections, reads, and writes in that data source.
Deleting an 'if', 'while' or 'for' action ONLY deletes the selected action and the
associated end indicator “}”. Any other statements that were inside the block are
left in place. To delete an 'if', 'for' or 'while' action and everything inside it, you
must select all the actions by clicking on the start action and shift-clicking on the
end indicator.
Actions can be moved into or out of a loop or if block by using the cut/paste
feature.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select Data
Exchanger. The following preference options are available with the Data
Exchanger component:
Show line numbers of Data Source. Specify if you want line numbers
displayed on the left side of the Data Viewer.
Show column numbers of Data Source. Specify if you want column numbers
displayed on the top of the Data Viewer.
3. Click OK to save your changes and close the Preferences dialog box.
The API for the Data Exchanger component consists of the following methods:
Avoid names that will clash with Java classes, such as System, Table, or Tool.
If the parameter name is a valid variable name, the two strings can be the same
(and they usually are).
In the above example, the Java variables sample2 and table are Java variables,
not parameters. Therefore, they are not included in the renaming comment.
Database-Specific Settings
Depending on the database you are using, the following changes need to be made in
order to use the Database component. For more information, refer to your database
documentation or contact your local database administrator.
MS Access
Some MS Access databases may need the Windows ODBC setting to create
the database.
For database license distribution, set the environment to the user database
classpath.
Oracle
The Database component is a general usage database. If you use a special Oracle
datatype such as CLOB, you need to put the ojdbc14.jar file in the Oracle
directory. You also need to add this file with the full path to the Isight class path.
For more information on adding the file, see “Modifying the Isight Class Path,” on
page 329.
DB2
If you are trying to connect to a remote DB2 database, you need to select the
following driver from the driver drop-down list when connecting to a database
using Isight:
com.ibm.db2.jcc.DB2Driver
You also need to add the following files with the full path to the Isight class
path. For more information on adding the files, see “Modifying the Isight Class
Path,” on page 329.
• db2jcc.jar
• db2jcc_license_cisuz.jar
• db2jcc_license_cu.jar
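Outside of Isight, the same driver class is loaded like any JDBC driver. A sketch of the connection shape (the host name, port, database name, and credentials are placeholders, and the db2jcc jars must be on the class path for connect() to work):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class Db2ConnectSketch {
    // jdbc:db2://host:port/database -- the standard Type 4 URL shape.
    static String jdbcUrl(String host, int port, String database) {
        return "jdbc:db2://" + host + ":" + port + "/" + database;
    }

    // Hypothetical connection details -- substitute your own.
    static Connection connect() throws Exception {
        Class.forName("com.ibm.db2.jcc.DB2Driver"); // the driver named above
        return DriverManager.getConnection(
                jdbcUrl("dbhost", 50000, "MYDB"), "user", "password");
    }
}
```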
SQL Server 2000 - Type 2 driver
If you are trying to connect to a Microsoft SQL Server database using the Type
2 driver, you need to download the following Type 2 driver files:
• msbase.jar
• mssqlserver.jar
• msutil.jar
You need a valid user name and password.
If you are trying to connect to a Microsoft SQL Server database using the Type
4 driver, you need to download the following Type 4 driver file:
sqljdbc.jar
You need a valid user name and password.
Modifying on Windows
To modify the Isight class path:
1. Click Start; then, point to Control Panel and click System. The System
Properties dialog box appears.
2. Click the Advanced tab; then, click the Environment Variables button. The
Environment Variables dialog box appears.
3. Click New (under the User variables for user area). The New User Variable dialog
box appears.
7. Find the CLASSPATH variable in the System variables list; then, click Edit.
;%FIPER_ADDPATH%
Modifying on UNIX/Linux
To modify the Isight class path:
1. Using a text editor, open the fiperenv file, which is located in the bin directory
of your Isight installation directory.
2. Add the following line in the file. Do not enter a space after the equal sign.
FIPER_ADDPATH="/users/ora92/jdbc/lib/ojdbc14.jar;
/users/IBM/SQLLIB/java/db2jcc.jar"
Understanding Limitations
The following known limitations should be noted prior to using the Database
component:
The component only allows you to interact with the database tables. No other
database objects are available. Furthermore, the component does not support
procedure executions.
The datatypes supported are as follows: string, text, varchar, int, double, float, and
decimal.
The database table must have a primary key. The primary key datatypes supported
are as follows: string, text, varchar, int, double, float, and decimal. If you use
another datatype for the primary key, you need to modify the SQL command.
The variable name must start with a non-numeric character. For example, a
parameter named “3var” is invalid. Instead, the variable should be called “var3” or
just “variable.” Any Isight variable that does not conform to this format is not
displayed in the available list of parameters.
A variable that contains a space in its name must be surrounded by single quotes in
the expression. For example, a parameter called variable one would have to be
entered in the expression as 'variable one'.
The Fiper file parameter can be mapped to the BLOB datatype, except for
MS Access, which uses OLE datatypes.
Connecting to a Database
To start using the Database component, you must first connect to the database.
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide. The Database Component Editor screen appears as shown
below.
Once you specify the type of database, the options necessary for a connection
are activated. If any of the options listed below are not accessible, they are not
needed.
Service/Database Name. Enter the appropriate service name, depending on
the database.
Host Name/Instance Name. Enter the name of the host machine to which the
component should connect.
Port. Enter the port number to which the component should connect.
Class name. Select the class name that is included in the driver file. The class
name must match the driver file.
The type of database and the service/database name are displayed at the top of the
editor. The database tables in the schema are shown on the left side of the editor.
“Viewing Data and Mapping Parameters,” on page 334 describes how to view
the table data, and how to map the data to Isight parameters.
2. Select a table; then, click the Show Data button. The data from the selected
table is displayed.
The right side of the editor shows the selected columns with all the rows of the
table.
3. Click the icon to the left of a table to expand it. Additional information appears.
The icon represents the primary key of the table. This column cannot be
mapped.
The icon represents columns with datatypes not supported by Isight. The
data in these columns cannot be displayed or mapped.
4. Select the column(s) or row(s) from the current table that represent the data that
will be used in or replaced by Isight. Be aware of the following issues with regard
to data selection:
If multiple columns and rows are selected with the same datatype, an aggregate
or array variable is created.
If multiple columns and only one row are selected with the same datatype, an
aggregate or array variable is created.
If multiple columns and only one row are selected with different datatypes, an
aggregate or multiple scalar variables are created.
If only one column and multiple rows are selected, an array variable is created.
5. Determine how you want to map your parameters. The following options are
available:
To map individual parameters manually, by selecting data in the corresponding
table, proceed to step 6.
To map parameters based on existing column information, proceed to step 10.
6. Specify the Isight parameter that will be mapped to the selected data using one of
the following techniques:
7. Select the mapping direction for the parameter. The following two options are
available:
8. Click the button. The parameter mapping is added to the table at the bottom of
the editor.
The table at the bottom of the editor displays the various parameters created. The
columns in the table show the parameter name, mode of the parameter, the table
from which the rows are selected, column names, and the query formed by the
selected rows.
Note: You can remove an existing mapping by selecting it in the table at the
10. Select the data you wish to map from the database table. You can select individual
cells, or you can select an entire column by clicking on the column’s header.
12. Select the appropriate parameter attribute; then, click OK. The data is mapped and
added to the table at the bottom of the editor.
14. Select the data you wish to map from the database table. You can select individual
cells, or you can select an entire column by clicking on the column’s header.
15. Click the button. The Column and value dialog box appears.
17. Click OK to return to the editor. The data is mapped and added to the table at the
bottom of the editor.
Note: The name Isight assigns to the new mapping (for example, $D, $I) cannot be
edited.
Note: You can remove a record from the database by selecting a cell in the
18. Return to step 5 if you want to map additional parameters, or click OK to save the
changes and close the editor. Click Apply to save the changes without closing the
editor.
Note: A SQL exception will be thrown at runtime if you entered invalid values into
the database. The exception appears in the runtime log.
Modifying Queries
You can modify a query formed by mapping the row data to a parameter.
1. Select the mapping whose query you want to edit from the table at the bottom of
the editor.
2. Click the Edit button in the bottom right corner of the editor. The Advanced Query
dialog box appears.
The text of the query is displayed in the Modify Query area at the top of the dialog
box. The query is broken into independent parts, each displayed on its own line.
You can select a line and modify the corresponding part of the query.
Remove a line from the query by clicking it; then, click the Remove from
Query button.
Select any query at the top of the dialog box; then, click the buttons provided
at the bottom of the Modify Query area to insert the corresponding text.
These buttons help you to form the query string and are listed below:
• (
• AND
• OR
• NOT
• )
• WHERE
Use the ORDER BY button to list the query in ascending or descending order.
Use the options in the Add where clause area to create a Where clause
connecting the column names and a constant or parameter, depending on the
selection from the corresponding drop-down list. Once created, add the clause
to the query by clicking the Add to Query button.
Specify the number of rows the query should fetch in the corresponding text
box.
Review the updated query in the Final Query area at the bottom of the dialog
box. This area is for viewing purposes only and is not editable.
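The buttons above assemble standard SQL. As an illustration (the table and column names here are made up, and this is not the component's internal query builder), a Java sketch of the kind of string that results:

```java
public class QuerySketch {
    // Assemble a SELECT with a WHERE clause and an ORDER BY direction,
    // mirroring the WHERE / AND / OR / ORDER BY buttons.
    static String buildQuery(String table, String whereClause,
                             String orderBy, boolean ascending) {
        return "SELECT * FROM " + table
                + " WHERE " + whereClause
                + " ORDER BY " + orderBy + (ascending ? " ASC" : " DESC");
    }
}
```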
4. (optional) Click the Check Query button to verify the correctness of the query in
terms of syntax. If any syntax mistakes are found, a dialog appears with the
message “The query is incorrect”.
5. (optional) Click the Manual Edit button to manually edit the query. A warning
message appears, stating that this option is at your own risk.
Important: This procedure is not supported, and any problems caused by manual
query editing cannot be remedied by SIMULIA. Use this option with extreme
caution.
6. Click OK to save the query and close the editor. The editor will not close if the
query is incorrect. Instead, an error message appears. You are returned to the
Database component editor.
parameter mapping
macro execution
Parameter mapping includes mapping Isight parameter values to Excel cell values and
mapping Excel cell values to Isight parameter values. Both scalars and arrays can be
used. Macro execution includes running Excel predefined macro methods and
functions.
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide. The Component Editor dialog box appears.
Once the editor is opened, you need to specify an Excel workbook with which to
interact.
2. Click the Browse... button. The Open dialog box appears.
3. Navigate to the Excel workbook you want to use; then, click the Open button.
Note: If a message appears regarding the existence of Named Cells, this means that
Isight has detected that the workbook contains names/tags associated with certain
cells, an indication that these might be items of importance. In this case, you can
do some instant parameter mapping. Click Yes to map the parameters now. Click
No to map them manually at a later time or to not map them at all. For more
information on mapping these parameters manually, and on this option in general,
see “Named Cells/Ranges Mapping,” on page 352. The workbook is loaded into
the editor.
Important: Once the workbook is loaded into the Excel editor, any modifications
made to the original workbook will not appear in the editor’s emulator unless the reload
button ( ) is clicked or the editor is closed and re-opened. In addition, no direct
interaction with the workbook is possible through the emulator. It is for display and
selection purposes only.
4. Click the Store workbook in the model check box if you’d like the workbook
saved as part of the Isight model file. This option may be useful if you wish to
allow this component to be distributed for execution on any Windows machine
without requiring that the workbook file exist on that machine.
Once the workbook has been loaded, you can map parameters or specify macros to
execute.
Mapping Parameters
There are four different ways to map Isight parameters to and from an Excel workbook
using the Excel component editor:
Direct cell mapping. This type of mapping allows you to select a single cell to
map to or from an Isight scalar parameter or array element, or to select a range of
cells to map to/from an Isight array parameter of the same size. For more
information, see “Direct Cell Mapping,” on page 347.
Name-Value mapping. This type of mapping allows you to select a 2-by-n or
n-by-2 region of cells that represent name-value pairs and automatically define
mappings using those names. For more information, see “Name-Value Mapping,”
on page 349.
Named Cells/Ranges mapping. This type of mapping allows you to automatically
define mappings for cells that have names/tags associated with them in the
workbook. For more information, see “Named Cells/Ranges Mapping,” on page 352.
File Mapping. This type of mapping allows you to map contents from a file into
Excel or from Excel into a file, with various options defining what content should
be mapped. For more information, see “File Mapping,” on page 354.
Since the parameters in the list might actually be from other components, you
can sort the parameters in the list by component. Right-click in the parameter
entry, then select Group Parameters. The setting is now saved in the
preferences for this component. There is a Preference option to set the initial
default for this option for all applicable component editors. You can also
choose to not group parameters for the currently selected component. For more
information on these Preference options, refer to the Isight User’s Guide.
To map a new Isight parameter, type the name of the new Isight parameter in
the Parameter text box. You can also click the button to define a new
parameter in more detail.
3. Select the mapping direction. The following two options are available:
4. Click the Add Mapping button . The mapping is added to the list at the bottom
of the editor. The parameter mode, type, and value are set automatically. In the
following example, we mapped the value from a parameter to the cell A3.
Note: You can delete a listed mapping by selecting it and clicking the Remove
Mapping button . You can also edit the details for any defined mapping by
Name-Value Mapping
To quickly define mappings for ranges that contain names and values in adjacent cells:
1. Select the range of workbook cells that you want to map, which contains both the
names for the parameters you are mapping and the associated values. The selected
range must contain either 2 rows or 2 columns, with the names of the parameters
assumed to be in the left or top cells.
2. Select the mapping direction. The following two options are available:
3. Click the down-arrow button next to the ; then, select Add Name-Value
Mapping from the menu. Scalar parameters are automatically created based on the
names in the selected range and are mapped to the corresponding adjacent value
cells. Your mappings are added to the list at the bottom of the editor.
Note: You can delete a listed mapping by selecting it and clicking the Remove
Mapping button . You can also sort the listed mappings by clicking the
column heading of the column that you want to sort by. Sorting the listed mappings
does not affect the order of execution of those mappings, which is controlled on the
Advanced tab as described in “Using Advanced Options,” on page 357.
4. Verify that the mapping direction is set properly for each parameter as described in
step 2. If it is not, you can change the mapping direction using the drop-down
menu provided in the Action column for that mapping.
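Conceptually, this mapping pulls name-value pairs out of the selected 2-by-n or n-by-2 region. A minimal sketch of that pairing rule (the function name and the list-of-lists cell representation are illustrative assumptions, not Isight API):

```python
def name_value_pairs(cells):
    """Pair names with adjacent values from a 2-row or 2-column range.

    cells is a list of rows; names are assumed in the top or left cells.
    """
    if len(cells) == 2:                       # 2 rows: names on top
        return dict(zip(cells[0], cells[1]))
    return {row[0]: row[1] for row in cells}  # 2 columns: names on left
```

Each resulting name becomes a scalar parameter mapped to its adjacent value cell.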
If you have just loaded an Excel workbook, and the Named Cells Found dialog
box appeared, click Yes.
If you are working with an already loaded Excel file and want to invoke the
Named Cells/Ranges dialog, click the down-arrow button next to the ; then,
select Add Named Cells mapping from the menu.
2. Select the named cells that you would like to be automatically mapped. You can
choose a single mapping, multiple mappings, or all mappings using the Select All
button.
3. (optional) Change the default action of the mappings by selecting a new action
using the button; then, click the Change button.
4. Click OK. The new mappings are listed at the bottom of the Excel editor.
File Mapping
File mapping allows you to map contents from a file into Excel or from Excel into a
file, with various options defining what content should be mapped.
When inserting contents from a file into Excel, each line in the file will be added as a
row and the line will be split at spaces, commas, and tabs in order to define the items to
put into different columns in that row. Values that have spaces must have quotation
marks around them in order to be interpreted as a single value to insert into a cell.
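The splitting rule can be sketched as a small tokenizer (a regex-based approximation of the behavior described above, not the component's actual implementation):

```python
import re

# Either a quoted value, or a run of characters that are not delimiters
_TOKEN = re.compile(r'"([^"]*)"|([^,\s]+)')

def split_line(line):
    """Split a line at spaces, commas, and tabs; quoted values stay whole."""
    return [quoted if quoted else bare for quoted, bare in _TOKEN.findall(line)]
```

For example, the line `a b,"c d"` followed by a tab and `e` yields four cell values, with `c d` kept as one value because of the quotation marks.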
1. Click the drop-down arrow next to the button; then, select Add mapping
from/to a file from the menu that appears. The Map To or From a File dialog box
appears.
4. Enter the name of the file in the File text box or click the Browse... button to
navigate to the file.
5. Select one of the following:
Insert entire file contents starting in cell. Select this option to have the entire
contents of the file inserted into the workbook starting at the currently selected
cell.
Match parameter names from file with selected range. This option allows
you to select a row of parameter names in the Excel worksheet to be matched
against parameter names found on the first line of the file. The entire column
of data for those matching parameters will then be copied from the file to the
corresponding column in the Excel worksheet.
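The matching step for the second option works roughly like this (a sketch, assuming a whitespace-delimited file whose first line holds the parameter names; names and structure are illustrative):

```python
def match_columns(file_lines, sheet_names):
    """Copy entire columns from the file for names that match the sheet."""
    header = file_lines[0].split()            # parameter names on the first line
    rows = [line.split() for line in file_lines[1:]]
    matched = {}
    for name in sheet_names:                  # names selected in the worksheet
        if name in header:
            j = header.index(name)
            matched[name] = [row[j] for row in rows]
    return matched
```

Columns whose names appear in both the file and the selected range are copied; unmatched names are ignored.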
8. Enter the name of the file in the File text box or click the Browse... button to
navigate to the file.
9. Determine if you want to Export entire selected sheet or Export only selected
range.
11. Click Finish. The new mappings are listed at the bottom of the Excel editor.
Important: If no macros are present in the workbook you loaded into the Excel
component editor, all macro options are disabled.
3. If applicable, set the available macro options. If no macros are present in the
workbook you loaded into the editor, the macro options are disabled.
b. Specify the macro arguments using the Arguments button. Note that you can
supply arguments as either constants or Isight parameter values.
4. If necessary, modify the action execution order using the and buttons
below the Action Execution Order list.
5. Click the Save Excel file after execution check box if you’d like changes that
were made to the Excel file during execution to be saved. You can type the path
and file name into the corresponding text box, or you can click the Browse...
button and navigate to the file. By default, the current path and file name are
automatically added to the text box.
6. Click the File Parameter button if you’d like to include the file as an
output file parameter from the Excel component. This button toggles between a
depressed (active) and raised (disabled) position. The default setting for this button
is determined using the Excel component preferences as described in “Setting
Excel Component Preferences,” on page 364.
7. Click the Show Excel during execution check box at the bottom of the tab if you
want Excel to be visible when you execute your model.
8. Click the Close workbook check box if you want Excel to close the opened
workbook (when not selected, Excel and the workbook remain open after the
model is executed); then, use the corresponding drop-down list to determine when
Excel should close. The following options are available:
when job completes. This option closes the workbook after the entire Isight
execution is completed.
after each execution. This option closes the workbook after each execution of
Excel. Excel may be executed numerous times during a single job.
Note: You can set default behavior for this option using the component preferences
as described in “Setting Excel Component Preferences,” on page 364.
9. (optional) Click the More Options... button; then, set the following options as
desired.
Re-use open workbook of same name. Click this check box if you want the
component to use the same workbook that was open for a previous component
in the model. This option allows you to modify (or extract data from) the same
workbook. If this option is not selected, which is the default, each component
will open its own copy of the workbook.
Use a dedicated Excel process. By default, the Excel component will share a
single Excel application process among all instances of an Excel component,
whether they are in the same model or different models. Select this option if
you would like a separate Excel process to be started and dedicated to this
component instance alone.
Automatically truncate large worksheets to show 500 rows. Click this
option to allow only the first 500 rows on any worksheet to be loaded into the
editor. This allows large spreadsheets to load more quickly. Note that this will
not allow you to select cells in rows beyond row 500 to define mappings, but
any mappings already defined will remain defined and will still execute.
These steps are not necessary if you are executing using the standard Isight desktop
(Standalone) execution.
2. Type dcomcnfg in the Open text box; then, click OK. The Component Services
dialog box appears.
3. Click Component Services on the left side of the dialog box. Folder options
appear on the right side of the dialog box.
5. Double-click the DCOM Config folder. The contents of the folder appear.
6. Right-click the Microsoft Excel Application icon; then, select Properties from
the menu that appears. The Microsoft Excel Application Properties dialog box
appears.
8. Click the Customize radio button in the Launch and Activation Permissions area;
then, click the Edit... button. The Launch Permission dialog box appears.
9. Click the Add... button. The Select Users, Computers, or Groups dialog box
appears.
10. Type the necessary username (be sure to include the computer/domain name) in
the Enter object names to select text box.
Note: You can click the Check Names button to verify that the username you
entered is valid. You can also search for the name using the Advanced... button. If
the username you specify matches more than one known user, the Multiple Names
Found dialog box appears, allowing you to pick the exact user.
11. Click OK. You are returned to the Launch Permission dialog box, and the
username you entered now appears in the list at the top of the dialog box.
12. In the Permission for <username> area, click the following check boxes in the
Allow column:
Local Launch
Local Activation
13. Click OK. You are returned to the Microsoft Excel Application Properties dialog
box.
14. Click OK. You are returned to the Component Services dialog box.
16. Proceed to “Starting the Editor and Selecting a Workbook,” on page 344 for
information on using the component.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select
Excel.
The following preference options are available with the Excel component:
Default workbook close option. Determine the default behavior for the Excel
workbook that is referenced with the component. You can choose to leave the
workbook open after execution, or to close it after each execution of the
component or after the entire job completes.
Note: Selecting this option will increase the execution time for the Excel
component (by however long it takes Excel to save the workbook).
Run auto-macros when loading Add-ins. The Excel component will always
load any Excel Add-Ins that are installed in the Excel application; however, it
will not invoke any auto-macros for those Add-Ins unless this preference is
selected. An example in which running auto-macros for Add-Ins is required
would be if the Add-In added menu items to the Excel application (which is
done through macros).
Note: Selecting this option will increase the start-up time for the Excel
application when it is executed from the Excel component.
3. Click OK to save your changes and close the Preferences dialog box.
The API for the Excel component consists of the following methods:
Data Exchanger. The Data Exchanger is the original parser for Isight and Fiper.
The Data Exchanger supports directly parsing name/value files, tables of data, and
vectors of numbers, in addition to general text find/replace.
The Data Exchanger allows parsing instructions to be augmented with arbitrary
Java code via the Dynamic Java interpreter.
iSIGHT File Parser. The iSIGHT file parser (Advanced Parser) was the original
parser in iSIGHT. The iSIGHT File Parser uses a simplified language, similar to
TCL, to specify parsing instructions. The File Parser allows a fair amount of
programming to do complex parses.
Fast Parser. The Fast Parser was added to iSIGHT in release 8.0. This is a very
simple and fast parser, especially for inserting parameters into input files. While
the Fast Parser can read fixed-length columns of numbers, it has no programming
facilities for conditional or looping operations.
In general, the Data Exchanger is recommended for new models, as it is the most
powerful. The other two parsers exist mostly to ease converting iSIGHT models to
Isight - the same parsing instructions can be used in both models, and you can directly
import the parsing instructions or template files from an iSIGHT model into an Isight
model.
The Fast Parser is the easiest to use of the three parsers, and can rapidly read very large
output files if you are selecting text from only a small portion of the file. When
preparing input files, it can be much easier to use the Fast Parser than the other parsers
- you simply select the location to write the value and specify the parameter - no need
to specify how that location is found. It can also read (and, with limitations, write)
columns of numbers in one simple action. However, the numbers must be in the same
relative position on successive lines - there is no easy way to skip lines. Only
fixed-length columns can be handled.
This ease of use may be useful even for new models if the parsing fits within the Fast
Parser's limitations.
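The fixed-length-column restriction means every line is sliced at the same character offsets. A sketch of what such a read effectively does (offsets and function name are illustrative, not the parser's internals):

```python
def read_fixed_columns(lines, spans):
    """Extract values from fixed character positions, identical on every line."""
    return [[line[start:end].strip() for start, end in spans] for line in lines]

# Two 5-character columns, at the same offsets on both lines
table = read_fixed_columns(["  1.0  2.0", " 10.0 20.0"], [(0, 5), (5, 10)])
```

If a value drifts to a different character position on a later line, this kind of read breaks, which is why the Fast Parser requires consistent column positions.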
While the Fast Parser can handle files of unlimited size at runtime, the editor can
handle about a 4.5 MB template file. To read larger files, you have to use an external
editor to edit the file down to just the sections of interest, and use the smaller file as the
template.
The iSIGHT File Parser is useful if you are already familiar with it from using iSIGHT.
You’ll find that the General Text tool in the Data Exchanger can do almost everything
the iSIGHT File Parser does.
It is possible to mix the parsers in the same model - using the Data Exchanger, iSIGHT
File Parser, and Fast Parser components. You can also use any of the parsers in the
Simcode component, though a given Simcode can use only one of the parsers and must
use the same parser type for the input and output parse. By mapping a file parameter
between parsing components, you can even process the same file using multiple
parsers, taking advantage of what each parser does best. For example, you can prepare
a portion of an input file using Advanced Parser and then insert it into Fast Parser using
the Include File option. On the output side, you can extract part of the output data using
Fast Parser and part using the iSIGHT File Parser.
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide.
3. Depending on the mode you selected, click the Load Input File or Load Output
File button to select the file to read from or write to.
You are prompted to make a copy of this file to use as the template file. If you
select Yes, then you can begin using the Fast Parser. If not, then you must click the
Load Template File button to locate the existing template file.
4. Proceed to “Using the Fast Parser,” on page 370.
Defining a Tag
You can specify both input and output tags using Fast Parser. For more information,
see one of the following topics:
3. Specify the parameter for the tag using one of the following methods:
Type the name of the parameter in the Parameter Name text box to create a
new parameter.
Select an existing parameter using the corresponding drop-down list.
4. Specify the Substitution Type for the tag. The following options are available.
Scalar.
Array Element. When this option is selected, a text box appears that allows
you to specify the Array Index.
Note: For this substitution type, you tag the top of the column.
5. (optional) Specify the Substitution Format for the tag. The following options are
available:
Best Fit. Formats the number using the best fit format depending on its value
(it may be written as an integer or as a real).
6. Specify the number of characters allowed by the format in the Max Field Width
text box; then, specify the number of digits after the decimal point in the Precision
text box, if it is available.
7. Click the Leading Plus check button if you want a plus sign (+) to appear in your
input text file before the positive numbers in your tag.
8. Click the Left Justify check button if you want the numbers in your tag to be left
justified in your input text file. Numbers are right justified by default.
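Taken together, Max Field Width, Precision, Leading Plus, and Left Justify amount to a numeric format specification. A Python sketch of the equivalent formatting (the mapping to Python format specs is an assumption for illustration, not the Fast Parser's actual code):

```python
def format_tag(value, width, precision, leading_plus=False, left_justify=False):
    """Write a number with a fixed field width, precision, sign, and justification."""
    align = "<" if left_justify else ">"   # right-justified by default
    sign = "+" if leading_plus else "-"    # "-" shows a sign only for negatives
    return f"{value:{align}{sign}{width}.{precision}f}"
```

For example, with a width of 10 and precision of 2, the value 3.14159 is written as `      3.14`, or `     +3.14` with Leading Plus enabled.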
9. Click OK to save your tag information and return to the Fast Parser editor.
3. Specify the parameter for the tag using one of the following methods:
Type the name of the parameter into the Parameter Name text box to create a
new parameter.
Select an existing parameter using the corresponding drop-down list.
4. Specify the Substitution Type for the tag. The following options are available:
Scalar.
Array Element. When this option is selected, a text box appears that allows
you to specify the Array Index.
Array Column. Using this option means that you want to extract every value
of the column into its own Isight parameter. When selected, two text boxes
appear that allow you to specify the Initial Array Index and the Final Array
Index.
Note: For this substitution type, you tag the top of the column.
Multiple Values. Using this option means you want to extract either the last,
min, max, min(abs), max(abs), sum, or average value of multiple tags into a
single Isight parameter. When this option is selected, two text boxes appear.
The first text box allows you to specify the Array Index. The second text box
allows you to specify the Number of Values.
If you select Multiple Values, the Number of Values option indicates the
number of rows to fill in with the tagged value.
Note: For this substitution type, you tag the top of the column.
5. (optional) Specify the Substitution Format for the tag. The following options are
available:
Best Fit. Formats the number using the best fit format depending on its value
(it may be written as an integer or as a real).
6. Specify the maximum number of characters allowed in the format using the Max
Field Width text box.
7. Specify the failed run value in the Failed Parse Value text box. This value gets
sent back to Isight when a run fails to produce an output value to be parsed. For
example, you might use this to send back a bad objective function value when a
simulation code fails to analyze a particular design point.
8. (Multiple Values option only) Use the Multiple Extractions drop-down list to tag
multiple outputs with the same variable name, and return to the parameter file
either the last one read, the minimum, the maximum, the minimum absolute value,
the maximum absolute value, the sum, or the average.
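The Multiple Extractions choices correspond to simple reductions over the tagged values, roughly:

```python
# Each option reduces the list of tagged values to a single parameter value.
EXTRACTORS = {
    "last":     lambda values: values[-1],
    "min":      min,
    "max":      max,
    "min(abs)": lambda values: min(values, key=abs),
    "max(abs)": lambda values: max(values, key=abs),
    "sum":      sum,
    "average":  lambda values: sum(values) / len(values),
}

# e.g. max(abs) keeps the sign of the value with the largest magnitude
worst = EXTRACTORS["max(abs)"]([1.5, -5.0, 3.2])
```

This table is only a sketch of the selectable behaviors; the actual reduction happens inside the Fast Parser.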
9. Click OK to save your tag information and return to the Fast Parser editor.
Defining a Marker
Markers are used to help tags locate output variables. Once a marker is inserted, the tag
following the marker uses it. The location of all subsequent tags is relative to the
nearest marker above it.
To define a marker:
3. Specify the search criteria for the marker. The following options are available:
Last Tag. This option searches for the marker of interest starting at the last tag
in the output file.
Beginning of File. This option searches for the marker of interest starting at
the beginning of the file.
4. Use the Find Marker occurrence # text box to find the nth occurrence of a
marker and start parsing from that point. For example, you could search for the
12th occurrence of the word “stress” and extract data from that point forward.
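The occurrence search behaves like scanning for the nth line containing the marker text, as in this sketch (the real parser works on the raw file; the function name is illustrative):

```python
def find_marker(lines, marker, occurrence=1, start=0):
    """Return the index of the line holding the nth occurrence, or -1."""
    seen = 0
    for i in range(start, len(lines)):
        if marker in lines[i]:
            seen += 1
            if seen == occurrence:
                return i        # parsing starts relative to this point
    return -1
```

Tags defined after the marker are then located relative to the line the search stopped on.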
1. Place the cursor in the area of the template file where you'd like to include the
outside file.
3. Enter the path and name of the file in the Include File text box; or, click the
Browse... button to navigate to the file. The name of the file appears in the Include
File text box.
4. Click OK. The file is added to the specified location in the Fast Parser template
file.
Setting Delimiters
You can specify the characters used in an output parse to separate words. The Fast
Parser supports spaces, tabs, and commas.
To set a delimiter:
1. Place the cursor at the location where you want to begin using the delimiter.
3. Click the text box beside the delimiter you want to use (Spaces, Tabs, Commas).
Deleting a Tag
You can remove any item from the Fast Parser template, including tags, markers,
included files, and delimiters.
To delete a tag:
1. Click the tag you want to delete to select it.
2. Right-click the tag; then, select Delete from the menu that appears. The tag is
deleted.
For additional information on iSIGHT, refer to the iSIGHT User’s Guide. This book is
included on the iSIGHT 10.0 CD. The iSIGHT component is only supported in
iSIGHT 9.0.5 or higher. It is recommended that you have iSIGHT 10.0 installed to use
this component.
1. Double-click the component icon to start the component editor. For more
information on inserting components, and accessing component editors, refer to
the Isight User’s Guide. The Component Editor dialog box appears.
Important: If you receive a message stating that iSIGHT must be installed to use
the iSIGHT component, iSIGHT is not currently installed in the system path. You
must install iSIGHT in the system path for Isight to use this component.
2. Click the Browse... button to select the description file you want to use. The Select
Description File dialog box appears.
3. Select the description file you want to use; then, click Open. The description file’s
parameters are displayed. This example is using the beamMin.desc file.
4. (optional) Click the Edit... button to set the description file’s file parameter
settings. The description file information is saved as an Isight file parameter so that
it can be mapped from one component to another. For more information on these
settings, see “Setting Input File Parameter Information,” on page 384. For more
information on parameter mapping, refer to the Isight User’s Guide.
5. Use the iSIGHT Run Mode button to set the desired run mode. For more
information on these modes, refer to the iSIGHT User’s Guide.
6. Select the parameters that you want to expose as Isight parameters in your model
by clicking the corresponding check boxes. You can select all user-defined
parameters in a list using the Select All button at the bottom of the column.
7. Click the Support Files tab. The contents of the tab appear.
8. Specify the support files for iSIGHT in the area on the left side of the tab:
To add a file, click the button. The file parameter wizard appears, which
helps you specify the support file you need. After it is specified, the file is
displayed on the tab. For more information on using this wizard, refer to the
Isight User’s Guide.
To delete a file, select the file you want to delete; then, click the button.
You can select multiple files to delete at once.
To change a file’s settings, select the file whose information you wish to alter;
then, click the Edit... button. The Configure File dialog box appears. The
settings for each support file are saved as an Isight file parameter so that they
can be mapped from one component to another. For more information on these
settings, see “Setting Input File Parameter Information,” on page 384. For
more information on parameter mapping, refer to the Isight User’s Guide.
9. Specify the files to be read from iSIGHT in the area on the right side of the tab:
To add a file, click the button. The file parameter wizard appears, which
helps you specify the support file you need. After it is specified, the file is
displayed on the tab. For more information on using this wizard, refer to the
Isight User’s Guide.
To delete a file, select the file you want to delete; then, click the button.
You can select multiple files to delete at once.
To change a file’s settings, select the file whose information you wish to alter;
then, click the Edit... button. The Configure File dialog box appears. The
settings for each output file are saved as an Isight file parameter so that they
can be mapped from one component to another. For more information on these
settings, see “Setting Output File Parameter Information,” on page 388. For
more information on parameter mapping, refer to the Isight User’s Guide.
10. Click the Description File tab. The description file is shown in MDOL format.
12. Click the Execution Options tab. The contents of the tab appear.
Retry iSIGHT license check ... times, at intervals of .... This option allows
you to set the component to check for an available iSIGHT license after a
specific number of minutes or seconds. You would want to set this option
when the number of iterations exceeds the number of licenses you have. For
example, if you wish to run an iSIGHT component inside of a DOE
component, and the task will run 200 iterations but you only have 10 licenses,
the task run may fail because you have run out of licenses. You can avoid this
problem by retrying the license check, which allows you to take advantage of
iSIGHT’s 10x parallelism instead of having the job run sequentially.
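In effect, this option wraps the iSIGHT launch in a retry loop, along these lines (the error type and names are hypothetical stand-ins; Isight handles the license check internally):

```python
import time

def launch_with_retry(launch, retries=5, interval_seconds=60):
    """Retry launch() while it raises a license-unavailable error."""
    for attempt in range(retries + 1):
        try:
            return launch()
        except RuntimeError:          # stand-in for "no iSIGHT license available"
            if attempt == retries:
                raise                 # retries exhausted: fail the run
            time.sleep(interval_seconds)
```

With 10 licenses and 200 iterations, retrying lets waiting iterations pick up licenses as earlier ones finish, instead of failing outright.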
Show iSIGHT during execution. This option is useful only for demos and
gaining access to iSIGHT’s Oversight graphics application after a run. It
allows you to set whether or not iSIGHT appears when the component is
executed. Do not use this option unless you set the affinities to run the job on
your workstation.
Close iSIGHT after execution. This option allows you to specify whether or
not you want to keep iSIGHT open after execution, if you have set iSIGHT to
be shown during execution. Do not use this option unless you set the affinities
to run the job on your workstation.
Reinitialize iSIGHT for each execution. This option is useful for ensuring
that data from previous runs does not influence the next run. It allows you to
update parameter information from the previous execution prior to the next
execution of the component.
This section is divided into the following topics, based on the type of file parameter
you are using:
Note: If you are not sure of the type of file parameter you are using, refer to the text
and graphic at the top of the Configure File dialog box. It discusses the parameter type
and shows a graphic representation of how the parameter is used.
1. Access the Configure File dialog box by clicking one of the Edit... buttons on the
component editor. These buttons are available adjacent to the Description File text
box and on the Support Files tab.
The Source area configures the source of the data. For a mapped input file
parameter, this information is overridden by parameter mapping. It may still be
convenient to configure a data source, just in case the user runs this component
alone (not as part of a larger workflow).
2. Click the Location drop-down list; then, select one of the following four options
for the data source:
<None>. The data source is not configured. This setting is the default for new
input file parameters. Having no configuration is normal for a file that will be
mapped from another component in the workflow. If such a file parameter is
not mapped, or the component is run by itself, the component receives an
empty file.
File. The data is stored in a regular file on the file system. The Name text box
holds the base name of the file, excluding all directory path information. You
can click the Browse... button to navigate to the file you want to use. The Path
text box holds an expression which defines the directory path of the file.
The meaning of this expression is determined by the setting of the Options list
described below:
• Absolute path. The Path text box holds the actual name of the directory
containing the named file, in absolute path form. The system will try to
access the file under this directory at runtime; usually this works only when
running locally or when the directory is on a shared file system. This is the
default for inferred input file parameters. This type of file parameter is not
safe for distributed or parallel execution.
• Runtime directory. The Path text box holds the substitution key
{rundir}, an expression designating the component’s runtime working
directory. You may add subdirectories to this expression. At runtime, the
expression is replaced with the actual current working directory, and the
system will try to access the file there.
• Model directory. The Path text box holds the substitution key
{modeldir}, an expression designating the directory containing the
model file which declares this component. You may add subdirectories to
this expression. At runtime, the expression is replaced with the actual
directory that contains the model file (if there is a model file), and the
system will try to access the file there. This type of file parameter is not
safe for distributed or parallel execution.
• Shared file system. The Path text box holds a substitution key {root
root-name}, where root-name is a symbolic root defined in the user
preferences or in a Fiper Station properties file. This option is used when
the file is on a shared file system that is mounted in different places on
different machines. You may add subdirectories to this expression. At
runtime, the system obtains the definition (if any) of this expression (for
example, from the Fiper Station environment to which the component was
dispatched); then the expression is replaced with the defined directory, and
the system will try to access the file there. For more information, refer to
the Isight User’s Guide.
Click in the Name or Path text box to specify the insertion point for a
parameter substitution. Select a parameter from the Parameters drop-down
list. A properly formatted parameter substitution key {var parameter-name}
is inserted into the selected location.
In Model. The contents of the file will be stored inside the Isight model. Isight
takes care of transferring the data to where the component is being executed.
This option is the most convenient because it eliminates any concerns about
shared file systems or parallel execution. Model files, however, are limited in
size. Generally in-model files should not be larger than a few hundred
thousand characters (about 2000 lines). The absolute limit is dependent on the
amount of available memory, but is usually around 5 MB. Click the Reload
From... button to select the file to be copied into the model, or to re-load a
modified file into an existing model.
URL. Allows you to specify a file residing on a server. All forms of URLs
supported by Java may be used, though http: is the most common form. You
may specify the amount of time to wait for the server to respond, or “0” to wait
until the underlying protocol times out. Note that a URL type file parameter
with a file: URL is equivalent to a File-type file parameter.
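The various substitution keys described above ({rundir}, {modeldir}, {root root-name}, and {var parameter-name}) all behave as simple template expansions. The Python sketch below illustrates that expansion logic only; the function name, arguments, and error handling are invented for illustration and do not represent Isight's actual implementation.

```python
import re

def expand_path(path, rundir, modeldir, roots, params):
    """Hypothetical expansion of Isight-style substitution keys.

    {rundir}    -> the component's runtime working directory
    {modeldir}  -> the directory containing the model file
    {root name} -> a symbolic root defined in user preferences
    {var name}  -> the current value of a model parameter
    """
    def sub(match):
        key = match.group(1).split()
        if key[0] == "rundir":
            return rundir
        if key[0] == "modeldir":
            return modeldir
        if key[0] == "root":
            return roots[key[1]]        # symbolic root lookup
        if key[0] == "var":
            return str(params[key[1]])  # parameter value substitution
        raise ValueError("unknown key: " + match.group(0))
    return re.sub(r"\{([^}]+)\}", sub, path)
```

For example, expanding "{rundir}/inputs" with a runtime working directory of /tmp/run42 yields /tmp/run42/inputs.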
3. Click the Preview… button (at the bottom of the Read From area) to display the
contents of the selected file. For In Model file parameters, the contents of the file
can be edited from the Preview dialog box; for other types, the contents are read-only.
4. Specify the file type using the corresponding drop-down list. You can specify a file
as Binary or Text.
Note: The iSIGHT component editor conservatively sets the file type of all
inferred input file parameters to binary. For most iSIGHT applications, most of
these file parameters should be reset to text.
5. (For Text files only) Click the Encoding drop-down list; then, specify the
encoding to use when converting between bytes and characters. In a Locale (a
system setting that includes the language, number formats, and character set in
use) that uses multi-byte characters (Japanese, Chinese, Korean), there is a default
encoding used to convert bytes into characters. Most text files are written using
this default encoding, but a file occasionally uses a different encoding, which you
must then specify explicitly.
Note: The Encoding drop-down list only appears if you have selected the Show
File Type encoding on the Files Tab option from the Preferences dialog box. For
additional information on preference options, refer to the Isight User’s Guide.
6. Select one of the following options from the Destination area of the Files tab.
These options allow you to choose where the data will be put while the component
runs:
Fixed file name. This option should be used if the component expects its input
in a specific place. Fill in the file name in the text box provided, or browse to
the file you want. Normally, this will be a simple file name with no path,
indicating a file in the runtime working directory. An absolute path may be
used in the odd case of a program that demands that its input be in a specific
directory, though such a file parameter is not safe for parallel or distributed
execution.
Automatic. This option is used most often. When activated, the Isight system
assigns a name to the file and passes that name to the component.
As for Input file parameters, the bottom of the Files tab for an Output file
parameter has Source and Destination areas.
2. Specify the file name information in the Name and Path text boxes. The data is
stored in a regular file on the file system. The Name text box holds the base name
of the file, excluding all directory path information. You can click the Browse...
button to navigate to the file you want to use. The Path text box holds an
expression which defines the directory path of the file; the meaning of this
expression is determined by the setting of the Options list as described below:
Absolute path. The Path text box holds the actual name of the directory
containing the named file, in absolute path form. The system will try to access
the file under this directory at runtime. An absolute path can be used in the odd
case of a program that insists on writing its output to a specific directory. This
type of file parameter is not safe for distributed or parallel execution.
Runtime directory. The Path text box holds the substitution key {rundir},
an expression designating the component’s runtime working directory. You
may add subdirectories to this expression. At runtime, the expression is
replaced with the actual current working directory, and the system will try to
access the file there. This is the default for inferred output file parameters.
Model directory. The Path text box holds the substitution key {modeldir},
an expression designating the directory containing the model file which
declares this component. You may add subdirectories to this expression. At
runtime, the expression is replaced with the actual directory that contains the
model file (if there is a model file), and the system will try to access the file
there. This type of file parameter is not safe for distributed or parallel
execution.
Shared file system. The Path text box holds a substitution key {root
root-name}, where root-name is a symbolic root defined in the user
preferences or in a Fiper Station properties file. This option is used when the
file is on a shared file system that is mounted in different places on different
machines. You may add subdirectories to this expression. At runtime, the
system obtains the definition (if any) of this expression (for example, from the
Fiper Station environment to which the component was dispatched); then the
expression is replaced with the defined directory, and the system will try to
access the file there. For more information, refer to the Isight User’s Guide.
The File name text box may be left blank; in that case, Isight will assign a name
for the output file and pass that name to the component during execution.
3. Specify the file type using the corresponding drop-down list. You can specify a file
as Binary or Text.
Note: The iSIGHT component editor conservatively sets the file type of all
inferred output file parameters to binary. For most iSIGHT applications, most of
these file parameters should be reset to text.
4. (For Text files only) Click the Encoding drop-down list; then, specify the encoding
to use when converting between bytes and characters. In a Locale (a system setting
that includes the language, number formats, and character set in use) that uses
multi-byte characters (Japanese, Chinese, Korean), there is a default encoding used
to convert bytes into characters. Most text files are written using this default
encoding, but a file occasionally uses a different encoding, which you must then
specify explicitly.
Note: The Encoding drop-down list only appears if you have selected the Show
File Type encoding on the Files Tab option from the Preferences dialog box. For
additional information on preference options, refer to the Isight User’s Guide.
5. In the Destination area, specify where the data will be put after the component
finishes running. The following three options are available in this area:
Fiper File Manager. Isight takes care of storing the contents of the file as part
of the run results. This option is the simplest one, and it is the default. There is,
however, a limit to the size of a file saved this way; usually the limit is around
4 MB. The data is physically stored in the Run Results database.
Specific location. The data is copied from the working directory to the
specified location. This setting must be an absolute path, since there is no
concept of a “working directory” after the component finishes executing. The
file name substitutions described in the Isight User’s Guide can be used to file
the data in locations of your choice. If there are no substitutions, the same file
will be written by every run and the model is not safe for parallel execution.
Note: If the File Name text box in the Read From area is left empty and the
Specific location option is selected, the data will not be written to the runtime
working directory. Instead, the absolute path described in the Write To area
will be passed to the component, and the component can write the data directly
to the specified location. This option is more efficient for very large output
files, though it can be difficult to configure for parallel and distributed
execution.
None. The data is not copied after the component executes. Instead, the
absolute path to the Read From area is stored in the results database. If the
Read From name is a simple file name (not an absolute path), the data is
written to the working directory, which is normally deleted after the component finishes executing.
Edit the Isight startup file (fiperenv) as described below (based on your
operating system):
Note: If you are submitting your job to run through an ACS in the Fiper environment,
the temporary directory you see at design time has no bearing upon the temporary
directory that exists at runtime. The runtime directory is determined by the individual
Fiper Station installations. You will receive warnings for each execution of the
iSIGHT component if it executes on a Fiper Station whose temporary directory
contains spaces. For more information on using Fiper Stations, refer to the Fiper
Installation and Configuration Guide that matches your ACS combination.
Currently, the iSIGHT File Parser component supports only the Advanced Parser
in iSIGHT. File Description Command (FDC) files and the
instruction segments of description files can be used to move data between Isight
parameters and files.
This section explains how to make use of pre-existing parses and how to edit them.
While it is possible to create new parses with this component, that feature is not
covered in detail in this section. For more information regarding the workings of the
FDC files, refer to the iSIGHT User’s Guide.
Type Chooser. This area, in the top left corner of the editor, determines the mode
of the parse (input or output). Any bi-directional code is handled internally and
does not have to be specified.
Actions. This group of buttons, in the upper right corner of the editor, provides the
basic commands available to the user to produce, edit, and test the parse.
Necessary Files. This area, on the left side of the editor and divided into tabs,
displays the files that are required and constitute the parse. These files can be
selected or created, edited, or viewed.
Isight Parameters. This area, on the right side of the editor, displays a list of the
parameters that the component deems necessary for the parse. The parameters are
placed into the component so that Isight has access to them.
Understanding Terminology
A simcode consists of a flow of parameters into an input parser, which produces a
file that serves as the input for an OS Command component. That component in turn
produces an output file that the output parse reads to return parameters back to Isight. The
following terminology for individual parse types is used with Isight:
Template. The file, used with input parses, that indicates the format for the
produced file.
Input FDC. The file, used with input parses, that describes how the parameters
should be written to the produced file.
Produced. The file created by the parse; it is better known as the input file
because it is used as the input for some other component (typically
OS Command in Isight).
Input Parameters. The parameters whose values come from Isight and are placed
into the produced file.
Output. The file, used with output parses, that was produced by another component
(typically OS Command in Isight) and that is used as input for the current parse.
Output FDC. The file, used with output parses, that describes how to get the
parameters from within the output file.
Output Parameters. The parameters whose values come from the output and are
brought into Isight.
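The terminology above describes a round trip: Isight parameters flow through an input parse into a produced file, an OS Command consumes that file and writes an output file, and an output parse pulls parameter values back out. The toy Python sketch below mimics that flow; the @name@ template markers and the "name = value" output format are invented for illustration and are not FDC syntax.

```python
def produce_input(template, params):
    """Input parse: fill an (invented) template with parameter values."""
    out = template
    for name, value in params.items():
        out = out.replace("@" + name + "@", str(value))
    return out

def parse_output(text, wanted):
    """Output parse: pull 'name = value' lines back into parameters."""
    found = {}
    for line in text.splitlines():
        if "=" in line:
            name, _, value = line.partition("=")
            name = name.strip()
            if name in wanted:
                found[name] = float(value)
    return found
```

For example, produce_input("width = @width@", {"width": 2.5}) yields the produced file contents, the external program (the OS Command step) would generate an output file, and parse_output("area = 6.25", {"area"}) recovers the output parameters.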
1. Double-click the component icon to start the component editor. For more
information on inserting components and accessing component editors, refer to the
Isight User’s Guide. The iSIGHT File Parser Component Wizard dialog box
appears.
The wizard is the first thing to appear when you open the editor of a newly added
component. The initial screen presents you with three choices to determine how
you will import or create your parse.
2. Determine how you want to obtain your FDC file. The following options are
available:
Create a new FDC via an editor. Proceed to “Creating a New Parse using
iSIGHT’s Advanced Parser,” on page 405 for more information on using this
option.
You can also use the component editor itself to create or import a parse. For more
information, see “Using the Component Editor to Create a Parse,” on page 406.
1. Access the component wizard and select the appropriate option as described in
“Opening the Editor and Determining Usage,” on page 395.
3. Click the Browse button; then, specify the description file that contains the parse
you want to use.
4. Click Next. The Select a FDC from the list below screen appears.
Within description files are segments where entire parses are defined. After
selecting a description file, you are given the names of the parses within the
description file and their contents.
5. Click the parse you want to use on the left side of the screen. Information about the
parse is displayed on the right side of the screen.
The wizard uses the description file to fill in the component as best it can. Also,
any files listed are provided, and the component will attempt to determine all of the
necessary parameters.
Click the button to open the iSIGHT Advanced Parser interface. For more
information, see “Creating a New Parse using iSIGHT’s Advanced Parser,” on
page 405.
Click the button to view the parse, if you are working with an input parse.
If all of the necessary information is present (FDC file, template, parameters
with desired values, etc.), the Produced tab displays what the file would
look like for the parse. If a produced file is not specified, the parse is still
previewed; however, you are warned that you need to specify the file before
execution. If you are working with an output parse and all the necessary
information is present (the “output” file populated with the desired
values, the FDC file, and the parameters that you expect to get back
from the parse), the preview fills in the values in the parameter table.
Manually alter the parse or its parameter settings. For more information, see
“Updating an Imported Parse,” on page 406.
1. Access the component wizard and select the appropriate option as described in
“Opening the Editor and Determining Usage,” on page 395.
Input. An input parse takes the variables and their values in Isight and makes
use of the FDC instructions to produce a file that is used as an input for a
particular program. If this parse is bi-directional, it will also make changes to
the variables’ values.
Output. An output parse takes the output file of a particular program and
makes use of the FDC instructions to extract the Isight variables and their
values. If this parse is bi-directional, it will also make use of some variables
and their values in Isight.
4. Click Next. The Select the FDC from a file screen appears.
5. Click the Browse button; then, specify the FDC file you want to use. Information
about the parse appears on the right side of the screen.
6. Edit the parse, if desired; then, click Save. If you do not want to save your changes,
click Cancel.
7. (Input parses only) Click Next. The Select a Template from a file screen appears.
The template file is optional when writing an FDC, but is still very common and
should be included if available.
8. (Input parses only) Click the Browse button; then, specify the template file you
want to use. Information about the file appears on the right side of the screen.
9. Click Next. The Select the “Input/Output File” from a file screen appears. The
exact title of this screen varies depending on the type of parse you selected in
step 3.
10. Click the Browse button; then, specify the output file you want to use. Information
about the file appears on the right side of the screen.
11. Edit the output file, if desired; then, click Save. If you do not want to save your
changes, click Cancel.
Click the button to open the iSIGHT Advanced Parser interface. For more
information, see “Creating a New Parse using iSIGHT’s Advanced Parser,” on
page 405.
Click the button to view the parse, if you are working with an input parse.
If all of the necessary information is present (FDC file, template, parameters
with desired values, etc.), the Produced tab displays what the file would
look like for the parse. If a produced file is not specified, the parse is still
previewed; however, you are warned that you need to specify the file before
execution. If you are working with an output parse and all the necessary
information is present (the “output” file populated with the desired
values, the FDC file, and the parameters that you expect to get back
from the parse), the preview fills in the values in the parameter table.
Manually alter the parse or its parameter settings. For more information, see
“Updating an Imported Parse,” on page 406.
1. Access the component wizard and select the appropriate option as described in
“Opening the Editor and Determining Usage,” on page 395.
2. Click Next.
If you have iSIGHT installed on your system, an iSIGHT file parser is available
screen may appear.
3. Determine the type of parse you want to create (input or output) using the options
near the bottom of the screen.
4. Click Finish to launch the iSIGHT Advanced Parser. Once you have completed
the new parse, it will be added to the Isight File Parser component. For more
information on using the iSIGHT Advanced Parser, refer to the iSIGHT User’s
Guide that was included with your iSIGHT media.
1. Access the component editor; then, select the type of parse you want to create
using the Parse Type drop-down list.
2. Click the Browse button on each of the tabs on the left side of the editor; then,
specify the file for that part of the parse. You can select either an FDC file or a
description file. If a description file is selected, you are given the option of
choosing a single FDC file from within the description file.
Once you select the FDC file, the component tries to determine all the parameters
that are necessary, but, just like when using the wizard, it is a good idea to make
sure that the parameters needed are present.
Note: It is possible to create a new parse either by using the simple text editors on
the left side of the editor or by starting the iSIGHT Advanced Parser, if it is
available on your local machine (using the button).
Update the parameter information on the right side of the editor by either adding
new parameters, deleting existing parameters, or adding new members to
aggregate parameters. You can also sort the parameter information by clicking the
column headings. For more information on using parameters, refer to the Isight
User’s Guide.
Manually change parse information directly in the text boxes on the left side of the
editor. Click Save, near the bottom of the editor, to save your changes. If you do
not want to save your changes, click Cancel.
Click the button to open the wizard, which helps guide you through importing a
parse. For more information, see “Importing an Existing Parse,” on page 396.
Click the button to open the iSIGHT Advanced Parser interface. For more
information, see “Creating a New Parse using iSIGHT’s Advanced Parser,” on
page 405.
Click the button to view the parse, if you are working with an input parse. If all
of the necessary information is present (FDC file, template, parameters with
desired values, etc.), the Produced tab displays what the file would look like
for the parse. If a produced file is not specified, the parse is still previewed;
however, you are warned that you need to specify the file before execution.
If you are working with an output parse and all the necessary information is
present (the “output” file populated with the desired values, the FDC file, and
the parameters that you expect to get back from the parse), the preview fills in
the values in the parameter table.
Building a Simcode
The Simcode component can use the Data Exchanger, iSIGHT File Parser, or the Fast
Parser to prepare the input file and extract data from the output file. This is done by
setting the Default Parser type Preference option for the Simcode component before
adding the Simcode to the model. For more information on setting this option, see
“Setting Simcode Component Preferences,” on page 495.
Once you choose the parser type, you need to create two mappings in order for the
model to work. First, you must map the produced file of the input parse to the input file
of the OS Command component. Second, you must map the output file of the OS
Command component to the output file of the output parse (remember, it is an input
to the parse; it is just called an “output file”). Once these basic steps are complete, you can
configure the driver as necessary for the given model.
1. Double-click the component icon to start the component editor. For more
information on inserting components and accessing component editors, refer to the
Isight User’s Guide.
Note: You can automate this step by specifying the information in the Isight
Preferences. Once specified, the Mail Server and From text boxes are
automatically populated when the Mail component editor is accessed. For more
information on setting these preferences, refer to the Isight User’s Guide.
4. Add e-mail addresses to the recipients list. You can use the three common e-mail
recipient specifications: To, Cc, and Bcc. You may also use commas to separate
multiple recipients on a single line. You can click the To text label to access the Cc
and Bcc options.
Note: The From, To, and Subject entries can be defined using parameters whose
values are inserted when the component executes, allowing the e-mail to be
defined in a more dynamic fashion. To set this option, use the notation “{var
<parameter-name>}” where “<parameter-name>” is replaced by the name of the
parameter whose value you want to insert.
You can sort the parameters in the list by component. Right-click in the list, and
then select Group Parameters. The setting is now saved in the preferences for this
component. There is a Preference option to set the initial default for this option for
all applicable component editors. You can also choose to not group parameters for
the currently selected component. For more information on these Preference
options, refer to the Isight User’s Guide.
8. Place the cursor in the message body at the location where you’d like the
parameter information to be displayed; then, click the button. The name of the
parameter is added to the message body.
When this component executes, the parameter value will be displayed in place of
the parameter name in the recipient’s e-mail.
Note: If you wish to add multiple parameters in the format “name = value” to the
message body, click the button; then, select the parameters that you want to
add using the Select Parameter dialog box. Click OK when your selection is
complete.
9. Repeat step 7 and step 8 for each parameter you wish to send.
Note: The e-mail will be sent once the component is executed, whether
independently or within the model. If you want to send the e-mail immediately
while in the editor, click the Send button .
10. Proceed to step 13.
11. To add a file as an attachment, click the Attachment tab ; then, perform one
of the following actions:
To add a file from disk as an attachment, click the button and navigate to
the file. Note that a file parameter will be created for the selected file.
Note: To remove an attached file parameter, select it; then, click the button.
The API for the Mail component consists of the following methods:
get/set("from"). This method allows you to get/set the address of the email
account from which this email should be sent. If left undefined, an attempt will be
made to set this value from your Isight preferences.
add("to"). This method allows you to add the specified email address as a
recipient of this email. To add a recipient with a delivery mode of cc or bcc, use the
add("recipient") method.
call("send"). This method allows you to send the email using the current
settings/configuration.
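To show the call pattern of the three methods above, the sketch below uses an invented Python stand-in class; the real Mail component API handle, its return types, and the add("recipient") delivery-mode arguments are not represented here.

```python
class MailComponent:
    """Invented stand-in that records calls in the style of the API above."""
    def __init__(self):
        self.fields = {}
        self.recipients = []
        self.sent = False

    def set(self, name, value):   # in the style of get/set("from")
        self.fields[name] = value

    def get(self, name):
        return self.fields.get(name)

    def add(self, name, value):   # in the style of add("to")
        if name == "to":
            self.recipients.append(value)

    def call(self, name):         # in the style of call("send")
        if name == "send":
            self.sent = True

# Hypothetical usage: configure the sender, add a recipient, and send.
mail = MailComponent()
mail.set("from", "user@example.com")
mail.add("to", "team@example.com")
mail.call("send")
```

If the "from" field is left undefined in the real component, Isight attempts to fill it from your preferences, as noted above.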
Specifically, Isight will look for the following system environment variables to add
them to the system path to ensure that the necessary MATLAB files can be found at
execution time:
MATLAB=<full path to the location of the bin directory in your MATLAB
installation> (for example, c:\software\MATLAB7\bin)
MATLAB_SHLIB=<full path to the location of the directory containing the
MATLAB shared libraries in your MATLAB installation> (for example,
c:\software\MATLAB7\bin\win32)
If these system environment variables are not defined, the locations noted above must
already be on the system path in order for the component to work properly.
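The lookup described above amounts to checking two environment variables and adding their values to the system path. A minimal Python sketch of that lookup, assuming the same variable names; this is an illustration only, not Isight's actual startup code.

```python
def matlab_path_additions(env):
    """Directories that would need to be prepended to the system path,
    following the MATLAB and MATLAB_SHLIB conventions described above.
    If neither variable is set, nothing is added and the MATLAB bin
    directories must already be on the path."""
    return [env[var] for var in ("MATLAB", "MATLAB_SHLIB") if var in env]
```

For example, with MATLAB set to c:/software/MATLAB7/bin and MATLAB_SHLIB unset, only the bin directory would be added.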
The MATLAB component editor is divided into two tabs: Contents and Options.
The Contents tab specifies the actions taken by the component (i.e., the data
transfers between Isight and MATLAB) and the computations performed by
MATLAB. The Options tab gives you access to useful adjustments to the behavior
of the component at runtime.
The MATLAB component Contents tab allows for actions to be defined in any of
three phases:
Initialize. Actions that will be executed only the very first time that this
component executes within a job (model execution). This phase is useful when
you need to initialize or create a variable, or load some functions into the
MATLAB session, and only need it done once at the very beginning.
Execution Order. Actions that will be executed every time this component
executes.
Finalize. Actions that will be executed only when the MATLAB session is
shutting down. Since the component can be configured to leave MATLAB
open after the job completes, these actions might not get run at the end of the
job, but may instead be invoked later when Isight is cleaning up persistent
components.
Warning: Isight provides a scratch directory for MATLAB to use during the
Initialize and Execution phases, but not during the Finalize phase. Your Finalize
commands should not assume that the current directory setting in MATLAB refers
to a usable directory. You may disregard this warning if the MATLAB component
is configured to always run in a fixed directory.
The standard usage is simply to define actions in the Execution phase. The
MATLAB component will start with three pre-defined actions in the Execution
phase to represent the most common usage by default:
Input Mappings for defining mappings from Isight parameter values to
MATLAB variables.
Commands for defining the command to execute.
Output Mappings for defining mappings from MATLAB variables back to Isight
parameter values.
Note: These default actions are simply provided for convenience and can be
deleted or renamed as desired. They are empty to start with and the details of each
of these must be filled out as needed.
2. Click the tab that corresponds to the phase for which you need to create actions.
3. Click the Add Action button . The Action Type Selection dialog box
appears.
4. Click the button that corresponds to the action type (Mapping or Command) that
you want to create; then, click OK. A new action of the specified type will be
created and displayed in the actions table.
The actions table presents the following information for each action:
Use?. Specifies whether or not this action should be executed. This option
allows you to leave your actions defined, but turn them on and off as desired.
5. Use the and arrows located below the table to change the order of
the actions. You may also delete actions by clicking the button.
6. (optional) Edit the details of any action by selecting it in the actions table and using
the panel displayed on the right side of the component editor. You can use this
panel to configure the properties of the action.
Proceed to the following sections for details on how to edit actions:
7. Click the Options tab. The Options tab appears as shown below.
8. (optional) Click the Show MATLAB during execution check box if you do not
wish to have MATLAB running in the background. This option is selected by
default. When selected, MATLAB is launched and is viewable during the
execution of your model.
Note: If you deselect this check box (telling Isight to hide MATLAB during
execution), but you have instructed MATLAB to produce a plot, the plot will still
appear during execution.
9. (optional) Click the When to close MATLAB drop-down list to choose when
MATLAB should close; then select one of the following options:
Never. The component will never attempt to close the MATLAB session. It
must be closed manually.
Note: If you choose this option, it is recommended that you leave the Show
MATLAB during execution check box selected.
When job completes. The MATLAB session will remain open and be re-used
during the entire duration of the job (model execution) and will be closed after
the job completes.
After each execution. A new MATLAB session will be started, used, and shut
down every time this component executes.
10. (optional) Click the M-Script Error Response drop-down list to control how the
component responds to MATLAB errors that occur when Commands are executed;
then, select one of the following options:
Fail Execution on All Errors. The component stops processing Mappings and
Commands if any MATLAB error occurs while executing a Command, and
flags this component run as failed.
Ignore Errors Containing This Text. When this option is selected, a text
field appears below the drop-down list in which you can enter text. The
component checks if any MATLAB error occurs while executing a Command,
and if so retrieves the MATLAB error message. If this message contains the
entered text as a substring, the error message is written to the log file, then
execution proceeds to the next Mapping or Command; otherwise the
component stops processing Mappings and Commands, and flags this run as
failed.
Ignore All Errors. The component disregards the MATLAB error report and
assumes the Command executed correctly. Execution proceeds to the next
Mapping or Command.
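The three options above amount to a small decision rule for whether a Command's MATLAB error flags the run as failed. The Python restatement below is illustrative only; the shorthand mode names are invented.

```python
def run_failed(mode, error_message, ignore_text=""):
    """Return True if the component run should be flagged as failed.

    mode is one of "fail_all", "ignore_containing", or "ignore_all"
    (invented shorthand for the three drop-down options above)."""
    if error_message is None:      # the Command produced no MATLAB error
        return False
    if mode == "ignore_all":
        return False
    if mode == "ignore_containing":
        # Log and continue only if the entered text is a substring of
        # the MATLAB error message; otherwise fail the run.
        return ignore_text not in error_message
    return True                    # "fail_all": any error fails the run
```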
11. (optional) Select Native code API or Java Socket from the Drive MATLAB with
drop-down list. If you select Java Socket, enter a command in the MATLAB
Command text box or click the Browse button to navigate to the command.
Note: For new MATLAB components, this option will default to the current
setting of the MATLAB Preferences option. This setting should be changed only if
this specific component must use a different interface or run a specific (different)
MATLAB executable (which is rarely the case). For information on the
Preferences option, see “Setting MATLAB Component Preferences,” on page 425.
Note: If you defined the environment variable MATLAB (see “Setting up the
Environment,” on page 413), you do not need to enter a command in the
Preferences dialog box; the component will launch the MATLAB executable from
that directory. You may also specify the location of the MATLAB executable by
running Isight from the command line with the command argument
-Dfiper.comp.matlab.command=....
Important: The Java Socket interface has been developed in conjunction with
MATLAB version 7.5 (R2007b) and will not work with earlier versions of
MATLAB.
Defining Mappings
Mapping actions are how Isight passes information to and retrieves information from
MATLAB. A single mapping action can (and typically does) involve mapping more
than just one parameter to/from MATLAB. All that is required to define a mapping is
to specify the Isight parameter and the MATLAB variable name involved in the
mapping.
To define mappings:
1. Set up an action type as described in “Starting the Editor and Adding Actions,” on
page 414.
2. Select the Mapping action for which you want to define the mappings in the
actions table on the left side of the editor. An area for defining the mappings
appears on the right side of the editor.
Note: You can edit the name used to identify the Mapping action in the Mappings
text box at the top of this panel.
3. Select an existing Isight parameter from the drop-down list above the Mapping
table or type the name of a new parameter in the text box to create the entry
directly.
Note: If you type in the name of a new parameter, a scalar parameter of type “real”
will be created by default. If you want to define a new parameter that is something
other than a scalar “real” parameter, click on the button to create a new Isight
parameter.
You can sort the parameters in the list by component. Right-click in the list, and
then select Group Parameters. The setting is now saved in the preferences for this
component. There is a Preference option to set the initial default for this option for
all applicable component editors. You can also choose not to group parameters for
the currently selected component. For more information on these Preference
options, refer to the Isight User’s Guide.
5. Click the button to add a parameter. The new parameter is added to the
table.
Note: If an Isight parameter has not been specified, a message appears informing
you to select an Isight parameter.
A row appears in the Mapping table with the Isight parameter, the mapping arrow,
and the name of the MATLAB variable. By default the name of the MATLAB
variable is the same as the name of the Isight parameter with the exception that
spaces are replaced with “_”. You can modify the name of the MATLAB variable
as needed directly in the table.
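The default naming rule described above is simple enough to sketch directly. The function name below is hypothetical, used only to illustrate the rule:

```python
def default_matlab_name(isight_name):
    """Default MATLAB variable name for a mapped Isight parameter:
    the parameter name with spaces replaced by underscores."""
    return isight_name.replace(" ", "_")
```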
If an entire Isight array parameter (as opposed to just a single element of the array)
is selected, the entire contents of the array are mapped to the MATLAB variable
(for mappings defined to MATLAB) or set from the MATLAB variable (for
mappings defined from MATLAB). If the Isight array parameter is defined as
“resizable”, then when mappings from a MATLAB variable are executed the size
of the array will be adjusted to match the size of the MATLAB variable. Similarly,
when mappings to a MATLAB variable are executed the size of the MATLAB
variable will be adjusted to match the current array parameter size.
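The resizable-array behavior described above can be sketched conceptually in Python. This is an illustration only, not Isight's actual mapping machinery; the function name and signature are assumptions:

```python
def map_from_matlab(isight_array, matlab_values, resizable):
    """Conceptual sketch: copy values returned from a MATLAB variable
    into an Isight array parameter.

    If the parameter is 'resizable', it simply adopts the size of the
    MATLAB variable; otherwise the sizes must already match."""
    if resizable:
        return list(matlab_values)
    if len(matlab_values) != len(isight_array):
        raise ValueError("fixed-size array does not match MATLAB variable size")
    return list(matlab_values)
```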
Defining Commands
While defining Mappings allows you to pass information to/from MATLAB, the
Commands are what allow you to invoke the desired functions within MATLAB.
Commands can simply be typed directly in or loaded from a MATLAB M-Script file.
Note: The MATLAB component will automatically set the current directory in the
MATLAB session to be the working directory for the component. If you want it set to
some other directory, you will need to provide an explicit “cd” command in your set of
MATLAB commands. Additionally, a variable called “ESI_MATLAB_rundir” is
automatically set in the MATLAB session to allow you to refer to the working
directory of the component, if necessary.
Important: Because the MATLAB component explicitly uses the “cd” command, any
command or M-Script used in the MATLAB session should not define a variable with
the name “cd” since this essentially overrides the command and can lead to errors.
1. Set up an action type as described in “Starting the Editor and Adding Actions,” on
page 414.
2. Select the Command action for which you want to define the specific commands in
the actions table on the left side of the editor.
An area for defining the commands appears on the right side of the editor.
Note: You can edit the name used to identify the Command action in the
Command text box at the top of this panel.
3. Either type commands directly into the large text box on the right side of the editor
or click the Open... button to load the commands from an existing MATLAB
M-Script file.
Once you have selected the file you will be asked whether you wish to store the
M-Script in the model.
If you select Yes, the current contents of the M-Script file will be read and stored
within the component (i.e., as if the contents had been typed into the text box).
This command can be run no matter where the component is executed. If the
M-Script file is later edited, this command must also be edited if it is to run the
updated script.
If you select No, an Isight input File parameter, configured to the absolute path and
name of the M-Script file, is created and added to the component. When the
command is run, the current contents of the M-Script file will be fetched via that
File parameter; hence this command can be run only if that path is accessible
wherever the component is executed. If the M-Script file is later edited, the
component will automatically run the updated script.
Important: You must check for syntactical errors in the given commands. Isight
will not check for proper syntax before execution.
4. (optional) Right-click in the large Command text area; then, click Scan for
variables to map. If you choose this option, the MATLAB editor will scan the
script currently displayed in the box for MATLAB variable names. The MATLAB
variable names that appear in the script will be used to create potential Isight
parameters that can be mapped to or from the corresponding MATLAB variables.
Note: If you use this option, you may want to create all your commands before
creating your mappings.
2. Expand the Components folder on the left side of the dialog box; then, select
MATLAB.
3. Select Native code API or Java Socket from the Drive MATLAB with drop-down
list. If you select Java Socket, enter a command in the MATLAB Command text
box or click the Browse button to navigate to the command.
Note: If you have defined the environment variable MATLAB (see “Setting up the
Environment,” on page 413), you do not need to enter a command in the
Preferences dialog box; the component will launch the MATLAB executable from
that directory. You may also specify the location of the MATLAB executable by
running Isight from the command line with the command argument
-Dfiper.comp.matlab.command=.... Keep in mind that a command
entered in the Preferences dialog box supersedes either of these options.
Important: The Java Socket interface has been developed in conjunction with
MATLAB version 7.5 (R2007b) and will not work with earlier versions of
MATLAB.
1. Double-click the component icon to start the component editor. For more
information on inserting components and accessing component editors, refer to the
Isight User’s Guide. The Component Editor dialog box appears.
2. Proceed to one of the following sections, based on the tab you need to use:
Basic tab. This tab allows you to specify the command and argument or
interpreter (shell) and script, and any I/O redirection. It can also be used to set
affinities to control which Fiper Stations the command is executed on when
using an ACS in the Fiper environment. For more information on the options
on the Basic tab, see “Setting Basic Options,” on page 428.
Advanced tab. This tab allows you to set conditions which indicate that the
program has failed. You can also direct standard output or standard error to the
job log, and indicate that a failed run should be retried. There are also
additional options for controlling the environment in which the command runs.
Required Files tab. This tab is used to specify the files that must be placed in
the working directory before the command is run, and the files to save from the
working directory. When the component is part of a Simcode component, it
also lists the files passed by the input and output file parses. For more
information on the options on the Required Files tab, see “Setting Required
Files Options,” on page 440.
Grid tab. This tab is used to access the Grid Adapter. This tab appears only if
the Enable use of Grid Plugin preference is enabled. The Grid Adapter allows
OS Command and Simcode components to submit commands to run on a
separate computer using the remote execution commands SSH or RSH, or the
LSF grid execution environment. For information on activating the Grid
Plugin, see “Configuring the OS Command Component,” on page 427. For
more information on the Grid Adapter, see “Setting Grid Adapter Options,” on
page 441.
Multi line (shell) scripts. For more information on the multi line options, see
“Setting Multi Line Script Options,” on page 432. The OS Command preference
page lists the available interpreters, and allows you to add interpreters. For more
information about the OS Command preferences, see “Configuring the OS
Command Component,” on page 427.
2. Click the Find Program... button; then, navigate to the executable you want to
use. You can also type the name of the executable directly into the text box below
the Find Program... button.
Note: You can type a simple command name, such as "nastran," if you know
the program is already on the PATH of the Isight gateway and/or Fiper Station.
Otherwise, you must enter the full path to the executable.
3. Type the rest of the command line following the program name. The syntax used
for the command line is a subset of that used by the Microsoft Windows Command
line (cmd.exe) or the UNIX Bourne Shell (sh):
Use < to supply standard input to the program from a file. Follow the < with
the name of the file, or insert a file parameter (see the example below).
A space between the < and the file name is optional.
Use > to direct standard output of the program to a file. If standard output is
not redirected, it is lost.
Use 2> to redirect the standard error stream to a file. By default, standard error
is also sent to the job log.
Use 2>&1 to redirect standard error to the same file as standard output. This
setting is only useful if standard output is also redirected into a file.
Note: Other shell or cmd.exe punctuation such as “|” (pipe), “&&” or “;” (multiple
commands), and “&” (background) are not supported.
For example, to run the program myprog with arguments a and b c d, reading
input from the file sample.txt and writing output to the file bar.txt, the
command line would look like:
myprog a "b c d" <sample.txt >bar.txt
4. Click the Distribute Executable check box to copy the executable file into the
model. The program name turns green to indicate the program is in the model. The
program will be copied into the working directory before the command is run. To
undistribute the executable, clear the Distribute Executable check box. If the
program has changed on disk, you can update the copy stored in the model by
clearing and then checking the Distribute Executable check box.
The Distribute Executable feature is only appropriate for small programs. Larger
programs, and programs that require a network license, must be pre-installed on
the Fiper Station.
Note: You cannot directly execute Windows built-in commands, such as DIR or
COPY. To run such commands, use the cmd command with the /C argument
followed by the built-in command. For example:
cmd /c dir dir1
5. Click the Verify Commands button to check the syntax of the command line. If
there is a syntax error (such as a missing quote or no file name after >), an error
message appears describing the problem. If the command is valid, a dialog box
pops up that shows how the command line will be interpreted - what the command
name is, where arguments are split, and any I/O redirection. The dialog box will
show the VALUE of substituted parameters, instead of their name. The Command
Preview text box, located near the top of the tab, displays how the command will
be issued during runtime. Any error in the command line will also be shown in the
Command Preview text box.
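The tokenization behavior described above (quoted arguments kept together, redirections split out) can be approximated with Python's shlex module. This is a simplified sketch only; Isight's actual parser is richer (for example, it also handles 2>&1 and a space between < and the file name):

```python
import shlex

def split_command(line):
    """Rough sketch of OS Command line tokenization: POSIX-style
    splitting, with <, >, and 2> redirection tokens separated out."""
    args, redirects = [], {}
    for tok in shlex.split(line):
        if tok.startswith("2>"):
            redirects["stderr"] = tok[2:]
        elif tok.startswith(">"):
            redirects["stdout"] = tok[1:]
        elif tok.startswith("<"):
            redirects["stdin"] = tok[1:]
        else:
            args.append(tok)
    return args, redirects

args, redirects = split_command('myprog a "b c d" <sample.txt >bar.txt')
# args      -> ['myprog', 'a', 'b c d']
# redirects -> {'stdin': 'sample.txt', 'stdout': 'bar.txt'}
```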
6. You may insert a parameter as an argument by selecting the parameter from the
Parameter drop-down list; then, click the Insert button . The parameter name
is added to the Arguments text box at the current cursor position and is highlighted
in green, indicating that the value will be substituted at runtime. To quickly add a
new parameter, type the parameter name in the Parameter box and click on the
button. The new parameter will have data type Real, but this can be changed on the
Parameters tab on the Design Gateway.
To delete a parameter from the arguments list, click on the parameter and press the
Backspace key.
If a parameter value might contain spaces, you should put double quotes around
the parameter reference (for example, "parm 1").
For a file parameter, the absolute path to the file is substituted into the command
line. This can be very useful for programs that expect the name of their input file
on the command line. It is not necessary to put quotes around substituted file
parameters. File names are never split, even if they contain spaces.
File parameters can be used after the I/O redirection symbols <, >, or 2> to
redirect input/output to/from the file parameter.
Note: You can create a new parameter using the button. For more information,
refer to the Isight User’s Guide.
7. Set the available options in the Affinities area. These options are used to control
which Fiper Stations may execute the component when using an ACS in the Fiper
environment. You can specify the host name, operating system, software, or other
information pertaining to the component. The Affinity options behave exactly the
same as the Affinities on the Component Properties editor. For more information
on the properties, see “Editing Component Properties,” on page 25.
8. Perform one of the following options:
1. Select the appropriate interpreter from the Type drop-down list. Information for
five interpreters is provided by default: Bourne Shell, C Shell, K Shell, Bash, and
Windows Batch. You can add or modify the list of available interpreters using the
OS Command Preferences. For more information about the preferences, see
“Configuring the OS Command Component,” on page 427. The multi line
component editor dialog box appears.
2. Type any interpreter arguments in the Command Arguments text box. These
arguments precede the script file in the argument list for the interpreter.
Note: Any additional arguments to Windows cmd.exe must come before the
required ‘/C’ option.
4. Select the Encoding (character set) to be used when the script is written to a disk
file. This is sometimes needed when running in an Asian locale.
Note: This setting is visible only if the Show File Type Encoding option on the
Files tab is selected on the Preferences dialog (Files and Directories). For more
information about this option, refer to the Isight User’s Guide.
5. Select the Line Endings to use when the script is written to a disk file. The Default
option is recommended. The only time you would need to change the line endings
is if you are running a Unix script from a Windows Gateway via the Grid option. In
this case, you probably will have to force Unix line endings in the script, since
most Unix shells cannot handle Windows line endings. The available Line Endings
are as follows:
Default. This setting leaves the line endings as they were found in the input
file. Isight does not attempt to change the line endings. If the file has to be
copied for some reason, local line endings are used. In the OSCommand
editor, this option is effectively equivalent to the Local option below, since the
script is always re-written to disk.
Local. This setting writes the file with the local line endings for the machine
on which the model is running. This is the location of the Isight Gateway and
Fiper Station. This means CR-LF on Windows and LF on Unix. The file will
always be copied in text mode and the line endings updated, even if the line
endings already appear to be correct.
Unix. This setting always separates lines with LF, even if the model is running
on Windows.
Windows. This setting always separates lines with CR-LF, even if the model
is running on Unix.
6. Edit the script in the Editor text box in the middle of the tab. All commands you
could type at the command line for the given interpreter are valid. The script you
type is passed to the interpreter as-is, after expanding any parameter substitutions.
You can remove all contents of the text box at any time using the Clear Script
button.
7. You may insert a parameter as an argument by selecting the parameter from the
Parameter drop-down list; then, click the Insert button . The parameter name
is added to the Arguments text box at the current cursor position and is highlighted
in green, indicating that the value will be substituted at runtime. To quickly add a
new parameter, type the parameter name in the Parameter box and click on the
button. The new parameter will have data type String, but this can be changed on
the Parameters tab on the Design Gateway.
To delete a parameter from the arguments list, click on the parameter and press the
Backspace key.
If a parameter value might contain spaces, and you do not want the spaces to be
interpreted by the interpreter, you should put the appropriate quotes for the
language around the parameter reference (for example, "parm 1").
For a file parameter, the absolute path to the file is substituted into the command
line. This can be very useful for programs that expect the name of their input file
on the command line. There is nothing special about file parameters substituted
into scripts. File parameters in scripts SHOULD be quoted if there is any chance
the absolute path could contain spaces. This is different from the behavior in
Command mode.
Note: You can create a new parameter using the button. For more information,
refer to the Isight User’s Guide.
8. (optional) Click the Load Script... button to load a pre-existing script from a file.
9. Edit the affinities for the component. The affinities for the selected interpreter are
set to the values provided in the preferences. Changing the affinities does not affect
the preferences. For more information about the properties, see “Editing
Component Properties,” on page 25.
The Consider execution failed if area is used to determine what conditions should
indicate that the program has failed. If none of the boxes are checked, the program
will be considered to succeed no matter how it exits.
Note: If the program cannot be found, the component will always fail.
Return code is other than. Select this option if you want to define the return
codes for successful completion of the execution. You can type multiple return
codes separated by commas (“1,2,5” means consider return codes of 1, 2, or 5
as success), or a range of numbers separated by a colon (“0:9” means any
return code from 0 to 9 inclusive indicates success). These can be combined to
specify multiple ranges (“0:9, 21:30”). Negative return codes are also allowed
(though few programs return them). This option is deactivated by default.
There is output to the Standard Error stream. When this option is selected,
if the command produces any output to the standard error stream, the run will
be considered a failure and the rest of the workflow will be aborted. This
option is usually set for UNIX programs. This option is selected by default.
Execution takes longer than. This option allows you to set a time limit (in
seconds) after which execution is deemed to have failed. The default timeout is
300 seconds (5 minutes). A timeout of 0 means don't check runtime. This
option is a duplicate of the Timeout option on the component Properties panel.
For more information about component properties, see “Editing Component
Properties,” on page 25.
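The return-code syntax described above for the Return code is other than option (comma-separated codes, colon-separated ranges, and combinations of both) can be sketched as a small parser. This is an illustrative reconstruction of the documented syntax, not Isight's implementation:

```python
def parse_success_codes(spec):
    """Parse a success-code spec such as '1,2,5' or '0:9, 21:30'
    into the set of return codes that count as success.
    Negative codes are allowed, as noted above."""
    codes = set()
    for part in spec.split(","):
        part = part.strip()
        if ":" in part and not part.startswith("-") or part.count(":") == 1 and ":" in part:
            lo, hi = (int(p) for p in part.split(":"))
            codes.update(range(lo, hi + 1))
        else:
            codes.add(int(part))
    return codes
```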
These options are in addition to whether standard error is redirected to a
file, and whether output to standard error is considered a sign that the
program failed.
Log Standard Output. If checked, any messages to standard output will also
be sent to the job log. Using this option is not recommended, as many
programs can produce a lot of output, and log messages are relatively
expensive.
Note: Lines are logged as they are produced by the program. This
characteristic can be very useful if the program writes a couple of lines and
then executes for a long time. You can see the status messages in the job log
before the program finishes.
4. In the Retry Execution after Failure area, set the following options, as desired:
The retry options are mostly for cases where a program may fail due to an external
condition, such as a web server being down, that can be expected to correct itself in
a short time. These options are typically used to retry execution if a program fails
because all licenses are in use.
Note: The Maximum number of retries option on the Component Properties panel
is in addition to the number of retries on the OSCommand editor. The
OSCommand retries are done first (in Fiper, on the same Fiper Station), and then
the component retries (in Fiper, on a different Fiper station). For more information
about component properties, see “Editing Component Properties,” on page 25.
5. Set the options in the Wait for File area. This feature allows you to run a program
that submits a job to an external system and then wait for that job to finish by
looking for an output file the job writes.
Wait for file after program finishes. Click the check box to have Isight wait
for a file to appear after the program finishes. If selected, then you must enter a
file name in the File field. You can enter the path name or click the Browse...
button to locate the file. The file name can contain variable substitutions, such
as “{var xx},” similar to file parameters. This file can be a file parameter, but
this is not necessary.
Find this string in the file (optional). Enter the string that Isight
should wait for within the file once the file exists. If no string is
specified, the wait ends as soon as the file exists.
Delay after file or string is found (seconds). Enter the number of seconds the
program will wait if the file or string is found before continuing. This allows
the process to finish writing the file.
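The Wait for File behavior described above (wait for a file to appear, optionally wait for a string inside it, then pause briefly so the writer can finish) can be sketched as a polling loop. A conceptual illustration only; the function name and parameters are assumptions:

```python
import os
import time

def wait_for_file(path, needle=None, delay=0, poll=0.5, timeout=300):
    """Poll until `path` exists (and, if given, contains `needle`),
    then sleep `delay` seconds so the writing process can finish."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            if needle is None:
                break
            with open(path, errors="replace") as f:
                if needle in f.read():
                    break
        time.sleep(poll)
    else:
        # Loop exhausted without finding the file/string.
        raise TimeoutError(path)
    time.sleep(delay)
```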
6. Set the options in the Execution environment area. These settings allow you to
control a few aspects of how the program runs.
Note: When this box is checked, a helper program called makejob is run by
the OSCommand. You will see this program if you examine the list of
processes running on your machine.
Save program exit code in parameter. Enter a parameter name in the text
box if you want the exit code saved. If the parameter does not already exist,
Isight will create it. This parameter can be type integer, real, or string. The
default is string. If this text box is blank, which is the default for new models,
the exit code is not saved.
Note: Models created in earlier versions of Isight will show “retval” as the
parameter name. This ensures that existing models have the same behavior as
they did previously.
Click the Set X-Windows Display on UNIX check box to activate the option;
then, specify the Host Name in the corresponding text box. This host name
can be either a domain name (unix.development.com) or an IP address.
This option is only used for UNIX X-Windows programs that fail to run unless an
X-Windows display is available. The specified display must be available at
runtime.
1. Click the Required Files tab. The required files options appear.
This tab is used to list files that need to be accessible when the command is run.
Any files parsed by the Data Exchange portion of a Simcode will automatically be
listed here. The information on this tab is a duplicate of the information on the
Files tab of the Design Gateway. It is presented here as a convenience. For more
information on using the Files tab, refer to the Isight User’s Guide.
2. Add files needed for execution to the list in the middle of the tab using one of the
following methods, based on the file type:
If you want to add all existing file parameters from the parent task, click
the button.
Note: To delete a file, verify that it is highlighted and click the button.
3. You can rename a file parameter by clicking in the Name column and editing the
name. This is often required as new file parameters created with the Add Input File
and Add Output File buttons are named “file1”, “file2”, etc. You can also change
the mode of a file parameter by selecting a new value in the Mode column. The
information on this tab is the same as the Files tab on the Design Gateway, and is
included here for convenience. The Files tab will not be updated until the OK or
Apply button on the editor is clicked. For more information on file parameters,
refer to the Isight User’s Guide
4. To configure (or reconfigure) a file parameter, click in the Value column and then
click on the button next to the value. The Configure File Parameter dialog box
appears.
5. Click OK to close the editor, or proceed to “Setting Grid Adapter Options” on this
page.
Note: The Grid tab is not available by default. To make it available, see “Configuring
the OS Command Component,” on page 427.
Note: It is recommended that you review the limitations listed in “Understanding Grid
Adapter Limitations,” on page 448 before using the Grid Adapter.
Depending on the type of grid adapter you are using, proceed to one of the following
topics:
For LSF options, proceed to “Setting LSF Grid Adapter Options” on this page
For SSH options, proceed to “Setting SSH Grid Adapter Options,” on page 445
2. Select LSF from the Grid System drop-down list. The LSF options appear.
If you activate this option, the script defined on the Basic tab is assumed to be a
compatible LSF script that will be spooled to the batch system and executed by
LSF as a sequence of commands. The script type must be compatible with the
target system. Additionally, on UNIX systems, the "#!..." syntax must be used
within the script to specify the script’s execution shell. The Script Type selected on
the Basic tab is ignored in this scenario.
4. Enter the following information, as desired. You can edit one or all of the values.
You can also accept the default values.
Queue. Specifies the queue where the job should run. You can obtain the
available queue names from the LSF “bqueues” command. Refer to the bsub
“-q” option on the man page.
Hosts. Lists the candidate hosts (space-separated) where the job should run.
Refer to the bsub “-m” option for more information.
Resources. Specifies the resource requirements for the LSF job. Refer to the
bsub “-R” option for more information.
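The Queue, Hosts, and Resources fields above map directly onto the bsub options named in the text. A sketch of how such a command line could be assembled (illustrative only; Isight's actual submission logic is internal):

```python
def build_bsub_args(queue=None, hosts=None, resources=None, extra=None):
    """Assemble a bsub argument list from the LSF editor fields:
    Queue -> -q, Hosts (space-separated) -> -m, Resources -> -R.
    `extra` stands in for the Advanced bsub options."""
    cmd = ["bsub"]
    if queue:
        cmd += ["-q", queue]
    if hosts:
        cmd += ["-m", " ".join(hosts)]
    if resources:
        cmd += ["-R", resources]
    if extra:
        cmd += extra
    return cmd
```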
If necessary, you can also set Advanced options for the LSF adapter:
6. (optional) Click the Advanced... button. The Additional Grid Options dialog box
appears.
7. Type the commands in the large text area. Enter any additional options to LSF’s
bsub using standard bsub command-line option syntax (see bsub man page).
9. Click OK to save your changes and close the editor. Click Apply to save your
changes, but leave the editor open.
Note: The grid tab will not appear if there are no grid adapters available.
3. Select the Remote Shell and Copy from the drop-down list. The available choices
are:
Ssh and scp (Open SSH). Secure Shell is a remote login protocol which uses
public-key encryption to authenticate users and secure their login sessions;
Secure Copy is the corresponding file-transfer protocol. Open SSH is a
commonly-used open-source implementation of these protocols. Use these
protocols to dispatch the OS Command from a local Unix/Linux system to a
remote Unix/Linux system, whenever security has to be maintained (for
example, when crossing a firewall).
Rsh and rcp. Remote Shell and Remote Copy are standard remote login and
file-copy protocols that are available on all Unix/Linux systems (except where
deliberately removed or replaced). They are fast, but not very secure. Use these
protocols to dispatch the OS Command from a local Unix/Linux system to a
remote Unix/Linux system, whenever security is not a concern (for example,
when both are behind the same firewall).
Note: The SSH Grid plugin currently does not support remote login and/or
file-transfer to a Windows system.
4. (optional) Enter any Extra (SSH, PuTTY, or RSH) Arguments. Each
remote-login tool has an associated set of command line arguments, some of which
may be used to customize how this OS Command will be dispatched. For example,
the -l argument can be used with any of these tools to force it to log in as a
specified user. This is useful when the OS Command may be dispatched to a Fiper
Station that has the Run As feature disabled.
5. Enter the Remote Host. This is the network name of the remote Unix/Linux
system on which the OS Command will run.
7. Select one of the following from the drop-down list under the Working Directory
Name text box:
8. Click OK to save your changes and close the editor. Click Apply to save your
changes, but leave the editor open.
Only LSF v6.2 is supported in Isight. Direct and unrestricted access to run the LSF
commands is required.
If it is necessary to transfer input/output files to/from the grid, this must be done
through the “-f” advanced option (see bsub man page), if available. If it is not
available, then another mechanism must be used.
Jobs can only be submitted under the security credentials of the job submitter in
Isight.
The job status will be 0 for LSF’s “DONE” status or 1 for any other LSF job status.
Fail on stdout/stderr functionality will never generate a failure if the LSF plug-in is
used, since stdout/stderr will be returned to the submitting user in an e-mail by
default (LSF’s default behavior). You can override this behavior with the LSF
advanced options on the Advanced tab. The relevant bsub options are -o, -oo,
-e, and -eo.
For the SSH adapter, your machine/network must be configured to allow remote
login without a password. The exact mechanism will depend on the protocol used,
but it should generally be either an SSH key agent for the SSH protocols or the
usual .rhosts/hosts.equiv syntax for rsh. If you are prompted for a password, the
command will fail.
Adding Interpreters
To add an interpreter:
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select OS
Command.
3. Verify that the Scripts tab is displayed. The tab contents appear.
4. Click the Add button located on the right side of the dialog box.
5. Type a name for the interpreter in the Name text box. The name is displayed in the
OS Command Type drop-down list on the OS Command component editor.
7. Type the interpreter extension in the Extension text box. For example, Windows
batch files need to have the extension of “bat”.
8. Type any default arguments for the interpreter in the Arguments text box.
Note: During the execution of the component, the executable for the interpreter is
loaded from the OS Command preferences. If the preferences are unavailable, such
as on Fiper Stations, then the executable information from the component is used.
An error message is generated if the interpreter is not found during execution of
the component.
9. Set the available options in the affinities area. These options are located in the
bottom half of the dialog box and are used to control which Fiper Stations may
execute the component when using an ACS in the Fiper environment. You can
specify the host name, operating system, software, or other information pertaining
to the component. For more information about the affinities, see “Editing
Component Properties,” on page 25.
For more information on these options, and all other options available with this
component, see “Using the OS Command Component,” on page 426.
You can also delete or edit existing interpreters. To delete an interpreter, select the
interpreter from the list; then, click the Delete button. To edit the preferences for an
interpreter, select the interpreter from the list; then, change the appropriate
information.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select OS
Command.
3. Click the Enable use of the Grid Plugin checkbox to enable this feature. When
this box is checked, the OS Command (and Simcode) component editors have an
extra tab for configuring how to run the command on a compute grid.
4. Select the Source for SSH-Grid Commands from the drop-down list. This selection
resets the tab to show the commands, if any, defined in the selected source. Once
you click OK, the displayed commands are written to the selected source, overwriting
the prior definitions.
When an SSH Grid plugin executes, it first finds the remote login and file-copy
commands for the selected SSH type from one of three sources. These sources are
read in a fixed order, using the commands from the first source that defines them
(if none are found, execution fails).
Custom Configuration File. If the local Runtime Gateway (or the Fiper
Station) was launched with the Java system property fiper.grid.ssh.configfile
defined, commands are read from the file it names (if the file exists).
User Preferences File. You may define SSH commands and store them in
your personal Fiper preferences. If no custom configuration file is specified,
commands are read from preferences.
Warning: On a Fiper Station, these are the preferences of the user who launched
the Station, unless the Run As feature is enabled.
Warning: Selecting the source for SSH-Grid commands does not indicate to the
plugin which source to use; the procedure explained above is always followed.
Selecting the source only controls which source is displayed for editing at any
moment.
Ssh and scp (Open SSH). Secure Shell is a remote login protocol that uses
public-key encryption to authenticate users and secure their login sessions;
Secure Copy is the corresponding file-transfer protocol. Open SSH is a
commonly used open-source implementation of these protocols. Use these
protocols to dispatch the OS Command from a local Unix/Linux system to a
remote Unix/Linux system, whenever security has to be maintained (for
example, when crossing a firewall).
Rsh and rcp. Remote Shell and Remote Copy are standard remote login and
file-copy protocols that are available on all Unix/Linux systems (except where
deliberately removed or replaced). They are fast, but not very secure. Use these
protocols to dispatch the OS Command from a local Unix/Linux system to a
remote Unix/Linux system, whenever security is not a concern (for example,
when both are behind the same firewall).
For each supported SSH type, the plugin requires two commands to be defined in
the corresponding text fields.
Run Command. Type the name of the tool used to perform remote login and
run the OS Command under the given working directory on the named remote
host. You may also use the Browse... button to locate the tool.
Copy Command. Type the name of the tool used to copy input and output
files between the current local working directory and the given working
directory on the named remote host. You may also use the Browse... button to
locate the tool.
Each tool name may be followed by command-line arguments. For example, the
Copy command typically requires a flag to preserve a copied file’s mode and
permissions.
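For example, for the Ssh and scp (Open SSH) type the two fields might contain the following (the paths shown are illustrative and site-specific; -p is the scp flag that preserves a copied file's modes and times):

```
Run Command:  /usr/bin/ssh
Copy Command: /usr/bin/scp -p
```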
The API for the OS Command component consists of the following methods:
getString("scriptinterpreter"), set("scriptinterpreter",
interp). This method allows you to get/set the interpreter to use for executing
the script. The interpreter types are defined in the OSCommand preferences.
getString("scriptarguments"), set("scriptarguments",
theArgs). This method allows you to get/set the arguments to supply to the
script for execution.
getInt("waitforfiledelay"), set("waitforfiledelay",
delaySecs). This method allows you to get/set the amount of time (in seconds)
that the component should wait after it finds the specified file (and, optionally,
the specified text in that file) before considering its execution to be complete.
getString("waitfortextinfile"), set("waitfortextinfile",
text). This method allows you to get/set the text that must exist in the specified
file before the component considers its execution to be complete.
getInt("maxexecutiontime"), set("maxexecutiontime",
maxTime). This method allows you to get/set the amount of time (in seconds) the
component is allowed to execute before it is forcefully failed.
getInt("retrylimit"), set("retrylimit", numRetry). This
method allows you to get/set the number of times that the component execution
should be retried if it fails.
getInt("retryiffailedbefore"),
set("retryiffailedbefore", time). This method allows you to get/set
the amount of time (in seconds) within which a component will retry execution if it
fails. Typically, this is used to allow retrying execution of longer-running codes
when they fail at the start of execution (due to unavailable licenses or other
environmental problems or conflicts), but not to retry when they fail later, once the
code has actually begun executing properly.
getBoolean("failwhenstderror"), set("failwhenstderror",
fail). This method allows you to get/set whether or not to fail execution when
something is written to standard error.
getBoolean("failwhenstdoutput"),
set("failwhenstdoutput", fail). This method allows you to get/set
whether or not to fail execution when something is written to standard output.
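The get/set pattern used by these methods can be sketched with a hypothetical stand-in for the component handle (the OSCommandProperties class and its map-backed storage below are illustrative only; the real Isight SDK object is obtained through the component API, which is not shown here):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the component handle, illustrating the
// getString/getInt/getBoolean/set property pattern described above.
class OSCommandProperties {
    private final Map<String, Object> props = new HashMap<>();

    void set(String name, Object value) { props.put(name, value); }
    String getString(String name) { return (String) props.get(name); }
    int getInt(String name) { return (Integer) props.get(name); }
    boolean getBoolean(String name) { return (Boolean) props.get(name); }
}

public class OSCommandApiSketch {
    public static void main(String[] args) {
        OSCommandProperties cmd = new OSCommandProperties();
        // Property names are taken from the method list above.
        cmd.set("scriptinterpreter", "Perl");
        cmd.set("maxexecutiontime", 3600);   // seconds
        cmd.set("retrylimit", 2);
        cmd.set("failwhenstderror", true);

        System.out.println(cmd.getString("scriptinterpreter")); // Perl
        System.out.println(cmd.getInt("retrylimit"));           // 2
    }
}
```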
Each of these has different options for configuring how it should behave as part of the
workflow. The options are discussed in the following sections. Also, before using this
component, be sure to review “Known Issues,” on page 463.
1. Double-click the component icon to start the component editor. For more
information on inserting components and accessing component editors, refer to the
Isight User’s Guide. The Component Editor dialog box appears.
2. Select one of the following options from the Action drop-down list:
Pause only. This option causes the workflow to pause and wait until the
specified criteria are met, the Continue button on the provided dialog box is
clicked, or the specified resume time expires. For more information on this
option, see “Using the Pause only Action” on this page.
Ask a question. This option causes the workflow to pause while a specified
question is displayed. The workflow remains paused until an answer is provided and
the OK button is clicked, or the specified resume time expires. For more
information on this option, see “Using the Ask a question Action,” on
page 459.
Display parameters. This option causes the workflow to pause while a specified
set of parameters is displayed, allowing for changes to the parameter values before
the workflow proceeds. For more information on this option, see “Using the
Display parameters Action,” on page 461.
1. Set the Show a dialog while paused option. When this option is selected, a dialog
box appears when the Pause component is executed in the workflow. The dialog
box may contain any of the following items, based on how the other options for
this action are set:
A standard message appears indicating the workflow has been paused. You
can replace the standard message with a customized message by clicking the
Custom Message check box; then, enter the customized message in the
corresponding text box.
A message about the file that must be found in the local file system (and
optionally what text it must contain) before continuing the execution, if this
option has been set. For more information on this option, see step 2.
A countdown to the automatic resumption is provided, if this option has been
set. For more information on this option, see step 5.
2. Set the Resume as soon as this file exists option. When this option is selected, the
workflow pauses until the specified file exists. You can type the path and file
information directly into the corresponding text box, or you can navigate to the file
using the Browse... button. You can also define text that must exist within the file
using the option in step 3. The file name specified may contain parameter value
substitutions (using the {var <parameter name>} syntax) or a model
directory substitution (using the {modeldir} syntax).
3. Set the Only if this text is found option. This option is activated for use if the
Resume as soon as this file exists option is selected. You can use this option to
specify text that must exist in the specified file in order for the workflow to
resume, and you can determine if the case of the text is important using the Ignore
Case check box. The text to search for may contain parameter value substitutions
(using the {var <parameter name>} syntax) or a model directory
substitution (using the {modeldir} syntax).
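As an illustration of how such substitutions expand, here is a sketch in Java (the expand helper below is hypothetical; Isight's actual substitution engine is not shown):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative expansion of the {var <parameter name>} and {modeldir}
// substitution syntax described above; not Isight's actual implementation.
public class SubstitutionSketch {
    private static final Pattern VAR = Pattern.compile("\\{var ([^}]+)\\}");

    static String expand(String text, Map<String, String> params, String modelDir) {
        // Replace the model-directory token first, then each {var ...} token.
        text = text.replace("{modeldir}", modelDir);
        Matcher m = VAR.matcher(text);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(params.get(m.group(1))));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String path = "{modeldir}/results_{var runId}.txt";
        String expanded = expand(path, Map.of("runId", "42"), "/tmp/model");
        System.out.println(expanded); // /tmp/model/results_42.txt
    }
}
```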
4. Set the Resume as soon as email arrives option. When this option is selected, the
workflow pauses until an email from the specified email address is received. If you
only want to resume if the subject of the email contains certain text, type that text
into the Subject contains text box.
Note: Click the Mail Options button to specify the information to use to connect
to the mail server and the details of your email account. Only messages that arrive
in your “inbox” mail folder will be checked.
5. See “Using the Automatic Resume and Execution Location Options,” on page 462
for information on the resume options available as well as determining where the
Pause component (and associated interface) should be executed when used in a
distributed environment (using an ACS in the Fiper environment).
1. Type the question that will be asked in the Question text box.
2. In the Possible Answers area, select one of the following options from the first
drop-down list:
User will pick from a set. Selecting this option allows you to control the
possible answers to the question, and how the answers are presented. Proceed
to step 3 for more instructions.
User will type anything. Selecting this option provides a text box for typing
an answer. Proceed to step 5.
3. Update the Specify possible answers list to correspond with your question. The
answers of “Yes” and “No” are provided by default. You can specify as many
options as necessary by clicking in the last empty row of the table to provide a new
option.
4. Determine how the answers will be selected using the Display as drop-down list.
You can specify to display the answers using radio buttons or a drop-down menu.
5. Specify the information for the answer parameter using the Parameter to contain
answer drop-down list. By default, a parameter called “answer” is provided. You
can change the name of the parameter directly in the corresponding text box.
See “Using the Automatic Resume and Execution Location Options,” on page 462
for information on the resume options available as well as determining where the
Pause component (and associated interface) should be executed when used in a
distributed environment (using an ACS in the Fiper environment).
1. Select the parameters that you want displayed during execution from the list that
appears. You can select individual parameters or you can use the Select All button
to select every displayed parameter.
The list shows all of the parameters available in the current workflow. Selected
parameters are marked with an icon in the first column of the list. Values for the
selected parameters are presented in a separate dialog box during execution and can be
modified as a means of interactively influencing the design.
2. See “Using the Automatic Resume and Execution Location Options,” on page 462
for information on the resume options available as well as determining where the
Pause component (and associated interface) should be executed when used in a
distributed environment (using an ACS in the Fiper environment).
3. Click OK to close the editor.
Set the Automatically resume option. If activated, any pause in the workflow will
automatically end once the specified time limit (after option) or clock time (at
time option) is reached.
If the after option is used, the countdown time is displayed during execution
so that you will know how much time remains before it automatically resumes.
The at time option is useful for deferring execution of various parts of the
workflow until a time at which computing resources are known to be available
or a certain user is available to perform some interactive process within the
workflow. Note that if the specified time of day has already passed, the
workflow will resume immediately. If the at time option is used, the time
when the execution will resume is displayed.
You will be able to resume the execution at any time by clicking the Continue
button on the dialog box that appears during execution.
Note: If you do not want the Pause component to automatically resume, simply
clear (uncheck) this option. Clearing this option will also set the general Timeout
option on this component to “0” (meaning it will never time out). For more
information on component properties, see “Editing Component Properties,” on
page 25.
computer job was submitted from. The Pause component, and any
associated interface, will execute on the computer from which the job was
initially submitted. Select this option if your intent is to have the person
executing the model also be the one prompted with any questions or parameter
lists, or just to monitor the status of the pause. Since this option is the most
typical scenario, it is the default setting.
specified Fiper Station. The Pause component will execute on the computer
that serves as the Fiper Station specified in the provided text box.
any Fiper Station. The Pause component will execute on any available Fiper
Station as selected by the Fiper ACS.
Known Issues
The following issues exist with this component:
When using the Automatically resume at time option during daylight saving
time (DST), the Pause component uses the incorrect time on Windows systems that
do not have the “Automatically adjust clock for daylight savings changes” option
enabled on the Windows Date and Time Properties dialog box.
This interface is accessed by double-clicking the clock in the system tray; then,
clicking the Time Zone tab. The option is at the bottom of the tab.
If daylight saving time is in effect in the current time zone, but this Windows option is
not enabled, Isight uses a time value that is incorrect by one hour. This issue
affects the Pause component’s ability to resume execution at a specific time of day.
The remote partner access feature is the primary means of enabling Fiper Federation
(Business-to-Business) capabilities. It allows you to select a model that resides at a
partner organization (which has a Fiper ACS) and insert a reference to it within your
own workflow. When this component is executed as part of your workflow, Isight
makes the appropriate connection to the partner Fiper ACS (which executes the remote
model), and receives the outputs from that remote execution. With this capability,
various organizations can collaborate on an overall product design by contributing
analysis and design results from their own specialty areas of responsibility, while still
maintaining the proprietary nature of their data and methods.
For more information on using Fiper Federation, refer to the Fiper Federation (B2B)
Guide. For more information on other referencing options, refer to the Isight User’s
Guide.
“Accessing the Component and Selecting the Reference Type,” on page 466
For two Fiper-enabled organizations to collaborate, they must register each other
as “partners” so that the necessary connection protocols can be established. Partner
registration is performed as a Fiper administrator function and can be done at any
time. For more information on defining a Fiper partner, refer to the Fiper
Federation (B2B) Guide.
The remainder of this section assumes that the aforementioned prerequisites have been
met.
1. Verify that the Reference component is selected in the workflow area; then,
double-click the component icon to start the component editor. For more
information on inserting components and accessing component editors, refer to the
Isight User’s Guide.
The Component Editor dialog box appears.
This dialog box shows three top-level categories, which represent all the places a
model reference can be made. The first category (Internal Submodels) is a single
level that shows all the internal submodels currently defined in your model. The
root component icon and name are shown for each submodel.
Important: Submodels must be created from the Design Gateway before they can
be used within the Reference component. For more information on how to create
submodels as well as the benefits of using submodels, refer to the Isight User’s
Guide.
The second category (Fiper Library) is a Library browser which shows models
published to the local Library. The third category (Remote Partner) shows a
two-level subtree. The first level displays all the partners known to the ACS; the
second level shows, for each partner, the list of models you are authorized to use. For more information
on configuring Fiper’s federation capability, refer to the Fiper Federation (B2B)
Guide.
3. Proceed to one of the following sections, based on the type of model you want to
reference:
Referencing a Submodel
This option allows you to reference an internal submodel currently defined in your
model. Submodels must be created from the Design Gateway before they can be used
within the Reference component. For more information, refer to the Isight User’s
Guide.
1. Access the Select Reference Model dialog box as described in “Accessing the
Component and Selecting the Reference Type,” on page 466.
2. Expand the Internal Submodels option on the left side of the dialog box. The
available submodels are listed.
Important: Submodels must be created from the Design Gateway before they can
be used within the Reference component. For more information on how to create
submodels, refer to the Isight User’s Guide.
3. Click the submodel to select it. The submodel’s information appears on the right
side of the dialog box.
4. Verify that you have selected the correct submodel; then, click the Select Model
button.
You are returned to the Reference component editor, and the selected model’s
information now appears on the editor.
5. (optional) As desired, change any values for the input and output parameters.
6. Click OK to close the editor. The submodel is added to your workflow. For more
information on submodels, refer to the Isight User’s Guide.
2. Expand the Fiper Library option on the left side of the dialog box; then, navigate
to the location of the model you want to use (if necessary).
3. Click the model to select it. The model’s information appears on the right side of
the dialog box.
Note: The parameters that are displayed in the model are specified using the model
properties. For more information, refer to the Isight User’s Guide.
4. Verify that you have selected the correct model; then, click the Select Model
button.
You are returned to the Reference component editor, and the selected model’s
information now appears on the editor.
5. (optional) As desired, change any values for the input and output parameters.
6. Click OK to close the editor. The model is added to your workflow. For more
information on how reference models are handled by Isight, and how the workflow
is affected, refer to the Isight User’s Guide.
A list of Fiper partners that have been defined for your organization is displayed.
In the following example, the administrator has defined one partner (partner1) that
can be accessed from the local ACS.
3. Expand the partner that possesses the model you want to use. The available models
are displayed.
When a partner is selected, a remote query is made to get a list of all models on the
remote ACS that have shared attributes matching the local ACS domain and the
logged-on user ID.
Note: If your Fiper environment does not yet have any partners established, the
Partners list will be empty and you will not be able to proceed. Refer to the Fiper
Federation (B2B) Guide for information on defining partners for your Fiper
environment.
4. Click the model you want to use. The model information, including name, version,
and parameters, is loaded. You are also informed if no models are present in the
Library.
Note: The parameters that are displayed in the model are specified using the model
properties. For more information, refer to the Isight User’s Guide.
5. Verify that you have selected the correct model; then, click the Select Model
button.
You are returned to the Reference component editor, and the selected model’s
information now appears on the editor.
The inputs and outputs of the remote service are read and used to populate the
inputs and outputs of the local Reference component. The default values for inputs
are taken from the default values defined in the remote service.
6. (optional) As desired, change any values for the input and output parameters.
7. Click OK to close the editor. For more information on how reference models are
handled by Isight, and how the workflow is affected, refer to the Isight User’s
Guide. For more information on Fiper federation capabilities, refer to the Fiper
Federation (B2B) Guide.
Note: Federated Reference models are not expanded/visible. You cannot see the
internal parts of the model (i.e., components, workflow, dataflow, parameters).
Resynchronizing a Reference
When a model contains a Reference component that is configured to refer to a
published model, the published model does not become part of the referencing model.
Therefore, it is possible for the published model to be altered so as to no longer match
the configuration of the Reference component. A new version of the model may be
published, or the existing version replaced; any of the following changes (for example)
may affect references:
The published model’s interface may be edited to expose more, or fewer, variables
of the model’s root component. Variables may even have been added to, or deleted
from, the root component.
The published model may have been deleted from one location and republished to
another location.
The published model may have been permanently deleted.
The configuration of the Reference component can be easily corrected when this has
happened.
1. Double-click the Reference component icon to start the component editor.
Below the Choose Model button there is now a Sync to Model button.
2. Click the Sync to Model button. A new version of the published model’s interface
will be loaded from the configured location, and the Reference component will be
reconfigured to match it.
If the published model has been moved or deleted, then click Choose Model and
select a replacement model instead.
Note: If the set of variables has changed, you may need to edit the model to
accommodate the changes.
1. Double-click the component icon to start the component editor. For more
information on inserting components and accessing component editors, refer to the
Isight User’s Guide. The Component Editor dialog box appears.
The upper portion of the dialog box lists the available parameters. The lower
portion shows the script.
Name. This column shows the name of the parameter and the array/aggregate
hierarchy.
Java Variable. This column shows the name that the parameter has inside of
the script. This is initially the same as the Parameter name, with punctuation
and spaces converted to underscores ( _ ). You can edit the Variable name and
change the name used in the script if you wish.
Value. This column shows the current parameter value. You can edit this entry
to change the initial value of the parameter.
Mode. This column shows the mode of the parameter (input, in/out, output, or
local). If the parameter is not used by the script, the mode the parameter has in
the model (the mode it would have if mapped to the component) is shown.
Java Type. This column shows the declared type the parameter variable will
have in the script. You can select a different type using the pull-down menu.
The available types are described in more detail below.
Note: Arrays are followed by [ ] in this column indicating that the Java
variable is an array.
3. Type your script in the Enter your Java script here text box.
The script consists of Java statements, such as you would see inside the body of a
Java method. A surrounding class or method declaration is neither needed nor allowed.
It is not necessary to declare variables before using them. All of the parameters are
pre-declared. Other variables (such as a loop counter “i”) can be used as long as the
first reference is an assignment.
Parameters are referenced in the script by just typing their variable name (from the
Variable column). You can also insert a reference to a parameter by selecting the
parameter and clicking the button.
Simple functions can be declared and called. Here is a trivial example:
int fact(int x) { return x == 0 ? 1 : x * fact(x-1); }
fact_X = fact(X);
4. Click the Check Syntax button to verify the syntax of your script. Any messages
about the script are displayed on the status bar at the bottom of the editor. Any
syntax errors in the script are highlighted in pink. The pink error highlight will go
away as soon as you edit the program.
Note: You can click the Details... button to get more information on a syntax error.
5. Repeat this procedure, as desired, until your Java script is complete.
You can write classes in the script and create instances of them.
You can write functions using the Java method declaration syntax. For example:
int factorial(int i) {
    if (i <= 0) return 1;
    else return i * factorial(i-1);
}
intVar2 = factorial(intVar1);
Note that a function cannot be called until after it is defined. Also, the body of the
function does not have access to global variables or parameters, only to the
function arguments. So, for example, it would be an error to try to reference
intVar1 inside the body of factorial.
Import statements can be interspersed with statements in the script, as long as each
import statement occurs before the symbols it imports are used.
Real as “double”
Integer as “int”
Boolean as “boolean”
String as “java.lang.String”
The value of the parameter is copied into the Java variable before the script starts, and
the value of the Java variable is copied back into the Parameter after the script finishes
(for parameters with mode Output and In/Out only).
A Parameter can also be put into the script as a “Value” object. For example, a Real
parameter could be represented by a RealValue. Value objects have methods like
“setValue(String)” or “getAsReal()”. The advantage of a Value is that there is no need
to copy the contents out to the parameter: any change to the Value immediately
updates the associated Parameter. Value objects can also be updated inside functions,
allowing you to emulate the C-language pass-by-reference operator “&”.
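The pass-by-reference effect can be sketched with a minimal Value-like wrapper (RealHolder below is a hypothetical stand-in; the SDK's actual RealValue class has a richer API):

```java
// Minimal stand-in for a Value object; the SDK's RealValue has a richer API.
class RealHolder {
    private double value;
    RealHolder(double v) { value = v; }
    double getAsReal() { return value; }
    void setValue(double v) { value = v; }
}

public class ValueSketch {
    // Because the holder is a mutable object, a function can update it,
    // emulating C's pass-by-reference "&" as described above.
    static void doubleInPlace(RealHolder r) {
        r.setValue(r.getAsReal() * 2.0);
    }

    public static void main(String[] args) {
        RealHolder x = new RealHolder(1.5);
        doubleInPlace(x);
        System.out.println(x.getAsReal()); // 3.0
    }
}
```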
A parameter can also be put into the script as a Variable object. Using the Variable
allows access to additional information beyond just the value. You can get the
parameter’s name, mode, and data type. Members of aggregate Parameters can be accessed
in this way:
abc = AggParm.getMember("abc");
Array Parameters are added to the script as an array of the primitive type, an array of
Value objects, or an ArrayVariable.
jobLog. Used to log messages to the Isight job log. The standard usage is:
jobLog.logWarn("Warning - things are going wrong!");
Note: System.out and System.err are directed to the gateway.log file when an
Isight model is run (and to the Station log file when the model is run via Fiper).
The jobLog variable is the preferred way to report messages from a Script.
<Isight_install_directory>/javadocs/com/engineous/sdk/runtime
Script Execution
The script is evaluated statement by statement at runtime until the last statement completes.
When the last statement completes, the Script component is considered to have
finished successfully. If any statement gets an error, or the script throws an uncaught
exception, the Script component is considered to have failed.
Resizable Arrays
Array parameters can change size at runtime. You can find the size of array parameters
by using one of the following methods:
N = array.length;
Ncolumns = array2D.getDimSize(2);
For more information, refer to the ArrayVariable.html file in the following Isight
installation directory:
<Isight_install_directory>/javadocs/com/engineous/sdk/vars
A resizable Array Parameter with mode Output or In/Out can be resized by the script
at runtime:
myArray.setDimSize(newSize);
When an ArrayVariable is resized this way, the values of existing array elements
are preserved, and new elements are set to the default value 0.0.
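This preserve-and-zero-fill behavior matches what java.util.Arrays.copyOf does for a double[]:

```java
import java.util.Arrays;

// Demonstrates the resize semantics described above: existing elements are
// preserved and new elements default to 0.0.
public class ResizeSketch {
    public static void main(String[] args) {
        double[] a = {1.0, 2.0, 3.0};
        double[] resized = Arrays.copyOf(a, 5);
        System.out.println(Arrays.toString(resized)); // [1.0, 2.0, 3.0, 0.0, 0.0]
    }
}
```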
Understanding Limitations
The Script component cannot be used to create new parameters or to change the type of
an existing parameter at runtime. There is no way to run a script at design time. The
script will run only when the model that contains the script component is run.
Note: The Simcode component can use the iSIGHT File Parser or the Fast
Parser instead of the Data Exchanger to read and write files. For more information on
setting this preference, refer to the Isight User’s Guide.
The input data exchanger is used to update an input file (such as a NASTRAN input
deck) with values taken from parameters. The OS Command portion runs an external
program (or a script that calls multiple programs) that reads input from files and writes
results to files. Finally, the output data exchanger examines the files written by the
program and extracts values into parameters.
The main advantage of the Simcode component over separate Data Exchanger and
OS Command components is that it avoids copying the input and output files between
machines when running on a Fiper ACS; the files are created, used, and
(unless saved) discarded in one step.
Note: This section provides a brief overview of the parts of the Simcode component
editor. For more detailed information on using the Data Exchanger component, see
“Using the Data Exchanger Component,” on page 266. For more detailed information
on using the OS Command component, see “Using the OS Command Component,” on
page 426. For more information on using the Grid Adapter, see “Setting Grid Adapter
Options,” on page 441. For more information on using the Fast Parser or iSIGHT File
Parser, see “Using the Fast Parser Component,” on page 366 or “Using the iSIGHT
File Parser Component,” on page 392. For more technical information on setting up a
simcode, contact your SIMULIA representative.
The Data Exchanger in a Simcode differs only slightly from the Data Exchanger
component:
The Input tab is limited to writing files and the Output tab is limited to reading files.
A separate Data Exchanger component can read or write files, and can even open
multiple files in different modes.
The dialogs used to open files on the Input and Output tabs are slightly different
from the one used by a Data Exchanger component.
The Input tab uses only one input file parameter for the file to be written. This file
is effectively updated in-place. A Data Exchanger component would have two file
parameters: an input parameter for the template file and an output file parameter
for the file that is written.
The Output tab uses one output file parameter for the file written by the Command.
The file is read by the Data Exchanger after the command writes it and before it is
optionally mapped to a subsequent component. A Data Exchanger component
would use an Input file parameter for the file to read.
The main difference between using the Simcode component and the separate iSIGHT
File Parser component is that the output file has mode output in a Simcode, but mode
input in the standalone component (this lets it be mapped to other components
downstream). Additionally, when using the Simcode component, the modes cannot be
changed. The same applies when using the Simcode component with the Fast Parser.
1. Double-click the component icon to start the component editor. For more
information on inserting components and accessing component editors, refer to the
Isight User’s Guide. The Component Editor dialog box appears.
There are three tabs at the top of the editor, one each for the Input parser, the OS
Command, and the Output parser. Each of these tabs also has sub-tabs containing
additional options.
Note: For the following documentation example, the Data Exchanger is used as the
file parser.
2. Specify the command and arguments on the Command tab. For more information
on using this tab, see “Using the OS Command Component,” on page 426. This tab
is essentially an embedded OS Command component.
Note: If you are using Fast Parser or iSIGHT File Parser, see “Using the Fast
Parser Component,” on page 366 or “Using the iSIGHT File Parser Component,”
on page 392.
3. Click the Input tab. This tab is used to define the input data sources for the simcode.
4. Click the large button in the center of the dialog box to begin the process of
defining a data source.
5. Select the source of the data you want to update in the input file parse. The
following three options are available:
Write a new file from scratch. A new file will be created at runtime. Proceed
to step 12.
Note: An Input file parameter is created for the Template file. While the template
file is usually fixed, it is possible to map another file parameter to the Template file
parameter, allowing the template to vary at runtime.
7. Enter the template file into the text box or use the Browse button to locate the file.
Store content of the template in the model. This option allows for the model
to be self-contained – it can be run even if the file is subsequently deleted.
However, the file must be explicitly re-loaded (using the Files tab on the
Design Gateway) to pick up any changes made to the file after the model is
created.
Read template from this file for every run. The model will reference the file
on disk every time the simcode executes. This option is faster, saves space for
large files, and causes the model to see any changes to the file the next time it
is run. However, it can prevent distributed execution of the model, and the
model becomes unusable if the file is deleted.
11. Select the file parameter that will provide the template data that will be updated by
the input file parse. This file parameter can already be part of the Simcode
component, or it can be an output from another component earlier in the workflow.
12. Click Next. The Select Local File Name screen appears.
13. Enter the file name or use the Browse button to locate the file. Isight automatically
fills in the text with the simple name of the file (the name with the path removed).
This is the name the file will have at runtime. The name defaults to the name of the
template file.
The File Format options dialog box appears. For more information on using this
dialog box, see “Creating a New Data Exchanger Program,” on page 279.
14. Click the Output tab. The contents of the tab appear.
Note: If you are using Fast Parser or iSIGHT File Parser, see “Using the Fast
Parser Component,” on page 366 or “Using the iSIGHT File Parser Component,”
on page 392.
This tab is used to read data for the output data sources for the simcode.
15. Click the large button in the center of the dialog box to begin the process of
defining a data source. The Select Sample File screen appears.
16. Enter an example of the output file format or use the Browse button to locate the
file. This file format is used when setting up the parse.
17. Enter the local file or use the Browse button to locate the file. This file is the one
that the OS Command will write during runtime. Isight automatically fills in the text
with the simple name of the file (the name with the path removed). This simple
name is the name the file will have at runtime. The name defaults to the name of
the sample file.
19. Select where the data will be stored after the file is parsed. The three options
available are:
Don’t store the file. The file cannot be mapped to other components.
Write to a file. You will need to enter the name of the file or use the Browse
button to locate the file.
20. Click Next. The File Format screen appears. For more information on the File
Format screen, see “Creating a New Data Exchanger Program,” on page 279.
21. Click OK to close the editor.
Additional Information
The following additional information should be noted when using the Simcode
component:
Files set up on the Required Files subtab of the Command tab are available for use
by the Input and Output tabs. Select the parameter associated with the file from the
File to Read at Runtime or File to Write at Runtime drop-down lists on the
Open Data Source wizard of the Input or Output tab.
The input files set up by the input file parse and the output files passed by the
output parse can be substituted into the command line, and can be used to redirect
standard input or output of the command.
The files are available in the drop-down parameter list as file parameters.
The absolute path to the file will be substituted into the Runtime command text
box.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select
Simcode.
3. Set the Default Parser type option using the corresponding drop-down list. This
option allows you to select the parser you want to automatically use when you are
creating a new Simcode component. You can choose Fast Parser, iSIGHT File
Parser, or Data Exchanger. Data Exchanger is the recommended file parsing tool
and is selected by default. The iSIGHT File Parser and Fast Parser are used as an
aid in converting an iSIGHT description file into an Isight model.
4. Click OK to save your changes and close the Preferences dialog box.
1. Double-click the component icon to start the component editor. For more
information on inserting components and accessing component editors, refer to the
Isight User’s Guide. The Component Editor dialog box appears.
To use an existing Word document, click the Browse... button in the Starting
Document area; then, navigate to the document you want to use. Word is
started and the document is opened. When an existing document is opened, it
is copied into the system temporary directory and all modifications are made to
that document. The original document remains unchanged.
Note: Only one document can be opened using the component. When an additional
new document is created, or another existing document is opened, the previously
opened document is closed.
Important: If you manually close the Word document that was opened by the
component editor, the editor is not notified. You must click OK to close the editor
and then re-open it.
Now you need to map the desired Isight parameters to the Word document.
3. Highlight the area or place the cursor in the Word document where you want the
parameter value to appear.
Note: You can insert parameter values into a formal Word table in the document
by selecting the desired cell location(s).
5. Perform one of the following actions, based on how you want to map parameters to
the Word document:
If you want to map parameters individually and control their exact location in
the Word document, proceed to step 6.
6. Select a parameter from the Parameter to insert drop-down list in the Instructions
area.
You can sort the parameters in the list by component. Right-click in the list, and
then select Group Parameters. The setting is now saved in the preferences for this
component. There is a Preference option to set the initial default for this option for
all applicable component editors. You can also choose to not group parameters for
the currently selected component. For more information on these Preference
options, refer to the Isight User’s Guide.
Note: You can also create a new parameter using the button. Once created,
you can map the parameter into the Word document.
7. For scalar parameters, if you want the parameter to be inserted in the format
“<name> = <value>”, click the Insert as <name> = <value> check box.
Otherwise, only the parameter value will be inserted.
Note: You can insert the contents of a file (image files only) into the Word
document by selecting an appropriate file parameter.
8. For array parameters, specify the starting index and ending index of part of the
array that you want inserted. The parameter value for every specified element will
be inserted with a line break between each element.
9. Click the button. The parameter is mapped, and the parameter name is
displayed in the Word document in highlighted text.
10. Repeat step 6 through step 9 for each parameter you want to map.
12. Click the button to insert a “<name> = <value>” list. A name-value pair table
is added to the Word document in the location specified. All supported parameters
in the component are listed.
13. If you want to execute any macros that are defined within the specified Word
document, perform the following steps:
b. Select a macro that you want to execute (only those that are specified to be
public will be presented).
d. Click the button to add the macro to the list. You can also click the
Note: Macros will be executed after any parameter mappings that have been
defined.
14. (optional) To save the updated Word document following execution, click the Also
save to disk check box in the Document Produced area; then, specify a location for
the file in the corresponding text box. You can either type a location, or navigate to
a location using the Browse button.
Note: The modified Word document will be provided as an output file parameter
from this component even if you choose not to save it to a specific location on your
disk.
15. Determine if you want the Word document visible during the model execution
using the Show Word during execution check box.
16. Click the Close Word document check box if you want Word to close the opened
document (when not selected, Word and the document remain open after the model
is executed); then, use the corresponding drop-down list to determine when Word
should close. The following options are available:
when job completes. This option closes the document after the execution of
the entire Isight job is completed.
after each execution. This option closes the document after each execution of
the Word component. The Word component may be executed numerous times
during a single job.
During execution, the parameters mapped to the Word document are replaced by
their respective values. After execution, the Word document is provided as a
new/modified output file parameter from this component.
These steps are not necessary if you are executing using the standard Isight desktop
(Standalone) execution.
1. Click the Start button; then, click the Run... option. The Run dialog box appears.
2. Type dcomcnfg in the Open text box; then, click OK. The Component Services
dialog box appears.
3. Click Component Services on the left side of the dialog box. Folder options
appear on the right side of the dialog box.
5. Double-click the DCOM Config folder. The contents of the folder appear.
7. Select Properties from the menu that appears. The Microsoft Word Document
Properties dialog box appears.
9. Click the Customize radio button in the Launch and Activation Permissions area;
then, click the Edit... button. The Launch Permission dialog box appears.
10. Click the Add... button. The Select Users, Computers, or Groups dialog box
appears.
11. Type the necessary username (be sure to include the computer/domain name) in
the Enter object names to select text box.
Note: You can click the Check Names button to verify that the username you
entered is valid. You can also search for the name using the Advanced... button. If
the username you specify matches more than one known user, the Multiple Names
Found dialog box appears, allowing you to pick the exact user.
12. Click OK. You are returned to the Launch Permission dialog box, and the
username you entered now appears in the list at the top of the dialog box.
13. In the Permission for <username> area, click the following check boxes in the
Allow column:
Local Launch
Local Activation
14. Click OK. You are returned to the Microsoft Word Document Properties dialog
box.
15. Click OK. You are returned to the Component Services dialog box.
17. Proceed to “Setting Up the Component,” on page 497 for information on using the
Word component.
1. From the Isight Design Gateway, select Preferences from the Edit menu. The
Preferences dialog box appears.
2. Expand the Components folder on the left side of the dialog box; then, select
Word.
3. Set the Default document close option setting. This option allows you to
determine the default behavior for the Word file that is created with the
component. You can choose to leave the file open after execution, or to close it
after each execution of the component or after the entire job completes.
4. Click OK to save your changes and close the Preferences dialog box.
A Component Reference
Information
Parameter Study
Although this term can be used quite generally to refer to any study of design
parameters, in Isight “Parameter Study” is used to refer to a true study of the sensitivity
of the design to each factor independent of all other factors. In other words, each factor
is studied at all of its specified levels (values) while all other factors are held fixed at
their baseline. A baseline point is also analyzed for reference, resulting in
1 + Σ ni (i = 1, …, # factors; ni = # levels for factor i) design point
evaluations. Although this does not provide any interaction information, it does allow
you to study many factors at many levels with relatively few design point evaluations.
If interactions are insignificant, the results are a good indicator of the effects of the
individual factors.
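The counting rule above can be made concrete with a short sketch. This is illustrative code only, not part of Isight; the function and parameter names are hypothetical:

```python
# Parameter study sketch: each factor is swept through all of its levels
# while every other factor is held at baseline, plus one baseline run.
def parameter_study_points(baseline, levels):
    """baseline: dict of factor -> baseline value;
    levels: dict of factor -> list of level values."""
    points = [dict(baseline)]  # the baseline reference point
    for factor, factor_levels in levels.items():
        for value in factor_levels:
            point = dict(baseline)
            point[factor] = value  # vary exactly one factor
            points.append(point)
    return points

pts = parameter_study_points({"x1": 0.0, "x2": 0.0},
                             {"x1": [-1.0, 1.0], "x2": [-1.0, 0.5, 1.0]})
print(len(pts))  # 1 + (2 + 3) = 6 design point evaluations
```

Note that no factor interactions are captured: every non-baseline point varies exactly one factor from its baseline value.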
Full-Factorial Design
A full-factorial design is one in which all combinations of all factors at all levels are
evaluated. It is an old engineering practice to systematically evaluate a grid of points
requiring Π ni (i = 1, …, # factors; ni = # levels for factor i) design point
evaluations. This practice provides extensive information for accurate estimation of
factor and interaction effects. However, it is often deemed cost-prohibitive due to the
number of analyses required.
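For comparison with the parameter study, a full-factorial matrix can be sketched with the standard library; this is illustrative code, not Isight's implementation:

```python
from itertools import product

def full_factorial(levels):
    # All combinations of all factors at all levels.
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]

runs = full_factorial({"x1": [-1, 1], "x2": [-1, 0, 1]})
print(len(runs))  # 2 x 3 = 6 evaluations; the count multiplies per factor
```

The multiplicative growth is why full-factorial designs become cost-prohibitive: adding one more three-level factor to this example would triple the run count.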
Orthogonal Arrays
The use of orthogonal arrays lets you avoid a costly full-factorial experiment in which
all combinations of all inputs (or factors) at different levels are studied (pn for n factors
each at p levels), and instead perform a fractional factorial experiment. A fractional
factorial experiment is a certain fractional subset (1/2, 1/4, 1/8, etc.) of the full factorial
set of experiments, carefully selected to maintain orthogonality (independence) among
the various factors and certain interactions. It is this orthogonality that allows for
independent estimation of factor and interaction effects from the entire set of
experimental results. While the use of orthogonal arrays for fractional factorial design
suffers from reduced resolution in the analysis of results (i.e., factor effects are aliased
with interaction effects as more factors are added to a given array), the significant
reduction in the required number of experiments (cost) can often justify this loss in
resolution as long as some of the interaction effects are assumed negligible.
In fractional factorial designs (which are essentially what orthogonal arrays are used
for), the number of columns in the design matrix is less than the number necessary to
represent every factor and all interactions of those factors. Instead, columns are
“shared” by these quantities, an occurrence known as confounding. Confounding
results in the dilemma of not being able to realize which quantity in a given column
produced the effect on the outputs attributed to that column (from post-processing
analysis). In such a case, the designer must make an assumption as to which quantities
can be considered insignificant (typically the highest-order interactions) so that a
single contributing quantity can be identified.
In essence, for an orthogonal array of a given size, the more factors and interactions
you want to study, the greater the confounding. This results in lower confidence in the
analysis of results (since more assumptions of insignificant factors must be made).
Orthogonal arrays have been used in design since as early as the 1940s by Plackett and
Burman, who used saturated designs (only studying factor effects), and were really
popularized by Taguchi, who developed a family of 2- and 3-level orthogonal arrays to
study interaction effects (Ross, P.J., Taguchi Techniques for Quality Engineering,
McGraw-Hill Publishing Company, New York, NY, 1988). The 3-level arrays also
allow for an estimation of 2nd order effects (i.e., design space curvature).
The use of these orthogonal arrays provides a systematic and efficient method to study
the design space and provide suggestions for improving the design. However, the
actual tasks of selecting the appropriate orthogonal arrays to use and assigning the
factors and interactions to columns can be tedious and overwhelming. The automation
of this procedure in Isight allows a designer with little or no knowledge of orthogonal
arrays to efficiently and effectively study the design space using this formal DOE
methodology.
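The defining balance property can be illustrated with the small Taguchi L4 array (three two-level factors in four runs instead of the 2^3 = 8 of a full factorial). The check below is a sketch for illustration, not Isight functionality:

```python
# The Taguchi L4 array: 3 two-level factors studied in only 4 runs.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

def is_orthogonal(array):
    """Every pair of columns contains each level combination equally often,
    which is what allows independent estimation of factor effects."""
    ncols = len(array[0])
    for i in range(ncols):
        for j in range(i + 1, ncols):
            pairs = [(row[i], row[j]) for row in array]
            counts = {p: pairs.count(p) for p in set(pairs)}
            if len(set(counts.values())) != 1:
                return False
    return True

print(is_orthogonal(L4))  # True
```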
Latin Hypercube
Another class of experimental design which efficiently samples large design spaces is
Latin Hypercube sampling. With this technique, the design space for each factor is
uniformly divided (the same number of divisions (n) for all factors). These levels are
then randomly combined to specify n points defining the design matrix (each level of a
factor is studied only once). For example, Figure A-1 on page 511 illustrates a possible
Latin Hypercube configuration for two factors (X1, X2) in which five points are
studied. Although not as visually obvious, this concept easily extends to multiple
dimensions.
Figure A-1. Latin Hypercube Configuration for Two Factors, with Five Points
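A random Latin Hypercube of this kind can be sketched as follows (illustrative only; Isight's own generator is not shown here):

```python
import random

def latin_hypercube(n_points, n_factors, seed=None):
    """Split each factor's range into n_points equal divisions; use each
    division exactly once per factor, in a random combination."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_factors):
        levels = list(range(n_points))
        rng.shuffle(levels)
        # centre of each division, scaled to the unit interval [0, 1)
        columns.append([(lvl + 0.5) / n_points for lvl in levels])
    return list(zip(*columns))

design = latin_hypercube(5, 2, seed=1)
for col in zip(*design):
    print(sorted(col))  # each factor hits all 5 divisions exactly once
```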
An advantage of using Latin Hypercube over Orthogonal Arrays is that more points
and more combinations can be studied for each factor. The Latin Hypercube technique
allows the designer total freedom in selecting the number of designs to run (as long as
it is greater than the number of factors). The configurations are more restrictive using the
Orthogonal Arrays (L4, L8, etc.).
Note: A drawback of Latin Hypercube designs is that, in general, they are not
reproducible, since they are generated with random combinations. In addition, as the
number of points decreases, the chance of missing some regions of the design space
increases.
Optimal Latin Hypercube
The goal of the Optimal Latin Hypercube optimization process is to design a matrix in which the points are spread as evenly as
possible within the design space defined by the lower and upper level of each factor.
The Optimal Latin Hypercube concept is illustrated in Figure A-2 for a configuration
with two factors (X1, X2) and 9 design points. In Figure A-2 (part a), a standard three
level Orthogonal Array is shown. While this matrix has nine design points, there are
only three levels for each factor. Consequently, a quadratic model could be fit to this
data, but it is not possible to determine if the actual functional relationship between the
response and these two factors is more nonlinear than quadratic. Figure A-2 (part b)
shows a random Latin Hypercube. This matrix also includes nine design points for the
two factors, but there are nine levels for each factor as well, allowing higher order
polynomial models to be fit to the data and greater assessment of nonlinearity.
However, the design points in Figure A-2 (part b) are not spread evenly within the
design space. For example, there is little data in the upper right and lower left corners
of the design space. An Optimal Latin Hypercube matrix is displayed in Figure A-2
(part c). With this matrix, the nine design points cover nine levels of each factor and
are spread evenly within the design space. For cases where one purpose of executing
the design experiment is to fit an approximation to the resulting data, the Optimal Latin
Hypercube gives the best opportunity to model the true function, or true behavior of
the response across the range of the factors.
Figure A-2. Optimal Latin Hypercube Configuration for Two Factors, with Nine
Points
The Optimal Latin Hypercube code implemented in Isight was developed by:
Dr. Ruichen Jin, Ford Motor Company (formerly a student of Professor Chen)
There are two major advances of this algorithm, compared to other optimal design of
experiments algorithms, which increase both the efficiency and robustness of the
algorithm:
Development of an efficient global optimal search algorithm, “enhanced stochastic
evolutionary (ESE) algorithm”
Efficient algorithms for evaluating optimality criteria (significant reduction in
matrix calculations to evaluate new/modified designs during search)
Jin, R., Chen, W., and Sudjianto, A. “An Efficient Algorithm for Constructing Optimal
Design of Computer Experiments,” DETC-DAC48760. 2003 ASME Design
Automation Conference, Chicago, IL, September 2-6, 2003.
Note: Since the Optimal Latin Hypercube begins as a random Latin Hypercube and is
optimized using a stochastic optimization process, this type of matrix is generally not
reproducible (unless the same random seed is reused).
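The idea of optimizing a Latin Hypercube for even spread can be illustrated with a crude random-restart search over a maximin distance criterion. This toy stand-in only hints at the concept and is in no way the ESE algorithm referenced above:

```python
import itertools
import math
import random

def min_pairwise_distance(design):
    """Maximin criterion: a larger minimum pairwise distance means the
    points are spread more evenly across the design space."""
    return min(math.dist(a, b) for a, b in itertools.combinations(design, 2))

def random_lhs(n, k, rng):
    # One random Latin Hypercube: each factor uses every division once.
    cols = []
    for _ in range(k):
        levels = list(range(n))
        rng.shuffle(levels)
        cols.append([(lvl + 0.5) / n for lvl in levels])
    return list(zip(*cols))

rng = random.Random(0)
# Keep the best of many random Latin Hypercubes (crude restart search).
best = max((random_lhs(9, 2, rng) for _ in range(200)),
           key=min_pairwise_distance)
print(round(min_pairwise_distance(best), 3))
```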
The center and star points are added to acquire knowledge from regions of the design
space inside and outside the 2-level full-factorial points, allowing for an estimation of
higher order effects (curvature). The star point(s) are determined by defining a
parameter α which relates these points to the full-factorial points by
Supper = b + (u − b) × α
Slower = b − (b − l) × α
where b is the baseline level, and l and u are the lower and upper full-factorial
levels. When α = 1, the star points take the same values as the full-factorial levels
(also referred to as a face-centered central composite design).
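Using the symmetric form of the star-point equations, with b the baseline and l, u the lower and upper full-factorial levels, the locations can be computed as in this sketch (illustrative code, not Isight's):

```python
def star_points(lower, baseline, upper, alpha):
    """Star point locations for one factor in a central composite design.
    alpha = 1 places the star points at the factorial levels themselves."""
    s_upper = baseline + (upper - baseline) * alpha
    s_lower = baseline - (baseline - lower) * alpha
    return s_lower, s_upper

print(star_points(-1.0, 0.0, 1.0, 1.0))  # (-1.0, 1.0): face-centred
print(star_points(-1.0, 0.0, 1.0, 1.5))  # (-1.5, 1.5): outside the box
```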
This design is well suited to Response Surface Modeling, due to the expanse of
design space covered and the higher order information obtained.
Data File
The Data File technique provides a convenient way for you to define your own set of
trials outside of Isight, and still make use of Isight’s integration and automation
capabilities. Essentially, the design matrix can be defined by data imported from one or
more files, allowing you to execute the DOE study (automatically evaluate all the
design points) and analyze the results. Any file used must simply contain a row of tab
or space separated values for each data point and a column for each parameter to be
used as a factor from that file.
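A minimal reader for this file layout might look like the following sketch (illustrative only; Isight performs this import internally):

```python
# One row of tab- or space-separated values per design point,
# one column per factor.
def read_design_matrix(text, factor_names):
    points = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        values = [float(v) for v in line.split()]  # tabs or spaces
        points.append(dict(zip(factor_names, values)))
    return points

sample = "1.0\t2.0\n1.5 2.5\n2.0\t3.0\n"
matrix = read_design_matrix(sample, ["x1", "x2"])
print(len(matrix), matrix[1])  # 3 {'x1': 1.5, 'x2': 2.5}
```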
Monte Carlo Simulation methods have long been considered the most accurate means
of estimating the probabilistic properties of uncertain system responses resulting from
known uncertain inputs. To implement a Monte Carlo simulation, a defined number of
system simulations to be analyzed are generated by sampling values of random
variables (uncertain inputs), following the probabilistic distributions and associated
properties defined for each.
Sampling techniques for the Monte Carlo component in Isight are implemented as
“plug-ins.” As such, they are extendable by creating new “plug-ins” for new sampling
techniques. For more information, refer to the Isight Development Guide.
The following two sampling technique plug-ins are currently available for the Isight
Monte Carlo component:
5. Simulate the design/process (execute system analysis) using the current values for
random variables and the design variables.
6. Repeat step 3 through step 5 for the number of simulations specified in step 2.
Descriptive Sampling
The number of simulations necessary for simple random sampling is usually more than
desirable, and often more than practical. Other sampling techniques have been
developed to reduce the sample size (number of simulations) without sacrificing the
quality of the statistical description of the behavior of the system. These techniques,
called variance reduction techniques, reduce the variance of the statistical estimates
derived from the Monte Carlo simulation data. As a result, the error in estimates is
reduced (estimates from multiple simulations are more consistent), or conversely,
fewer points are needed with variance reduction techniques to obtain error or
confidence levels similar to those obtained through simple random sampling.
One such sampling technique, Descriptive Sampling (Saliby, 1990), is available for the
Monte Carlo component. In this technique, the space defined by each random variable
is divided into subsets of equal probability, and the analysis is performed with each
subset of each random variable only once (each subset of one random variable is
combined with only one subset of each other random variable). This sampling
technique is similar to Latin Hypercube experimental design techniques (for more
information on Latin Hypercube refer to “Latin Hypercube,” on page 510), and is best
described through illustration as in Figure A-4 for two random variables in standard
normal space (U-space). As shown in Figure A-4, each row and column in the
discretized two variable space is sampled only once, in random order. The cloud of
points generated using simple random sampling is also illustrated for comparison.
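For a single normally distributed random variable, the stratification can be sketched as follows: one value per equal-probability subset, taken at the subset's midpoint probability through the inverse CDF, then shuffled into random order. This is illustrative code only, not Isight's implementation:

```python
import random
from statistics import NormalDist

def descriptive_sample(n, mu=0.0, sigma=1.0, seed=None):
    """One stratified value per equal-probability subset, shuffled so the
    subsets are combined across variables in random order."""
    rng = random.Random(seed)
    dist = NormalDist(mu, sigma)
    # midpoint probability of each of the n equal-probability strata
    values = [dist.inv_cdf((i + 0.5) / n) for i in range(n)]
    rng.shuffle(values)
    return values

x1 = descriptive_sample(100, seed=1)
x2 = descriptive_sample(100, seed=2)
samples = list(zip(x1, x2))  # each stratum of each variable used once
print(len(samples))
```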
The difference between simple random sampling and descriptive sampling is not
necessarily observable with a single simulation using each technique. The actual
estimates may be similar. However, through repeated simulation (with different
randomization), it is observed that the variance of the set of estimates from descriptive
sampling will be less than that from simple random sampling. The range of values
observed for the estimates from descriptive sampling will be less (tighter range), and
thus the confidence in the estimates is increased. Given this property of descriptive
The basic sampling procedures can be modified to include a convergence check. The
Monte Carlo component implementation of this convergence checking procedure is
described as follows. Rather than calculating all statistics only during the
post-processing analysis, the mean and standard deviation for each response are updated at
specified convergence check intervals (the default is after every 25 sample points). If,
during the current convergence check, the mean and standard deviation of all responses
have not changed from the associated values at the previous convergence check (within
a user-set convergence tolerance), the simulation is terminated. The remaining
statistics are then calculated using the existing data set. Since the Monte Carlo
Simulation points are independent, these points can be executed, for efficiency, in
parallel rather than sequentially.
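The convergence-check loop can be sketched as follows; the tolerance, interval, and response function here are illustrative assumptions, not Isight's defaults:

```python
import random
from statistics import mean, stdev

def run_until_converged(sample_response, tol=0.01, interval=25,
                        max_points=10_000):
    """Stop when the running mean and standard deviation of the response
    change by less than tol between consecutive convergence checks."""
    data, previous = [], None
    while len(data) < max_points:
        data.append(sample_response())
        if len(data) % interval == 0:
            current = (mean(data), stdev(data))
            if previous is not None and all(abs(c - p) < tol
                                            for c, p in zip(current, previous)):
                break  # statistics have settled; terminate the simulation
            previous = current
    return data

rng = random.Random(0)
data = run_until_converged(lambda: rng.gauss(5.0, 0.5))
print(len(data))  # stops at a multiple of the check interval
```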
Additional References
For more background information about these Monte Carlo simulation methods, refer
to:
Hammersley, J.M. and Handscomb, D.C., 1964, “Monte Carlo Methods”, Chapman
and Hall, London.
Ziha, K., 1995, “Descriptive sampling in structural safety”, Structural Safety, Vol. 17,
pp. 33-41.
Introduction
Probability distributions are used in some Isight components to characterize the
possible values of an uncertain random variable. Random variables will vary around a
specified mean or nominal value following a defined distribution of values based on
prescribed probabilities for those values. For a given random variable X, the
probability that X will take on a value x is defined by the probability density function
for that random variable:
where fX(x) ≥ 0 for all x. The probability that the random variable X will take on a
value less than a specified threshold value x is defined by the distribution function for
that random variable, often also termed the cumulative distribution function:
where 0 ≤ FX(x) ≤ 1 for all x. For a continuous random variable X, the probability
density function, fX(x), and cumulative distribution function, FX(x), are related as
follows:
FX(x) = ∫ fX(t) dt, with the integral taken from −∞ to x
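As a quick numerical check of the relation between the density and distribution functions, the exponential distribution can be used, since both functions have closed forms (the parameter value is arbitrary):

```python
import math

# Check F_X(x) = integral of f_X from -inf to x for the exponential
# distribution with rate lambda = 2 (its support starts at x = 0).
lam = 2.0
f = lambda x: lam * math.exp(-lam * x)   # probability density function
F = lambda x: 1.0 - math.exp(-lam * x)   # closed-form distribution function

x, n = 1.5, 100_000
dx = x / n
integral = sum(f((i + 0.5) * dx) for i in range(n)) * dx  # midpoint rule

print(round(integral, 6), round(F(x), 6))  # the two values agree
```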
The probability density and cumulative distribution functions for a given probability
distribution are generally defined as a function of one or more distribution parameters
that define the location, shape, or dispersion of the distribution. In this document, the
probability distribution plug-ins available in Isight – normal, lognormal, Weibull,
Gumbel, uniform, exponential, triangular, and discrete-uniform – are described. The
probability density and cumulative distributions are given and the translation between
the distribution parameter(s) and the mean and standard deviation statistics of a
random variable are given for each distribution type.
Note: The integral in the previous equation becomes a summation for discrete random
variables, where the summation is taken over the discrete probability values associated
with the set of values for the random variable. One discrete distribution type is
supported in Isight – Discrete-Uniform.
[1] Evans, M., Hastings, N., and Peacock, B., 2000, Statistical Distributions, Third
Edition, Wiley-Interscience, John Wiley & Sons, New York.
X random variable
fX(x) probability density function
FX(x) distribution function
μ mean
σ standard deviation
Normal Distribution
The normal or Gaussian distribution is a two-parameter distribution, defined in terms
of the mean μ and standard deviation σ of the random variable X. The probability
density function for the normal distribution is given as follows:
The normal distribution is the common “bell curve” distribution, often used for
physical measurements, product dimensions, and average temperatures, for example.
Lognormal Distribution
Given a random variable X defined over 0 < x < ∞, and given that Y = ln X is normally
distributed with mean μY and standard deviation σY, the random variable X follows
the lognormal distribution, defined by the probability density function:
Note that the distribution parameters are μY and σY. The mean and standard deviation
of the random variable X are given as follows:
and
The lognormal probability density function, shown in Figure A-6, is often used to
describe material properties, sizes from a breakage process, and the life of some types
of transistors, for example.
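The lognormal moment relations (mean = exp(μY + σY²/2), standard deviation = mean × sqrt(exp(σY²) − 1)) can be spot-checked by sampling; the parameter values below are arbitrary illustrations:

```python
import math
import random

mu_y, sigma_y = 0.5, 0.25  # parameters of Y = ln X (arbitrary values)
mean_x = math.exp(mu_y + sigma_y**2 / 2)            # analytic mean of X
std_x = mean_x * math.sqrt(math.exp(sigma_y**2) - 1.0)  # analytic std of X

rng = random.Random(0)
# Sample X by exponentiating normal draws of Y.
samples = [math.exp(rng.gauss(mu_y, sigma_y)) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)
print(round(mean_x, 3), round(sample_mean, 3))  # close agreement
```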
Weibull Distribution
The Weibull distribution can be defined by three parameters: a scale parameter α, a
shape parameter β, and a location parameter γ. Its density function fX(x) is defined by:
where α > 0 is the scale parameter, β > 0 is the shape parameter, and γ
(−∞ < γ < ∞) is the location parameter.
The mean value and standard deviation of the random variable X with the
two-parameter Weibull distribution are given as follows:
and
The Weibull distribution can take many different shapes, as shown in Figure A-7. This
distribution is often used to describe the life of capacitors and ball bearings, for
example.
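For the two-parameter form (location 0), the mean is α·Γ(1 + 1/β) and the variance is α²·(Γ(1 + 2/β) − Γ(1 + 1/β)²). A sampling spot-check with arbitrary illustrative parameters:

```python
import math
import random

alpha, beta = 2.0, 1.5  # scale, shape (two-parameter form, location = 0)
mean_x = alpha * math.gamma(1 + 1 / beta)                 # analytic mean
var_x = alpha**2 * (math.gamma(1 + 2 / beta)
                    - math.gamma(1 + 1 / beta)**2)        # analytic variance

rng = random.Random(0)
samples = [rng.weibullvariate(alpha, beta) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)
print(round(mean_x, 3), round(sample_mean, 3))  # close agreement
```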
Gumbel Distribution
The Gumbel distribution is also known as extreme value distribution type I for the
largest or smallest of a number of values. The Gumbel probability density functions for
the largest and smallest elements are given in the following two equations, respectively:
largest element
smallest element
largest element
smallest element
The mean value and standard deviation of the random variable X for the Gumbel
distribution are given by:
largest element
smallest element
and
The Gumbel probability density function, shown in Figure A-8, is often used to
describe the breaking strength of materials, breakdown voltage in capacitors, and gust
velocities encountered by an aircraft, for example.
Uniform Distribution
The uniform distribution has a constant probability for all values of a random variable
X. The uniform probability density function is given by:
where the parameters a and b define the range of the uniform distribution. The uniform
distribution function is:
The mean value and standard deviation of the random variable X for the uniform
distribution are given by:
and
The uniform probability density function, shown in Figure A-9 on page 529, is used
when only a range of possible values for a random variable is known.
Exponential Distribution
The exponential distribution is a single parameter distribution, with mean and standard
deviation equal. The exponential probability density function for a random variable X
is given by:
where λ is a scale parameter. The exponential distribution function is:
The mean value and standard deviation of the random variable X for the exponential
distribution are given by:
The exponential probability density function, shown in Figure A-10 on page 530, is
often used to describe usage life of components.
Triangular Distribution
The triangular distribution is characterized by three parameters: a lower limit location
parameter, a, and upper limit location parameter, b, and a shape parameter that defines
the mode or peak of the triangle, c. The triangular probability density function for a
random variable X is given by:

f(x) = 2(x − a)/[(b − a)(c − a)] for a ≤ x ≤ c
f(x) = 2(b − x)/[(b − a)(b − c)] for c < x ≤ b

The mean value and standard deviation of the random variable X for the triangular
distribution are given by:

μ = (a + b + c)/3

and

σ = √[(a² + b² + c² − ab − ac − bc)/18]
The triangular probability density function, shown in Figure A-11, is commonly used
when the actual distribution of a random variable is not known, but three pieces of
information are available: a lower limit which the random variable will not go below,
an upper limit which the random variable will not exceed, and a “most likely”
(expected peak) value.
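Given those three pieces of information, the triangular moments follow directly from the limits and the mode. A minimal sketch (function and argument names are illustrative):

```python
import math

def triangular_mean_std(a, b, c):
    """Mean and standard deviation of a triangular distribution with
    lower limit a, upper limit b, and mode (peak) c, where a <= c <= b."""
    mean = (a + b + c) / 3.0
    variance = (a * a + b * b + c * c - a * b - a * c - b * c) / 18.0
    return mean, math.sqrt(variance)

# Symmetric case: limits 0 and 1 with the peak at 0.5.
print(triangular_mean_std(0.0, 1.0, 0.5))
```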
Discrete-Uniform Distribution
The discrete-uniform distribution has a constant probability for each discrete allowed
value the random variable X may take. The discrete-uniform probability density
function is then given by:

f(xi) = 1/n, i = 1, …, n

where n is the number of allowed values.
The Monte Carlo sampling performed at each step uses the Descriptive Sampling
Monte Carlo technique. For more information on Descriptive Sampling, see
“Descriptive Sampling,” on page 516. Samples are taken following the random
variable distributions and uniformly across a local range for design variables. For more
information on distribution functions, see “Understanding Distribution Types,” on
page 519.
At each SDI step, the selection of an improved point, or decision to terminate, is based
on calculation of the distance of each point to the target point, as described in the
following section.
Distance to Target
The target point is defined by the target values of all responses for which a target is
defined. The distance between any sample point and the target is calculated using the
Euclidean distance equation, with a penalty added if the sample point violates any
response constraint(s):

D = √[Σi (yi − Ti)²] + penalty

where the penalty is a large value added to the distance if the sample point lies outside
one or more response bounds.

If, for a given SDI step, the distance to the target is not reduced by at least the amount
specified in the Termination Threshold Distance (percent change from the previous step),
the SDI process is terminated.
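The penalized distance-to-target measure described above can be sketched as follows. This is an illustration only, not Isight's implementation; the `PENALTY` constant, dictionary layout, and function name are assumptions:

```python
import math

# Hypothetical penalty magnitude; Isight's actual value is not documented here.
PENALTY = 1.0e6

def distance_to_target(responses, targets, bounds):
    """Euclidean distance from a sample point's responses to the target
    values, plus a large penalty if any response violates its bounds.
    responses/targets: dicts of response name -> value;
    bounds: dict of response name -> (lower, upper) or None."""
    dist = math.sqrt(sum((responses[k] - targets[k]) ** 2 for k in targets))
    for name, bound in bounds.items():
        if bound is None:
            continue
        lower, upper = bound
        if responses[name] < lower or responses[name] > upper:
            dist += PENALTY  # infeasible point: push it far from the target
            break
    return dist
```

A feasible point exactly on target gives distance 0; any bound violation dominates the comparison, so infeasible points are never selected as improvements.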
Six Sigma based robust design optimization includes in its formulation uncertainty
information related to variables, constraints, and objectives. The focus is not only to
identify solutions that are reliable or robust with respect to constraint satisfaction, but
also to reduce the variability associated with objective components. Further, by
defining the variance or standard deviation of uncertain input design parameters not as
fixed, but as design variables themselves, tolerance design/optimization can be
implemented by seeking standard deviation settings for these parameters that produce
acceptable performance variation (objective components, constraints).
The basis for robust design, six sigma quality, and the formulation of a robust design
optimization problem is summarized in this section.
The question is then, how is robustness or quality measured, and what level of
robustness is acceptable and/or desirable?
The term “sigma” refers to standard deviation, σ. Standard deviation or variance, σ2, is
a measure of dispersion of a set of data around the mean value (μ) of this data. This
property can be used both to describe the known variability of factors that influence a
system (product or process), and as a measure of performance variability, and thus
robustness or quality. Assuming performance variation is normally distributed, this
variation can be characterized as a number of standard deviations from the mean
performance, as shown in Figure A-13. The areas under the normal distribution in
Figure A-13 associated with each σ-level relate directly to the probability of
performance falling in that particular range (for example, ±1σ is equivalent to a
probability of 0.683). These probabilities are displayed in Table A-1 on page 536 as
percent variation and number of defective parts per million parts.
Robustness or quality can be measured using any of the variability metrics in Table
A-1: sigma level, percent variation, probability (percent variation / 100), or defects per
million parts.
Table A-1. Sigma Level As Percent Variation and Defects Per Million

Sigma Level   Percent Variation   Defects per Million   Defects per Million
                                  (short term)          (long term)
±1σ           68.26               317,400               697,700
±2σ           95.46               45,400                308,733
±3σ           99.73               2,700                 66,803
±4σ           99.9937             63                    6,200
±5σ           99.999943           0.57                  233
±6σ           99.9999998          0.002                 3.4
In Figure A-13 on page 535, the lower and upper specification limits that define the
acceptable performance range are shown to coincide with ±3σ from the mean. The
design associated with this level of performance variance would be considered a “3σ”
design. Is this design of acceptable quality? Traditionally, if ±3σ worth of performance
variation was identified to lie within the set specification limits, as in Figure A-13, this
was viewed as acceptable variation; in this case, 99.73% of the variation is within the
spec limits, or the probability of meeting the requirements defined by these limits is
0.9973. In engineering terms, this probability was deemed acceptable.
More recently, however, the 3σ quality level has been viewed as insufficient quality,
initially from a manufacturing perspective, and then extended into an engineering
design perspective. Motorola, in defining “six sigma quality” (Harry, 1997), translated
the sigma quality level to the number of defective parts per million (ppm) parts being
manufactured. In this case, as can be seen in Table A-1, ±3σ corresponds to 2700 ppm
defective. This number was deemed unacceptable. Furthermore, Motorola and others
observed that even at some observed variation level, mean performance could not be
maintained over time. If a part is to be manufactured to some nominal specification,
plus/minus some specification limits, the mean performance will change and thus the
distribution will shift. A good example of this is tool wear. If a process is set up to
manufacture a part to a nominal dimension of 10 in. with a tolerance of ±0.10 in., even
if the process meets this nominal dimension on average, the cutting tool will wear with
time, and the average part dimension will shift, say to 10.05 in. This will cause the
entire distribution of part dimensions to shift.
This shift was observed by Motorola and others to be approximately 1.5σ, and was
used to define “long term sigma quality” as opposed to “short term sigma quality”.
This explains the last column in Table A-1 on page 536. While the defects per million
for short term correspond directly to the percent variation for a given sigma level
associated with the standard normal distribution, the defects per million for long term
correspond to a 1.5σ shift in the mean. In this case, 3σ quality leads to 66,803 defects
per million, which is certainly undesirable, and should be unacceptable. Consequently,
Motorola defined a quality goal of ±6σ; hence “Six Sigma Quality” came to define the
desired level of acceptable performance variation. With this quality goal, the level of
defects per million, as shown in Table A-1, is 0.002 for short term quality, and 3.4 for
long term quality; both acceptable quality levels.
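The short- and long-term columns of Table A-1 can be reproduced from the standard normal CDF; the long-term values follow from applying the 1.5σ mean shift described above. A minimal sketch (small differences from the table come from rounding conventions):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def defects_per_million(n_sigma, mean_shift=0.0):
    """Defects per million for spec limits at +/- n_sigma from the nominal
    mean, with an optional long-term mean shift (Motorola's convention
    uses a 1.5 sigma shift)."""
    p_in_spec = (norm_cdf(n_sigma - mean_shift)
                 - norm_cdf(-n_sigma - mean_shift))
    return (1.0 - p_in_spec) * 1.0e6

print(round(defects_per_million(3)))       # short term: ~2,700 dpm
print(round(defects_per_million(3, 1.5)))  # long term: ~66,800 dpm
```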
The focus on achieving six sigma quality is commonly referred to as design for
six-sigma (DFSS) (Harry, 1998). In implementing DFSS within a robust design
context, the “minimize performance variation” goal of robust design is qualified by
striving to maintain six-sigma (μ±6σ) performance variation within the defined
acceptable limits, as illustrated in Figure A-14 on page 538. In this figure, the mean, μ,
and lower specification limit (LSL) and upper specification limit (USL) on
performance variation are held fixed. Figure A-14 (part a) represents the 3σ design of
Figure A-13 on page 535; ±3σ worth of performance variation is within the defined
specification limits. In order to achieve a 6σ design, one for which the probability that
the performance will remain within the set limits is essentially 100%, the performance
variation must be reduced (reduced σy), as shown in Figure A-14 (part b).
Given the definition of uncertain parameters and their variability, the resulting
variability of performance parameters can be measured. Multiple techniques for
estimating performance variability exist; three are discussed:
For the example in Figure A-16, the current design point, given as X1 = X2 = 4.5, is
within the specified constraint of X1 + X2 ≤ 10, and thus this design appears to be
feasible. If both X1 and X2 are known to vary around their nominal values (roughly
between values of 3 and 6 in Figure A-16), the performance measure Y = X1 + X2 will
also vary. In this case, a portion of the expected distribution of Y lies outside the
constraint. The area of this distribution that is outside the constraint defines the
probability of failure associated with this design point and the defined constraint. The
area of the distribution that lies within the constraint (in the feasible region) is defined
as the reliability level of this design.
Many methods for determining the probability of failure or reliability (estimating the
areas inside and outside the constraints) have been developed in recent years. The
following two reliability analysis technique plug-ins are currently available in Isight:
the First Order Reliability Method (FORM) and the Mean Value method. These
methods are used to evaluate the reliability of the current design point. Each method
is summarized in the following sections.
[Figure: FORM reliability analysis concept. The failure surface g(X) = 0 in X-space is
transformed to g(U) = 0 in standard normal U-space, separating the safe region
(g(U) > 0) from the failure region (g(U) < 0). The reliability index β is the distance
from the origin to the most probable point (MPP) U*, the failure probability is
Pf = Φ(−β), and the major contribution to the failure probability comes from the area
near the MPP.]
FORM takes advantage of the desirable properties of the standard normal probability
distribution. Hasofer and Lind (1974) defined the reliability index as the shortest
distance from the origin of the standard normal space (U-space) to a point on the
failure surface. Mathematically, determining the reliability index is a minimization
problem with one equality constraint:
β = min √(UᵀU) subject to g(U) = 0  [1]
If the failure function g(U) is linear in terms of the normally distributed random
variables Ui, the failure probability is calculated as:
Pf = Φ(−β)  [2]
In this equation, Φ is the standard normal distribution function. If the failure function is
nonlinear, or the random variables are not normally distributed, a good approximation
can still be obtained using the equation above, provided that the curvature of the failure
surface at the MPP is not too large in magnitude.
The algorithm used to solve for the MPP in the FORM Isight plug-in is the traditional
Hasofer-Lind-Rackwitz-Fiessler (HL-RF) method, a special scheme which does not
require line search during optimization. The HL-RF method was originally derived
from the Kuhn-Tucker necessary condition by Hasofer and Lind (1974). Rackwitz and
Fiessler (1978) suggested that non-normally distributed random variables can be
transformed to equivalent normal variables for use in the Hasofer-Lind reliability
analysis.
β = μg / σg  [3]

where μg and σg are the mean and standard deviation of the failure function g(X).
Given the reliability index from the preceding equation, the probability of failure can
then again be calculated as Pf = Φ (-β), as in Eqn. 2.
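The HL-RF update described above can be sketched in a few lines. This is an illustration of the iteration in U-space under an assumed linear test function, not Isight's implementation; the function names and the finite starting point are assumptions:

```python
import math

def hlrf_beta(g, grad_g, n, tol=1e-8, max_iter=100):
    """Sketch of the Hasofer-Lind-Rackwitz-Fiessler iteration in standard
    normal U-space. g: failure function g(U); grad_g: its gradient;
    n: number of random variables."""
    u = [0.0] * n  # start at the origin of U-space (the mean point)
    for _ in range(max_iter):
        gu = g(u)
        grad = grad_g(u)
        norm2 = sum(gi * gi for gi in grad)
        # HL-RF update: project onto the linearized failure surface
        scale = (sum(gi * ui for gi, ui in zip(grad, u)) - gu) / norm2
        u_new = [scale * gi for gi in grad]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    return math.sqrt(sum(ui * ui for ui in u))  # reliability index beta

# Linear failure function g(U) = 3 - u1 - u2; the exact shortest distance
# from the origin to the surface g = 0 is 3 / sqrt(2).
beta = hlrf_beta(lambda u: 3.0 - u[0] - u[1],
                 lambda u: [-1.0, -1.0], 2)
```

For a linear g the iteration converges in one step, consistent with Eqn 2 being exact in that case.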
The Mean Value method, based on a first order Taylor’s expansion, is the most efficient
(in terms of the number of function evaluations, or simulation program executions,
needed to calculate the reliability) of the reliability analysis methods implemented in
Isight. It requires only a single evaluation of the failure function and its sensitivities. However,
the mean-value reliability index is accurate only for linear failure functions with
normally distributed random variables. In most other situations, the mean-value
reliability index is not accurate since the most probable point is not on the failure
surface. Using a second order Taylor's expansion, a higher level of accuracy can be
achieved.
For the first order Taylor's expansion, the performance response is approximated
around the mean point as:

y ≈ f(μx) + Σi (∂f/∂xi)(xi − μxi)  [4]
The mean of this performance response is then calculated by setting the uncertain
design parameters to their mean value, μx:
μy = f(μx)  [5]
σy² = Σi (∂f/∂xi)² σxi²  [6]
where σxi is the standard deviation of the ith parameter and n is the number of
uncertain parameters. For more details on this approach for variability estimation for
robust design purposes, see (Phadke, 1989; Chen, 1996).
Since first order derivatives of responses with respect to random variables are needed
in Eqn 6, and the mean value point is needed in Eqn 5, the first order Taylor’s
expansion estimates require n+1 analyses for evaluation. Consequently, this approach
is significantly more efficient than Monte Carlo simulation, and often more efficient
than DOE while including distribution properties. However, the approach loses
accuracy when responses are not close to linear. This method is therefore
recommended when responses are known to be linear or close to linear, and also when
computational cost is high and rough estimates are acceptable.
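The n+1 evaluation pattern described above (one run at the mean point plus one per parameter for the gradients) can be sketched with forward finite differences. An illustration only; the function names and step size are assumptions:

```python
import math

def first_order_mean_std(f, mu, sigma, h=1e-5):
    """First order Taylor estimates of response mean and standard deviation:
    mean = f(mu); var = sum((df/dxi)^2 * sigma_i^2).
    Uses n + 1 function evaluations for n uncertain parameters."""
    f0 = f(mu)                       # 1 evaluation at the mean point
    var = 0.0
    for i, s in enumerate(sigma):    # n more evaluations for the gradients
        x = list(mu)
        x[i] += h
        dfdx = (f(x) - f0) / h       # forward-difference first derivative
        var += (dfdx * s) ** 2
    return f0, math.sqrt(var)

# For a linear response y = x1 + x2 the estimate is exact:
# sigma_y = sqrt(3^2 + 4^2) = 5
m, s = first_order_mean_std(lambda x: x[0] + x[1], [1.0, 2.0], [3.0, 4.0])
```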
y ≈ f(μx) + Σi (∂f/∂xi)(xi − μxi)
  + (1/2) Σi Σj (∂²f/∂xi∂xj)(xi − μxi)(xj − μxj)  [7]
The mean of this performance response is then obtained by taking the expectation of
both sides of the expansion:
μy = f(μx) + (1/2) Σi (∂²f/∂xi²) σxi²  [8]
σy² = Σi (∂f/∂xi)² σxi² + (1/2) Σi Σj (∂²f/∂xi∂xj)² σxi² σxj²  [9]
where σxi is the standard deviation of the ith parameter and σxj is the standard
deviation of the jth parameter. For more details on this approach for variability
estimation and its use for robust design, see (Hsieh and Oh, 1992).
Once the performance response standard deviation is estimated with the necessary
sensitivities, robustness or quality can be assessed (sigma level, percent variation,
probability, or defects per million parts, based on defined specification limits).
Since second order derivatives of responses, including crossed terms, with respect to
random variables are needed in Eqn 9, and the mean value point is needed in Eqn 8, the
second order Taylor’s expansion estimates require (n+1)(n+2)/2 analyses for
evaluation. Because of this expense, cross terms are not usually included (only pure
second order derivatives), and the expense is reduced to 2n+1 analyses. This approach is then
essentially twice as computationally expensive as the first order Taylor’s expansion,
and usually will be more expensive than DOE. For low numbers of uncertain
parameters, this approach can still be more efficient than Monte Carlo simulation, but
becomes less efficient with increasing n. (Monte Carlo simulation is not dependent on
the number of parameters.) The second order Taylor’s expansion approach is therefore
recommended when curvature exists and the number of uncertain parameters is not
excessive.
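The reduced 2n+1 form described above (mean-point run plus a central-difference pair per parameter, cross terms dropped) can be sketched as follows. This is an illustration under those stated assumptions, not Isight's implementation:

```python
import math

def second_order_mean_std(f, mu, sigma, h=1e-4):
    """Second order Taylor estimates without cross terms (2n + 1 analyses):
    mean = f(mu) + 0.5 * sum(d2f/dxi2 * sigma_i^2)
    var  = sum((df/dxi * sigma_i)^2) + 0.5 * sum((d2f/dxi2)^2 * sigma_i^4)"""
    f0 = f(mu)                       # 1 evaluation at the mean point
    mean, var = f0, 0.0
    for i, s in enumerate(sigma):    # 2 evaluations per parameter
        xp, xm = list(mu), list(mu)
        xp[i] += h
        xm[i] -= h
        fp, fm = f(xp), f(xm)
        d1 = (fp - fm) / (2.0 * h)             # central first derivative
        d2 = (fp - 2.0 * f0 + fm) / (h * h)    # pure second derivative
        mean += 0.5 * d2 * s * s
        var += (d1 * s) ** 2 + 0.5 * (d2 * s * s) ** 2
    return mean, math.sqrt(var)

# For y = x^2 with x ~ N(0, 1): exact mean 1 and variance 2, both of
# which the second order estimate recovers (the first order estimate
# would give mean 0 and variance 0 here).
m, s = second_order_mean_std(lambda x: x[0] ** 2, [0.0], [1.0])
```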
The Mean Value reliability method utilizes the Taylor's series expansion of failure
functions g(X) at the mean values µx. The mean-value reliability index is then
calculated as a function of the mean and standard deviation of g(X), as shown in Eqn 3.
With DOE, potential values for uncertain design parameters are defined by a range
(low and high values), a nominal baseline plus/minus some delta or percent, or through
specified values. In this case each run in the designed experiment is a combination of
the defined levels of each parameter. Specific probability distributions are not defined
for the uncertain parameters. Consequently, this approach represents a higher level of
uncertainty in uncertain design parameters, with expected ranges rather than specific
distributions. Given data from a designed experiment, capturing ranges of uncertain
parameters, robustness or quality can be assessed through determining sigma level,
percent variation within specification limits, probability of meeting specification
limits, or defects per million parts based on defined specification limits.
With a robust design approach, robust solutions obtained for systems involving significant uncertainty (or
noise) are usually not “optimal” in the traditional sense. The focus of robust design
optimization then is to search for robust or “flat” regions of a design space, to reduce
the effects of variations in uncertain design parameters, through the inclusion of
response variance or standard deviation within the robust optimization formulation.
This concept is illustrated in Figure A-18 on page 548.
If the function in Figure A-18 is to be minimized, the solution given by point 1 would
be chosen if uncertainty and performance variation are not considered, as with
deterministic optimization. Given uncertainty in the design parameter x, defined as a
variation of ±Δx around the chosen value, the solution at point 1 leads to a large level
of variation, Δf1, of the performance function f(x). To the right of point 1 in the figure,
there exists a more ‘flat’ region of f(x), which can be shown to be more robust, or less
sensitive to variation in the design parameter x. If point 2 is chosen, for the same design
parameter variation, ±Δx, the variation of the performance function, Δf2, is
significantly less than that at point 1. The sacrifice in choosing point 2 is the increase in
the mean value of f(x), which is higher at point 2 than at point 1. This is the trade-off
that must be evaluated in searching for a robust solution as opposed to a solution with
optimal mean performance. It can be seen in the figure that an even flatter region than
that at point 2 exists further to the right of point 2 (direction of increasing x). Although
the performance variation may be even less in this region, the mean performance may
not be acceptable. Both elements of desired mean performance and reduced
performance variation must be incorporated in a robust optimization formulation.
[Figure A-18: Robust solution versus function minimum. For the same input variation
±Δx, the function minimum (point 1) produces a large performance variation Δf1,
while the flatter robust solution (point 2) produces a much smaller variation Δf2.]
The robust design objective for the “mean on target” and “minimize variation” robust
design goals is generally formulated as follows:
F = Σ(i=1..l) [ (w1i/s1i)(μyi − Mi)² + (w2i/s2i) σyi² ]  [11]
where w1i and w2i are the weights and s1i and s2i are the scale factors for the “mean on
target” and “minimize variation” objective components respectively for performance
response i, Mi is the target for performance response i, and l is the number of
performance responses included in the objective. For the case in which the mean
performance is to be minimized or maximized, rather than directed towards a target,
the objective formulation of equation Eqn 11 can be modified as shown in Eqn 12,
where the “+” sign is used before the first term when the response mean is to be
minimized, and the “-” sign is to be used when the response mean is to be maximized.
F = Σ(i=1..l) [ ± (w1i/s1i) μyi + (w2i/s2i) σyi² ]  [12]
For the robust optimization formulation given in equation 10, deterministic constraints
are then modified to create quality constraints by including both the mean and standard
deviation of performance in the formulation. These quality constraints are formulated
as follows:

μgi − n σgi ≥ LBi  [13]

μgi + n σgi ≤ UBi  [14]

where μgi and σgi are the mean and standard deviation of the ith constrained
performance response, and LBi and UBi are its lower and upper bounds.
Any “sigma-level” quality can be formulated by changing the constraints to reflect the
acceptable “number of sigmas”, n, of performance variation. As discussed earlier, ±3σy
was traditionally considered acceptable for engineering purposes, but to achieve six
sigma quality, to significantly reduce the number of defective parts per million and
associated probability of violating specification limits as shown in Table A-1 on
page 536, n in equations 13 and 14 should be set to 6. With the direct correlation
between sigma level and probability of consistently meeting the specifications, this
constraint formulation concept is similar to reliability constraints in reliability-based
design.
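The sigma-level constraint concept above amounts to checking that the μ ± nσ performance band lies inside the specification limits. A minimal sketch (argument names are illustrative, not Isight parameters):

```python
def meets_sigma_level(mu_y, sigma_y, lsl, usl, n_sigma=6.0):
    """Quality-constraint check: does the mu +/- n*sigma performance band
    lie within the lower (lsl) and upper (usl) specification limits?"""
    return (mu_y - n_sigma * sigma_y >= lsl
            and mu_y + n_sigma * sigma_y <= usl)

# A design with mean 10 against limits [9, 11]:
print(meets_sigma_level(10.0, 0.1, 9.0, 11.0, 6.0))  # 6-sigma band fits
print(meets_sigma_level(10.0, 0.5, 9.0, 11.0, 6.0))  # too much variation
```

Setting n_sigma to 3 reproduces the traditional ±3σ acceptance criterion; 6 gives the six sigma goal.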
References
Arora, J.S., 1989, Introduction to Optimum Design, McGraw-Hill, New York.
Chen, W., Allen, J. K., Tsui, K.-L., and Mistree, F., 1996, “A Procedure for Robust
Design: Minimizing Variations Caused by Noise Factors and Control Factors,” ASME
Journal of Mechanical Design, Vol. 118, No. 4, pp. 478-485.
Choi, K.K., Yu, X., and Chang, K., 1996, “A Mixed Design Approach for Probabilistic
Structural Durability”, 6th AIAA/USAF/NASA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, Bellevue, WA, pp. 785-795.
Harry, M. J., 1997, The Nature of Six Sigma Quality, Motorola University Press,
Schaumburg, Illinois.
Harry, M., 1998, “Six Sigma: A Breakthrough Strategy for Profitability,” Quality
Progress, Vol.31, No. 5, May 1998, pp. 60-64.
Hasofer, A.M. and Lind, N.C., 1974, “Exact and Invariant Second Moment Code
Format,” Journal of Engineering Mechanics, ASCE, Vol. 100, No. EM1, February, pp.
111-121.
Hsieh, C-C., and Oh, K. P., 1992, “MARS: a computer-based method for achieving
robust systems,” FISITA Conference, The Integration of Design and Manufacture, Vol.
1, pp. 115-120.
Melchers, R.E., 1987, Structural Reliability: Analysis and Prediction, Ellis Horwood
Series in Civil Engineering, John Wiley & Sons, New York.
Phadke, M. S., 1989, Quality Engineering using Robust Design, Prentice Hall,
Englewood Cliffs, New Jersey.
Rackwitz, R. and Fiessler, B., 1978, “Structural Stability Under combined Random
Load Sequences”, Computers and Structures, Vol. 9, pp. 489-494.
Thanedar, P.B. and Kodiyalam, S., 1991, “Structural Optimization using Probabilistic
Constraints”, Proceedings, AIAA/ASME/ACE/AHS/ASC Structures, Structural
Dynamics and Materials Conference, pp. 205-212, Paper No. AIAA-91-0922-CP.
Overview
The motive underlying robust design is to improve the quality of a product or process
by not only striving to achieve performance targets, but also by minimizing
performance variation. In robust design, parameters are classified using the following
terminology.
Control Factors (x). These parameters can be specified freely by a designer. These
are equivalent to design variables in optimization.
Noise Factors (z). These parameters are uncertain. They are either not under a
designer's control, or their settings are difficult or expensive to control. Noise
factors cause the response, y, to vary and lead to quality loss (performance
variation).
The focus in robust design is to reduce the variation of system performance responses
caused by uncertainty of noise factor values, or to reduce system sensitivity. Solutions,
which are system designs represented through settings of the control factors, are sought
that minimize response variation in addition to achieving performance targets (mean,
µy, on target and minimized variance, σy2).
Figure A-19. Product Array for Response Mean and Variance Estimation
[Figure: a control array with factors x1, x2, x3, … crossed with a noise array with
factors z1, z2, … at “+” and “−” levels; each control run i is evaluated at every noise
combination j, producing responses yij from which the mean μyi, standard deviation
σyi, S/N ratio S/Nyi, and loss Lyi are computed for each control run.]
Each array in the product array can be any designed experiment. In Figure A-19 on
page 552, a two level factorial design is depicted for both the control and noise array,
as indicated by the “+” and “-” levels for each control factor (x1, x2, x3, …) and each
noise factor (z1, z2, …). For each row of the control array (indicating one set of values
of the control factors), response values are generated for each noise factor combination
(each row of the noise array). For example, control array row 1 (run with noise row 1)
leads to the response value y11, control row 1 (run with noise row 2) leads to response
value y12, and so on. This experimentation strategy then leads to multiple response
values for each set of control factor settings, from which a response mean, μyi, and
variance or standard deviation, σyi, can be computed, as shown in Figure A-19.
Given the mean and variance data for each control experiment, the experiments can be
compared to determine which set of control settings best achieves “mean on target”
and “minimized variation” performance goals. Alternately, performance characteristics
or metrics that combine the effects of mean performance and performance variation
can be calculated and used to compare the set of designs represented by the control
factor experiments. The performance characteristics used by Taguchi are the
signal-to-noise ratio (shown as S/Nyi for each control experiment in Figure A-19), and
quality loss (measured using a loss function, Lyi in Figure A-19).
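The crossed-array evaluation described above can be sketched as follows: each control row is run against every noise row, and the per-row mean and standard deviation are computed from the resulting responses. The `response(x, z)` callable is a hypothetical stand-in for the simulation program:

```python
import math

def crossed_array_stats(control_runs, noise_runs, response):
    """For each control-array row, evaluate the response at every
    noise-array row and compute the mean and (sample) standard deviation
    of the resulting responses (the mu_yi and sigma_yi of Figure A-19)."""
    stats = []
    for x in control_runs:
        ys = [response(x, z) for z in noise_runs]
        mean = sum(ys) / len(ys)
        var = sum((y - mean) ** 2 for y in ys) / (len(ys) - 1)
        stats.append((mean, math.sqrt(var)))
    return stats

# Two-level control rows crossed with two noise rows; the noise term z
# is what spreads the responses around each control-row mean.
control = [(-1.0, -1.0), (1.0, 1.0)]
noise = [(-0.5,), (0.5,)]
stats = crossed_array_stats(control, noise,
                            lambda x, z: x[0] + x[1] + z[0])
```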
The S/N ratio calculation depends on the particular response being investigated:
a response for which a specified target value is desired is categorized as a “nominal
is best” response type
a response for which low values are desired is categorized as a “lower is better”
response type
and a response for which high values are desired is categorized as a “higher is
better” response type
The S/N ratios for these response types are typically formulated as described in Table
A-2.
The S/N ratio equations are formulated such that high values are desired (the control
experiment with the highest S/N ratio value is considered the best set of factor settings
of those included in the experiment), when S/N ratio is the performance characteristic
chosen for comparison. This formulation is consistent with the name of this
performance characteristic (signal-to-noise ratio) since the effects of noise are reduced
by increasing the ratio.
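The three response types can be illustrated with the S/N formulations commonly used in Taguchi analysis; these are assumed here to match the formulations of Table A-2, which is not reproduced in this text:

```python
import math

def sn_nominal_is_best(ys):
    """S/N = 10*log10(mean^2 / variance): high when on-target and consistent."""
    n = len(ys)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return 10.0 * math.log10(mean * mean / var)

def sn_lower_is_better(ys):
    """S/N = -10*log10(mean(y^2)): high when responses are small."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_higher_is_better(ys):
    """S/N = -10*log10(mean(1/y^2)): high when responses are large."""
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / len(ys))
```

In each case larger S/N is better, so the same "pick the control run with the highest S/N" rule applies to all three response types.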
The S/N ratio equations given above are valid only for positive response values
(0 ≤ y ≤ ∞). If negative values of a response are encountered in Isight, the following
S/N ratio formulations are substituted for the standard formulations:
Lower is better: -∞ ≤ y ≤ ∞
Higher is better: -∞ ≤ y ≤ ∞
For the “Lower is better” and “Higher is better” functions, when negative values are
encountered and the above formulations are used, the response values are normalized
by the maximum absolute response value as shown (max|y|). This normalization is
performed to prevent the exponential values from getting excessively large.
The second performance characteristic used by Taguchi Robust Design Techniques, the
loss function, is generally used to measure the “loss of quality” associated with
deviating from a targeted performance value. The basic concept is shown in Figure
A-20, where it is compared to the conventional specification-limit quality assessment
approach.
With the conventional approach, lower and upper specification limits (USL and LSL in
Figure A-20) are used to define the acceptable quality range. All values within the
limits are assumed to have no quality loss, and all values outside the limits are defined
as having 100% quality loss, and related parts are reworked or scrapped. The best
example of this concept is dimensioning and tolerancing. If a part is to be
manufactured to a length of 10 inches, with a tolerance of ±0.1 inches, parts inspected
to be between 9.9 and 10.1 inches are accepted (no quality loss) and parts less than
9.9 or greater than 10.1 inches will be rejected (100% quality loss). According to
Taguchi Robust Design Techniques, however, loss of quality occurs gradually when
moving in either direction from the target value, rather than as a sharp cutoff as in the
conventional approach. Therefore, quality loss is measured by the deviation from the
target.
With the presence of noise factors, the loss can be calculated for each control/noise
factor combination in the crossed product array, as shown in Figure A-19. The loss
values must then be averaged across the noise factor cases, for each control factor case,
to compare designs (control cases) and select the case with the minimum loss. The
quality loss can also be calculated for the “Lower is better” and “Higher is better”
response types, as with the S/N ratio (the traditional loss equation given above
obviously applies to the “Nominal is best” case, since a target value is desired).
The typical loss function formulations, when noise is present, are then given as
follows:
As with the traditional S/N ratio equations, the loss function equations given above for
the “Lower is better” and “Higher is better” response types are valid only for positive
response values (0 ≤ y ≤ ∞).
If negative values of a response are encountered in Isight, the following loss function
formulations are substituted for the standard formulations, with the response values
again normalized by the maximum absolute response value (as with S/N ratios):
Higher is better: -∞ ≤ y ≤ ∞
For all loss function formulations, low values are desired (loss of quality is always to
be reduced where possible). The control experiment case with the lowest loss value is
then considered the best set of factor settings of those included in the experiment, when
loss function is the performance characteristic chosen for comparison.
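For the "Nominal is best" case, the quadratic loss averaged over the noise cases of one control run can be sketched as below. The cost coefficient `k` is an assumption (in practice it is set from the cost of a rejected part at the specification limit):

```python
def average_quadratic_loss(ys, target, k=1.0):
    """Taguchi 'nominal is best' quadratic loss averaged over noise cases:
    L = k * mean((y - target)^2), which penalizes both mean offset from
    the target and spread around the mean (no sharp spec-limit cutoff)."""
    return k * sum((y - target) ** 2 for y in ys) / len(ys)

# Responses from one control run, off-target and with some spread:
loss = average_quadratic_loss([10.4, 10.6], target=10.0)
```

Unlike the conventional accept/reject approach, this loss grows continuously as responses move away from the target, even while still inside the specification limits.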
The Taguchi P-Diagram is shown in Figure A-21, including the three types of factors
as inputs to the system that influence the responses. The specific goal of Taguchi
analysis with dynamic characteristics, when a signal factor is included, is to identify a
design with:
The experiment data is gathered for a dynamic system by repeating the Product Array
of Figure A-19 on page 552 for each signal factor level. Thus the number of
experiments for a dynamic system is the product of the number of control experiments,
the number of noise experiments, and the number of signal factor levels.
where y is the response, M is the signal factor, and β is the slope of the line fitted to
the signal/response data.
To measure both the linearity of the signal-response relationship and the variability of
noise around this modeled relationship, a dynamic signal-to-noise ratio is calculated
for each experiment run (each control array experiment). To calculate the dynamic
signal-to-noise ratio, the type of the signal-response relationship must first be defined.
Three types exist:
Zero Point Proportional. The relationship between the signal factor and response
passes through zero (zero input leads to zero output). One good example of this
relationship is a weighing system to be designed and calibrated. If no weight is on
the scale, the measurement should read zero.
Reference Point Proportional. The relationship between the signal factor and the
response passes through a reference point. This type is used when the relationship
does not pass through zero or when the signal values are far from zero. This type
can be used if the relationship can be calibrated to a known standard, as with oven
controls where one setting is calibrated to a specific temperature.
where
The dynamic S/N ratio is calculated for each control experiment. The control
experiment with the largest S/N ratio value represents the best design (best
combination of linear fit and low variability) of those executed. The main effect on
the dynamic S/N ratio can be used to identify other combinations of control factor
settings that may produce even better designs.
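For the Zero Point Proportional case, the slope through the origin and a dynamic S/N ratio can be sketched as below. This uses a common formulation (slope strength against scatter about the fitted line); Isight's exact expression may differ, and the function name is illustrative:

```python
import math

def zero_point_proportional_sn(ms, ys):
    """Least-squares slope through the origin, beta = sum(M*y)/sum(M^2),
    and a dynamic S/N ratio 10*log10(beta^2 / MSE), where MSE is the
    mean squared deviation of the responses from the fitted line."""
    beta = sum(m * y for m, y in zip(ms, ys)) / sum(m * m for m in ms)
    mse = sum((y - beta * m) ** 2 for m, y in zip(ms, ys)) / len(ms)
    if mse == 0.0:
        return beta, float("inf")  # perfect fit: no noise about the line
    return beta, 10.0 * math.log10(beta * beta / mse)

# Signal levels and responses that are nearly proportional (y ~ 2M):
beta, sn = zero_point_proportional_sn([1.0, 2.0, 3.0], [2.1, 3.9, 6.0])
```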
where y is the response, ȳs is the average from the reference standard data (data
obtained with the signal factor at the reference value), M is the signal factor, Ms is the
reference standard (the reference signal factor level), and β is the slope of the line
fitted to the signal/response data.
where y1 to yj are the response values at the reference standard signal factor level, for a
given control experiment.
The reference standard average is then subtracted from each response data value, for a
given control experiment number, and the reference standard signal factor level is
subtracted from all signal factor levels. The steps for calculating the dynamic
signal-to-noise ratio for the Reference Point Proportional case, using the reference
standard adjusted data and signal factor levels, are then given as follows:
where
where y is the response, m = ȳ is the average response, M is the signal factor, M̄ is the
average of the signal factor levels, β is the slope of the line fitted to the
signal/response data, and e is the error in this fit.
The dynamic signal-to-noise ratio is calculated for the General Linear Equation case as
follows:
where
The nonlinear relation illustrated in Figure A-23 is not easily modeled by traditional
dynamic S/N ratio formulations of zero-point proportional form, reference-point
proportional, or the general linear equation form.
The procedure employed in this new approach is a two step approach, as shown in
Figure A-25 on page 568. Conceptually, the procedure is similar to traditional Taguchi
approaches:
One goal with Taguchi methods is always to reduce the response variation due to
the effects of uncontrollable noise. Control factor settings are sought that minimize
the effects of the noise factors.
In this case the “mean on target” Taguchi goal is actually expanded by the
requirement to move the entire signal-response relationship to the “ideal” or target
relationship.
Note that the processes illustrated in Figure A-25 on page 568 are performed in the transformed “linearized” space. In the new measurement space, the Y values of the reference noise condition are plotted on the x-axis and the observed response values are plotted on the y-axis. The reference noise condition is user-specified. It may be defined by a certain noise condition or by a hypothetical zero-noise condition. As a result of the transformation, the curve of N0, the reference noise condition, becomes a line through the origin at a 45-degree angle with the x-axis, as shown in Figure A-26 on page 570. The curves of the other noise conditions, N1 and N2, will vary around the N0 reference noise condition, not necessarily linearly.
The three noise conditions (N0, N1, and N2) are user defined. The combinations of
noise factor levels necessary to define N0, N1, and N2 are generated in one of two
ways:
If a full noise matrix is employed, a reference noise condition is still employed. This
reference noise condition is often estimated using the average response values at each
signal level. This approach allows for application of the dynamic-standardized S/N
methodology.
Note that in the generic version the denominator is divided by an additional normalization factor. This step differs from the traditional definitions of the S/N ratio.
Once the standardized S/N ratio values are calculated for all control experiments, the
calculated main effects can be used to “tune” the design for robustness, and to adjust
the system behavior to the target “ideal” conditions. A transformation concept similar
to that used for the dynamic-standardized S/N ratio approach is employed for this
tuning. The values of the x-axis in this transformation are the target response values
(the reference response targets) under N0 for each signal level. The values of the y-axis
are the calculated response values. Combinations of control factor levels can then be
explored to achieve a linear relationship in this transformed measurement space.
This tuning, illustrated in Figure A-27, is performed through the following procedure:
Byrne, D. M. and Taguchi, S., 1987, “The Taguchi Approach to Parameter Design,”
40th Annual Quality Congress Transactions, Milwaukee, Wisconsin, American Society
for Quality Control, pp. 19-26.
Chen, W., Allen, J. K., Tsui, K.-L., and Mistree, F., 1996, “A Procedure for Robust
Design: Minimizing Variations Caused by Noise Factors and Control Factors,” ASME
Journal of Mechanical Design, Vol. 118, No. 4, pp. 478-485.
Phadke, M. S., 1989, Quality Engineering using Robust Design, Prentice Hall,
Englewood Cliffs, New Jersey.
Ross, P. J., 1988, Taguchi Techniques for Quality Engineering, McGraw Hill, New
York.
Roy, R., 1990, A Primer on the Taguchi Method, Van Nostrand Reinhold, New York.
temperature is an ASA parameter that mimics the effect of temperature on a fast-moving particle in a hot object, such as molten metal, permitting the ball to make very high bounces and to bounce over any mountain to reach any valley, given enough bounces. As the temperature is made relatively colder, the ball cannot bounce as high, and it can also settle, becoming trapped in relatively smaller ranges of valleys.
We imagine that our mountain range is aptly described by a “cost function” (Objective
and Penalty parameter in Isight). We define probability distributions of the two
directional parameters, called generating distributions since they generate possible
valleys or states we are to explore. We define another distribution, called the
acceptance distribution, which depends on the difference of cost functions of the
present generated valley we are to explore and the last saved lowest valley. The
acceptance distribution decides probabilistically whether to stay in a new lower valley
or to bounce out of it. All the generating and acceptance distributions depend on
temperatures.
In a D-dimensional parameter space with parameters p^i having ranges [A_i, B_i], a new point is generated about the k-th last saved point (e.g., a local optimum), p_k^i, using a distribution defined as the product of per-parameter distributions g^i(y^i; T_i) in terms of random variables y^i in [-1, 1], where p_{k+1}^i = p_k^i + y^i (B_i - A_i), and “temperatures” T_i:

g^i(y^i; T_i) = 1 / [2 (|y^i| + T_i) ln(1 + 1/T_i)]
The cost-function difference, C(p_{k+1}) - C(p_k), is compared using a uniform random generator, U in [0, 1), in a “Boltzmann” test: if exp[-(C(p_{k+1}) - C(p_k))/T_cost] > U, where T_cost is the “temperature” used for this test, then the new point is accepted as the new saved point for the next iteration. Otherwise, the last saved point is retained.
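The acceptance step above can be sketched in a few lines. The function name and the injectable `rng` parameter are illustrative (added for testability), not part of Isight.

```python
import math
import random

def boltzmann_accept(cost_new, cost_old, t_cost, rng=random.random):
    """ASA-style "Boltzmann" acceptance test: downhill moves are always
    accepted, and uphill moves are accepted with probability
    exp(-(C_new - C_old) / T_cost), which shrinks as T_cost cools."""
    delta = cost_new - cost_old
    return math.exp(-delta / t_cost) > rng()
```

At high T_cost nearly every move passes the test (the ball bounces out of most valleys); as T_cost cools, only improving or nearly neutral moves are accepted.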
The annealing schedule for each parameter temperature, T_i, from a starting temperature T_i0, is T_i(k_i) = T_i0 exp(-c_i k_i^(1/D)), where k_i counts the points generated for that parameter and c_i is a tuning constant.
The annealing schedule for the cost temperature is developed similarly to the
parameter temperatures. However, the index for reannealing the cost function, k_cost,
is determined by the number of accepted points, instead of the number of generated
points as used for the parameters.
their relative first derivatives with respect to the cost function, in order to guide the
search “fairly” among the parameters.
Downhill Simplex
The downhill simplex method is a geometrically intuitive algorithm. A simplex is
defined as a body in n dimensions consisting of n+1 vertices. Specifying the location of
each vertex fully defines the simplex. In two dimensions, the simplex is a triangle. In
three dimensions, it is a tetrahedron. As the algorithm proceeds, the simplex makes its
way downward toward the location of the minimum through a series of steps. These
steps can be divided into reflections, expansions, and contractions. Most steps are
reflections, which consist of moving the vertex of the simplex where the objective
function is largest (worst) through the opposite face of the simplex to a lower (better)
point. Reflections maintain the volume of the simplex. When possible, an expansion
can accompany the reflection in order to increase the size of the simplex and speed
convergence by allowing larger steps. Conversely, contractions “shrink” the simplex,
allowing it to settle into a minimum or pass through a small opening like the neck of an
hourglass.
This method has the highest probability of finding the global minimum when it is
started with big initial steps. The initial simplex will then span a greater fraction of the
design space and the chances of getting trapped in a local minimum are smaller.
However, for complex hyperdimensional topographies, the method can break down.
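The reflection/expansion/contraction/shrink cycle described above can be sketched as follows. The coefficients (1.0 reflect, 2.0 expand, 0.5 contract/shrink) are the conventional Nelder-Mead defaults, not values stated in this guide, and the sketch omits convergence tests for brevity.

```python
def downhill_simplex(f, x0, step=1.0, iters=200):
    """Minimal downhill simplex sketch: reflect the worst vertex through
    the opposite face, expanding on success and contracting on failure;
    shrink the whole simplex toward the best vertex as a last resort."""
    n = len(x0)
    # initial simplex: x0 plus one vertex offset along each axis
    verts = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        verts.sort(key=f)
        worst = verts[-1]
        centroid = [sum(v[j] for v in verts[:-1]) / n for j in range(n)]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(verts[0]):
            # reflection found a new best point: try to expand further
            exp = [c + 2.0 * (c - w) for c, w in zip(centroid, worst)]
            verts[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(verts[-2]):
            verts[-1] = refl
        else:
            # contract the worst vertex toward the centroid
            contr = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                verts[-1] = contr
            else:
                # shrink every vertex halfway toward the best one
                best = verts[0]
                verts = [best] + [
                    [best[j] + 0.5 * (v[j] - best[j]) for j in range(n)]
                    for v in verts[1:]
                ]
    verts.sort(key=f)
    return verts[0]
```

For example, minimizing (x - 1)² + (y + 2)² from the origin walks the triangle down to the neighborhood of (1, -2).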
The following are the sequence of steps followed by the Modified Method of Feasible
Directions technique:
1. q = 0, x = x0
2. q = q + 1
The Modified Method of Feasible Directions technique uses one of the following direction-finding subproblems at each iteration q:

If no constraints are active or violated, the search direction S_q minimizes ∇F(x_{q-1}) · S_q subject to S_q · S_q ≤ 1 (a steepest-descent direction).

If any constraints are active and none are violated, the Modified Method of Feasible Directions minimizes ∇F(x_{q-1}) · S_q subject to ∇g_j(x_{q-1}) · S_q ≤ 0, j in J, and S_q · S_q ≤ 1

where J is the set of active and violated constraints.
MOST
This technique first solves the given design problem as if it were a purely continuous
problem, using sequential quadratic programming to locate an initial peak. If all design
variables are real, optimization stops here. Otherwise, the technique will branch out to
the nearest points that satisfy the integer or discrete value limits of one non-real
parameter, for each such parameter. Those limits are added as new constraints, and the
technique re-optimizes, yielding a new set of peaks from which to branch. As the
optimization progresses, the technique focuses on the values of successive non-real
parameters, until all limits are satisfied.
It has been said that one of the problems with optimization is that “there is nothing you
can say about an arbitrary system”. This statement is correct to some extent. However,
based on many years of experience in applying optimization to various engineering
problems, the following observations can be made. First of all, a system can be
classified once more is known about it. Secondly, once classified, there is some
combination of optimization methods that would work better than random guessing.
Knowing the type of system you have is critical to being able to solve the system
efficiently. Optimization theory has always been developed the other way around.
Assuming that the system has a certain mathematical form, what is the most cost
efficient way of optimizing this system? Over time, a large collection of optimization
methods based on such assumptions was developed. Each of these methods had many
degrees of freedom to adapt the method to the problem at hand. In essence, the specific
problem that people wanted to solve (for example, design a lighter structure) was now
transformed into finding the right algorithm for designing the structure. This second
task was in many ways a harder task than the original. For many engineers and
scientists, the problem was transformed from a known to an unknown domain. As a
consequence, optimization technology did not become the big commercial success
many had hoped for.
Thus the issue of control became a driving factor in the development of the Pointer
optimization engine. Pointer uses a proprietary algorithm that automatically controls a
set of optimization resources. Similarly, Pointer efficiently solves a wide range of
problems in a fully automatic manner by harnessing and leveraging the power of a
group of distinct complementary optimization algorithms.
the search and any previous experience in terms of the selection and right settings of
the optimizers.
The Pointer control algorithm varies the optimizer settings in such a way that either:
1. The best answer is found in the shortest time. In this case the optimizers are
configured such that the maximum rate of improvement of the objective function is
achieved from a single starting position.
2. The most experience is gained. The optimizer settings are selected in such a way
that the highest optimizer robustness is achieved for a given optimization time. The
robustness is defined as the mean harmonic error of the best objective function
(ever found) normalized by the local optimum for a set of random starting vectors.
This approach is an order of magnitude slower than merely finding the best answer,
but the experience gained will allow you to solve similar classes of problems in the
shortest possible time.
Usually, it is not possible for the user to identify the problem topology in such a way
that he/she can select the proper algorithms and settings. A real-life example is shown
in Figure A-30. The topology is that of the drag of an airfoil computed by CFD as a
function of its shape. Previously, the user (from experience) assumed that the drag was
a smooth function of the geometry. From the starting point, a small step was selected to
calculate the gradients accurately. Because the topology of the drag had surface waves,
the steepest downhill slope was considered to be along this wave. If the user had
picked a larger step size, he/she would have concluded that the direction normal to the
waves has the “relevant” steepest downhill slope.
These waves are not caused by the physics of the problem (drag), but by numerical
interference of converging loops inside the numerical code that calculates the drag.
Almost all simulation codes that use partial differential equations exhibit such
behavior. Note that the spikes in this plot represent code-crashes, another difficult
topological feature not inherent in the real physics.
A dozen approaches were identified to correct pathological topologies and find the
correct relevant search direction. These pathological topologies are created by the type
of numerical algorithm used in the simulation code and not by the specific design
problem the simulation code addresses. Therefore, the solution that Pointer presents is
valid over a wide range of topologies.
For smooth problems, the best optimizer is the sequential quadratic programming
(SQP) algorithm. The SQP algorithm uses function calls close to the starting point to
determine the topology of the problem. SQP is widely used in trajectory optimization
and structural optimization.
The SQP algorithm used in Pointer is NLPQL, developed by Dr. Klaus Schittkowski,
which solves general nonlinear mathematical programming problems with equality and
inequality constraints. It is assumed that all problem functions are continuously
differentiable. Proceeding from a quadratic approximation of the Lagrangian function
and a linearization of the constraints, a quadratic subproblem is formulated and solved
by the dual code QL. Subsequently, a line search is performed with respect to two
alternative merit functions and the Hessian approximation is updated by a modified
BFGS-formula.
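One widely used “modified BFGS” formula is Powell's damped update, which keeps the Hessian approximation positive definite even when the curvature condition fails. The sketch below shows that standard damping scheme; it is illustrative and not necessarily NLPQL's exact formula.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def damped_bfgs_update(H, s, y, theta_min=0.2):
    """Powell-damped BFGS update of the Hessian approximation H.
    s = x_{k+1} - x_k; y = difference of Lagrangian gradients.
    Damping blends y with H*s so the updated H stays positive definite
    even when the curvature condition s'y > 0 fails."""
    n = len(s)
    Hs = [dot(row, s) for row in H]
    sHs = dot(s, Hs)
    sy = dot(s, y)
    if sy >= theta_min * sHs:
        theta = 1.0  # undamped classical BFGS step
    else:
        theta = (1.0 - theta_min) * sHs / (sHs - sy)
    r = [theta * yi + (1.0 - theta) * hsi for yi, hsi in zip(y, Hs)]
    sr = dot(s, r)
    return [[H[i][j] - Hs[i] * Hs[j] / sHs + r[i] * r[j] / sr
             for j in range(n)] for i in range(n)]
```

When s'y is comfortably positive the update reduces to ordinary BFGS; when it is not, damping pulls r back toward H·s so the denominator s'r stays safely positive.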
face of the simplex to a lower point. If this new vertex is better, the old one is deleted.
If there is no improvement after a number of steps, the method “shrinks” the simplex
by reducing the length of each side, thus trapping the optimum solution. Instead of
moving the best point directly in the direction of steepest descent (the approach of
SQP) it moves the worst points of the initial set in the direction of the best. Thus SQP
and downhill simplex have little overlap in terms of assumptions and computational
use of objective function data.
A third algorithm in Pointer is the so-called evolution or genetic algorithm. SQP and
downhill simplex determine analytically where the best answer is based on previous
objective function calls. Genetic algorithms work well because they incorporate
randomness in their search. This randomness gives the algorithm the ability to correct deterministic
search bottlenecks. No objective function continuity is required for this method.
Genetic algorithms almost always work. However, they are not often the best
algorithms to use because of their very high computational expense.
However, any optimization problem can usually be solved when the right answer is
already known and the starting point “tweaked”. This comparison will therefore focus
on the added value of Pointer versus the already good set of core algorithms.
Although many problems have been solved with Pointer, this section uses a truss
optimization problem because it represents a typical design problem in mechanical or
aerospace engineering. Figure A-31 on page 588 shows a truss as it supports a series of
point loads.
The bridge is supported at the right edge and free at the left edge. It has six joints
whose positions are chosen by the optimizer so as to minimize the weight or cost
subject to a given set of loads. The weight is computed by determining the minimum
bar dimensions that can support the applied load.
The problem is hard because, as the joint locations move with respect to each other,
the loads on the bars change from compression to tension and back. A bar under
compression has to be much thicker (thin-walled cylinder) than a bar under tension
(wire). The weight of the truss structure is therefore a discontinuous function of the
locations of the joints.
The optimization task was to minimize the weight of the truss in 30 minutes. In 1993,
Mathias Hadenfeld completed the task of optimizing the truss from a random starting
point. Hadenfeld was an expert user of optimization. He was free to tweak and restart
the solution until he felt that no more progress could be made, but he could not change
the starting point. His results are shown in Figure A-34 on page 591 (blue symbols).
On average he had more success with the genetic algorithm than the downhill simplex
method, and the gradient method failed completely. On average his answers were 15%
away from the known optimum due to the difficult topology of the problem.
In 1999, the same problem was given to 13 groups of between 7 and 15 professional
engineers. They were given Pointer and its graphical user interface displaying the truss.
They were asked to minimize the weight without using the optimizer. They had to
manually input the joint coordinates and run the analysis to see whether an
improvement was made. Since an analysis could be done in less than a second, 30
minutes was ample time to complete the task. The number of iterations was between 10
and 30. In our experience this also represents the average number of industrial product
design iterations.
The groups of engineers came up with vastly different answers. Some were more than
twice as heavy as the best solution. However, some came up with solutions that were
within 5% of the right answer. It is interesting to note that they only achieved these
results through the use of the graphical representation of the truss. In a subsequent test
without the graphics, no engineer came within 20% of the optimizer's best answer.
Next, Pointer was given control over the core optimizers (red symbol). Just like the expert, Pointer required fewer function calls to find the answer with the genetic algorithm
than with the downhill simplex algorithm. But with its standard settings, which allow
the optimal use of all algorithms, Pointer was able to consistently get the right answer
from any starting point with just 1000 function calls (< 1 minute).
Even though the group of experts got very different answers when solving the problem
manually, all achieved the correct answer when using Pointer under its standard
settings.
Figure A-34 on page 591 shows the result of the test. Perhaps not surprisingly, Pointer performed exceedingly well on the test. It was the only code
capable of solving all the problems. This is all the more impressive considering that the
same standard settings were used for all the test cases and that there was no expert
intervention in the process. In terms of speed it is usually comparable to the best codes
for each of the specific problems. The “best” benchmark results were provided by Dr. Egorov, the author of IOSO. Other codes in the comparison were IOSO, NLPQL,
DONLP, and MOST.
Figure A-33. Design of a Truss for Minimum Weight Using Various Tools
Summary
The Pointer technique consists of a complementary set of optimization algorithms:
linear simplex, sequential quadratic programming, downhill simplex, and genetic
algorithms. Since all the optimizer control parameters are automatically set with a
special control algorithm, Pointer can efficiently solve a wide range of problems in a
fully automatic manner. This was demonstrated by benchmarking the code with Eric
Sandgren's optimization benchmark test. Pointer was the only algorithm that could
solve all problems and was usually as fast as the best algorithm known for each of the
individual test problems.
Pointer References
Schittkowski, K., 1985, “NLPQL: A Fortran Subroutine for Solving Constrained Nonlinear Programming Problems,” Annals of Operations Research, Vol. 5, pp. 485-500.
Ashley, H., 1982, “On Making Things the Best - Aeronautical Uses of Optimization,” Wright Brothers Lectureship in Aeronautics, Journal of Aircraft, Vol. 19, No. 1.
Dr. Spellucci:
(http://www.mathematik.th-darmstadt.de/ags/ag8/spellucci/spellucci.html).
Substantial changes were applied to the original NSGA to create NSGA-II; the only feature common to the two is non-dominated sorting. NSGA-II has a Pareto archive P and a population Q for the genetic search, just like SPEA, but it utilizes the Pareto archive more aggressively. In NSGA-II, the number of individuals in archive P is N, equal to the number of individuals in population Q; in SPEA, by contrast, the best number of individuals in archive P is generally perceived to be one-fourth of the individuals in Q. Genetic operators of crossover and mutation are performed on population Q(t), and the selection for extracting the next generation is applied on the set union of P(t) and Q(t), creating P(t+1). The selection consists of two mechanisms: “non-dominated sorting” and “crowding distance sorting.” Q(t+1) is then generated from P(t+1) by means of selection, the so-called “mating selection.”
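The two selection mechanisms named above can be sketched as follows. This is a minimal, unoptimized illustration of non-dominated sorting and crowding distance for minimization problems, not Isight's implementation.

```python
def non_dominated_sort(points):
    """Partition objective vectors (minimization) into Pareto fronts,
    as in NSGA-II's first selection mechanism."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def crowding_distance(points, front):
    """NSGA-II crowding distance within one front: boundary points get
    infinity; interior points accumulate the normalized gap between
    their neighbors along each objective."""
    dist = {i: 0.0 for i in front}
    n_obj = len(points[front[0]])
    for m in range(n_obj):
        order = sorted(front, key=lambda i: points[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = points[order[-1]][m] - points[order[0]][m] or 1.0
        for prev, cur, nxt in zip(order, order[1:], order[2:]):
            dist[cur] += (points[nxt][m] - points[prev][m]) / span
    return dist
```

Selection then fills the next archive front by front, breaking ties within the last admitted front by preferring the largest crowding distance.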
Meanwhile, Zitzler improved his SPEA. SPEA2 was proposed in 2001. SPEA2 was an
improvement over SPEA in the following areas:
The second point mentioned above is the same as the mechanism used in NSGA-II.
Moreover, SPEA2 and NSGA-II have the following common points between them:
In other words, the outline of the algorithm is similar in both NSGA-II and SPEA2; the differences exist only in the definition of fitness, the reducing mechanism of the archive, and the selection for Q(t+1). Today, these two algorithms are the main methods for multi-objective Genetic Algorithms. It is also said that the performance of the two is almost even.
single-objective Genetic Algorithm. The idea of Pareto optimality was not explicitly
included.
The first multi-objective Genetic Algorithm that adopted the idea of Pareto optimality
explicitly was MOGA (Multi-Objective Genetic Algorithm), proposed by C.M.
Fonseca and P.J. Fleming in 1993. In the remainder of this section, this algorithm is
referred to as “Fonseca's MOGA,” in order to distinguish it from the generic
designation of Multi-Objective Genetic Algorithm.
Although Fonseca's MOGA was the first multi-objective Genetic Algorithm that
adopted the idea of Pareto optimality explicitly, problems were reported. In the
calculation of fitness with niching, as mentioned previously, the algorithm sometimes
generated irrational fitness values, such as solutions on the Pareto front that had lower
fitness due to niching.
This means that solutions on the Pareto front will have the highest fitness value. It is similar to Fonseca's MOGA in that it performs niching based on the sharing distance.
Pareto optimality and the idea that crowding level and ranking value are equal to
fitness in SPEA.
The term “neighborhood cultivation” means that the individuals to be crossed over
must be within a certain range of the objective space. The idea comes from research for
improving the searching performance of multi-objective Genetic Algorithm by means
of dividing populations into sub-populations, and sending each sub-population onto
distributed CPUs in a network environment. In the divided population of such
multi-objective Genetic Algorithms, a division is decided by one of the element
objectives. Genetic operators are applied in each divided sub-population.
Consequently, in the divided model, crossover operations are performed between
neighbors, which are grouped by the division based on an objective. From a series of
numerical research on this divided population model, it was concluded that crossover
with similar individuals would produce better results than that of greatly differing
individuals. The “neighborhood cultivation” is the crossover operation that performs
the crossover with similar individuals. When the number of objective functions is p, individuals are sorted according to one of those objective functions, and crossover is performed between adjoining pairs. The objective function used for sorting is changed each generation. However, with complete sorting there is a danger of losing diversity in the solution. For this reason, a disturbance of around five percent is applied to the sorted individuals before the crossover operation.
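The neighborhood cultivation step can be sketched as follows. Reading the “five percent disturbance” as a handful of random adjacent swaps is an assumption; the function name and `rng` parameter are illustrative.

```python
import random

def neighborhood_pairs(population, objectives, gen, noise=0.05, rng=None):
    """Sketch of NCGA neighborhood cultivation: sort individuals by one
    objective (rotated each generation), lightly jitter the order to
    preserve diversity, then pair adjacent individuals for crossover."""
    rng = rng or random.Random(0)
    m = gen % len(objectives)  # rotate the sorting objective per generation
    order = sorted(range(len(population)),
                   key=lambda i: objectives[m](population[i]))
    # disturbance: swap a few random adjacent entries (assumed reading
    # of the ~5% perturbation described in the text)
    for _ in range(max(1, int(noise * len(order)))):
        j = rng.randrange(len(order) - 1)
        order[j], order[j + 1] = order[j + 1], order[j]
    return [(population[order[k]], population[order[k + 1]])
            for k in range(0, len(order) - 1, 2)]
```

Each returned pair is a crossover mate set of near neighbors in objective space, which is the point of the technique.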
The method of selecting the search population Q(t+1) from the Pareto archive P(t+1) in
NCGA involves simply copying archive P(t+1) of N individuals onto Q. On the other
hand, Non-dominated Sorting Genetic Algorithm - NSGA-II and SPEA2 select
individuals by binary tournament selection with repetition. For more information on
these two algorithms, see “NSGA-II Reference Information,” on page 592. This
difference relates to the standpoint of whether to put the selection pressure onto elites.
The disadvantage of this option is that the selection pressure can result in lost diversity and lead to a danger of being trapped in a local optimum.
A research group tested NCGA and compared it with NSGA-II and SPEA2 with
numerous test functions, including an element knapsack problem. They concluded that,
when the following are true, NCGA provides better results than NSGA-II or SPEA2:
NCGA References
Major references and papers are listed below. The references apply to the
Neighborhood Cultivation Genetic Algorithm - NCGA and Non-dominated Sorting
Genetic Algorithm - NSGA-II sections only.
Since the dimension x can refer to any arbitrary dimension or property, the exponential
factor β accounts for this. For example, if a frame structure were to be designed where
the member cross sections were assumed to be square with length x, then the ideal
value of β would be 0.5 for axial loads and 0.33 for flexural loads. In the Stress Ratio
algorithm, all design variables use the same β with the understanding that the smaller
the exponent, the more stable (and slower) the convergence.
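The resizing rule implied here is the classical fully stressed design recursion. The form below, x_new = x_old · (σ/σ_allowable)^β, is the conventional stress-ratio formula and is an assumption, since the guide's own equation is not reproduced in this text.

```python
def stress_ratio_update(x, stress, allowable, beta):
    """Fully-stressed-design style resizing (assumed conventional form):
    scale each sizing variable by the stress ratio raised to beta, so
    overstressed members grow and understressed members shrink.
    beta = 0.5 suits axial loading of a square section (area ~ x^2),
    beta = 0.33 suits flexural loading, per the exponents quoted above."""
    return [xi * (s / allowable) ** beta for xi, s in zip(x, stress)]
```

A smaller β takes smaller steps per iteration, matching the note that a smaller exponent gives more stable but slower convergence.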
RBF Model
Radial Basis Functions are a type of neural network employing a hidden layer of radial
units and an output layer of linear units, and characterized by reasonably fast training
and reasonably compact networks.
Weissinger (1947) was the first to use numerical potential flow to calculate the flow around wings; the potential flow equations are a radial basis function. R. Hardy (1971) realized that the same concept could be used to fit geophysical data. Broomhead, D. S., and D. Lowe (1988) renamed this technology “neural nets,” and it was subsequently used to approximate all types of behavior.
Usage in Isight
In Isight we follow the Hardy (1972) method as described by Kansa (1999):
Obviously, as the type of physics being modeled varies, a different type of basis function is needed to provide a good fit. The response surface goes through all the given interpolation data.
For the Isight implementation, the following variable power spline radial basis
function is used:
The reason for choosing this radial basis function is its ability to model extreme
functions within a narrow range of values of c.
For a value c = 1.15, a good approximation of a step function can be achieved with just
seven interpolation points. For a value of c = 2, a good approximation of a linear
function can be achieved with just three interpolation points, as shown below.
Picking c = 3 for the step function example would produce an approximation that
would go through the seven data points, but it would more resemble a single sine wave
than a step function. There are obviously an infinite number of solutions that will go
through any given set of data points.
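The interpolation mechanics can be sketched as follows. Taking the “variable power spline” basis as φ(r) = r^c is an assumption (the guide names the family but its exact formula is not reproduced here); solving the dense system by Gaussian elimination is for illustration only.

```python
def rbf_fit(xs, ys, c):
    """Fit an interpolating 1-D RBF  f(x) = sum_j w_j * |x - x_j|**c
    through all data points. The weights come from solving the dense
    interpolation system by Gaussian elimination with partial pivoting
    (pivoting matters: the diagonal entries phi(0) are all zero)."""
    n = len(xs)
    A = [[abs(xi - xj) ** c for xj in xs] for xi in xs]
    b = list(ys)
    for k in range(n):  # forward elimination
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    w = [0.0] * n
    for k in reversed(range(n)):  # back substitution
        w[k] = (b[k] - sum(A[k][j] * w[j] for j in range(k + 1, n))) / A[k][k]
    return lambda x: sum(wj * abs(x - xj) ** c for wj, xj in zip(w, xs))
```

By construction the surface passes through every data point; the exponent c only controls how it behaves between them, which is why the choice of c matters so much.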
In most of the literature this problem is solved by splitting the interpolation data into
two groups. One group is used to create the radial basis function approximation and
one is used to compute the error between the radial basis function approximation for
those points and the actual function values. The shape function is optimized to
minimize the summed errors. This is a valid approach when a lot of data points are
available for the interpolation, but in our practice we deal mostly with very sparse data
sets and it seems inefficient to use only half the data to create the actual response
surface.
So we adopted a different approach. We define a good fit as one whereby the shape of the curve does not change when a point is removed. We optimize the value of c for a minimum sum of the errors of the points each predicted from the remaining N-1 data points.
This approach can be illustrated by returning to the picture of the straight-line approximation: we approximate the first point (circle) from the two top points (squares). The center point is approximated from the two extreme points, and the top point is approximated from the two bottom points. For c = 2, the sum of the errors for the “missing points” is low.
For c = 0.2, the sum of the errors of the “missing points” is extremely high.
Note: We would like to thank Dr. Kansa for getting us started with the development of
this method.
References
J. Weissinger. “The lift distribution of swept-back wings”. NACA TM 1120, 1947.
Kansa, E.J. “Motivation for using radial basis functions to solve PDE's”. 1999.
(Unpublished paper: author kansa@IrisINTERNET.net).
Where:
A lower order model (Linear, Quadratic, Cubic) includes only lower order polynomial terms (only linear, quadratic, or cubic terms, respectively). Notice that the 3rd and 4th order models in Isight do not have any mixed polynomial terms (interactions) of order 3 and 4; only pure cubic and quartic terms are included, to reduce the amount of data required for model construction.
This option allows you to select a sub-set of polynomial terms using one of the
four available term selection methods (Sequential Replacement, Stepwise
Regression (Efroymson's algorithm), Two-at-a-time Replacement, or Exhaustive
Search). For more information about term selection, see “Polynomial Term
Selection in RSM,” on page 607.
The number of design points (if Random Designs is used for initialization).
The size of the design space around the baseline point in which the initial random
designs are generated (if Random Designs option is used for initialization).
The size of the design space can be set individually for each input parameter. The
bounds of the design sub-space can be entered directly (absolute values) or
calculated by Isight by applying lower and upper bounds to the baseline value of
each parameter (relative to baseline).
Sampling data points needed for initialization of the Response Surface Model
approximations can be obtained using one of the available sampling methods in Isight.
The typical initialization mode for a Response Surface Model, if no previous data is available, is Random Designs. In this case, Isight generates the required number of random designs inside the specified boundaries and then runs an exact analysis for each of those designs. The obtained data are then used for calculating the polynomial coefficients of the model; a least squares fit is used to calculate the coefficients.
The recommended number of sampling points for initialization is twice the number of
polynomial coefficients, which for a linear polynomial is (N+1), for a quadratic
polynomial is (N+1)(N+2)/2, for a cubic polynomial is (N+1)(N+2)/2 + N, and for a
quartic polynomial is (N+1)(N+2)/2 + 2N, where N is the number of input variables.
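The coefficient counts above translate directly into a small helper; the function name is illustrative.

```python
def recommended_points(n_inputs, order):
    """Recommended number of sampling points for RSM initialization:
    twice the number of polynomial coefficients, using the counts
    quoted in the text (Isight's cubic/quartic models add only pure
    x^3 and x^4 terms, no mixed 3rd/4th-order interactions)."""
    n = n_inputs
    coeffs = {
        "linear": n + 1,
        "quadratic": (n + 1) * (n + 2) // 2,
        "cubic": (n + 1) * (n + 2) // 2 + n,
        "quartic": (n + 1) * (n + 2) // 2 + 2 * n,
    }[order]
    return 2 * coeffs
```

For three inputs, this gives 8, 20, 26, and 32 recommended sampling points for the linear, quadratic, cubic, and quartic models, respectively.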
Selects the best model when a limited number of design points are available
Given a set of k predictor variables X1, X2, X3,…,Xk, select a subset of p (p<k)
predictor variables that minimizes the Residual Sum of Squares:
In the case of RSM term selection, predictor variables are the polynomial terms, X1,
X2,…, X1^2, X2^2, …, X1*X2, etc. The best combination of the polynomial terms is
selected so that the Residual Sum of Squares is minimized. Since the residuals can be
non-zero only when the model has at least one degree of freedom, minimization of the
RSS implies that the maximum number of polynomial terms selected must be lower
than the number of design points used for the RSM. Otherwise the RSS will be exactly
zero and no term selection will be possible.
The four term selection methods available in Isight have the following features:

Sequential Replacement. This method uses the following steps:
  start with the constant term, then select the next best term
  at every step of the forward selection, for every previously selected term, find the best replacement that will decrease the RSS and swap the variables
The Sequential Replacement algorithm does not guarantee the best model.
Stepwise Regression (Efroymson's algorithm). This method is a variation of Forward selection with the following steps:
  start with the constant term, then select the next best term
  at every step of the forward selection, add the next best term if it sufficiently decreases the RSS, using the following criterion
  at every step of the forward selection, check whether one of the selected terms can be dropped without appreciably increasing the RSS, using the following criterion
  repeat the process until no more terms satisfy the first criterion or until the maximum desired number of terms is selected
start with the constant term, select the next best term
find the best replacement combination that will decrease the RSS and swap the
variables
The Exhaustive Search algorithm is the most expensive one from the four
available. It guarantees finding the best model at the cost of a high computational
time. The number of design points and the number of selected terms will greatly
affect the computational cost, and can make this algorithm a non-viable option for
large data sets and a large number of inputs.
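The greedy forward-selection step that the methods above build on can be sketched as follows. This is an illustrative Python sketch, not the Isight implementation: it starts from the constant term and repeatedly adds the candidate term that most reduces the RSS, omitting the stepwise entry/drop criteria and replacement moves.

```python
import numpy as np

def rss(A: np.ndarray, y: np.ndarray) -> float:
    """Residual Sum of Squares of the least-squares fit of y on columns of A."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

def forward_select(terms: dict, y: np.ndarray, max_terms: int) -> list:
    """Greedy forward selection: at each step add the term that minimizes RSS.

    `terms` maps a term name to its column of values at the design points.
    """
    columns = {"const": np.ones(y.size), **terms}
    selected = ["const"]
    while len(selected) < max_terms:
        remaining = [t for t in columns if t not in selected]
        if not remaining:
            break
        best = min(
            remaining,
            key=lambda t: rss(
                np.column_stack([columns[s] for s in selected] + [columns[t]]), y
            ),
        )
        selected.append(best)
    return selected

# Example: y is built from x1 and x1*x2, so those terms should dominate
# the selection among the quadratic candidates.
rng = np.random.default_rng(1)
x1, x2 = rng.random(30), rng.random(30)
y = 3.0 * x1 + 2.0 * x1 * x2 + rng.normal(0.0, 0.01, 30)
terms = {"x1": x1, "x2": x2, "x1^2": x1**2, "x2^2": x2**2, "x1*x2": x1 * x2}
print(forward_select(terms, y, max_terms=3))
```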
Index

A
activity components 22, 238
adaptive simulated annealing
  overview 575
  parameters 130
API, component 37
approximation component
  coefficient data 252
  defining parameters 243
  error analysis 251
  selecting technique 239
  specifying data file 239
  technique options 246
approximation models
  RBF model 598
  response surface model 605
archive-based micro genetic algorithm
  parameters 125

C
calculator component
  API 260
  array parameters 259
  editing 255
  limitations 260
  undeclared parameters 258
central composite design (DOE) technique 513
COM component
  editing 262
  usage 261
component API 37
Component contents 36
components
  preferences 36
  properties 25

D
data exchanger component
  adjusting a basic swipe 306
  calculation editor 319
  changing the format of a section of a data source 299
  click 278
  creating a bottom-up data exchange 292
  creating a new program 279
  creating a top-down data exchange 288, 291
  data source area 270
  deleting a data source 323
  editing a program 287
  editing Java code directly 303
  editing read/write statement 296
  filtering parameters 322
  for loop editor 321
  formatting numbers during a write operation 298
  full Java code view 276
  general text format option 304

G
generalized reduced gradient - LSGRG2
  overview 581
  parameters 136
Grouping Parameters
  Calculator 256
  DOE, factors 46
  DOE, post processing 51
  Excel 347
  Loop, Do Until 79
  Loop, For 68
  Loop, For Array 70
  Loop, For Each 74

I
iSIGHT component
  "directories with spaces" problem 391
  default execution directory, changing 391
  editing 379
  file parameters 384
  iSIGHT version required 378
  limitations 378
iSIGHT file parser component
  advanced parser, using 405
  creating parse using component editor 406
  editor overview 393
  importing description file 396

M
mail component
  API 411
  editing 408
MATLAB component
  actions, adding 414
  defining
    commands 423
    mappings 420
  editor, starting 414
  environment, setting 413
  usage 412

N
NCGA (neighborhood cultivation genetic algorithm)
  overview 593
  parameters 143
NLPQL (sequential quadratic programming)
  overview 583
  parameters 145
normal distribution 521
NSGA-II (non-dominated sorting genetic algorithm)
  overview 592
  parameters 147

O
optimization component
  API 153
  configuring 105
  editing multiple parameters 123
  feasibility 103
  mapping items to parameters 118
  overview 102
  preferences 152
  step epilogue 31
  step prologue 31
  tuning parameters 124
optimization techniques
  adaptive simulated annealing 575
  downhill simplex 577
  generalized reduced gradient - LSGRG2 581
  hooke-jeeves direct search method 582
  modified method of feasible directions 578
  MOST (multifunctional optimization system tool) 582
  multi-island genetic algorithm 582
  NCGA (neighborhood cultivation genetic algorithm) 593
  NLPQL (sequential quadratic programming) 583
  NSGA-II (non-dominated sorting genetic algorithm) 592
  pointer 583
  stress ratio optimizer 597
orthogonal arrays (DOE) technique 509
OS command component
  advanced options 435
  API 453
  basic options 428
  configuring 427
  grid adapter options 441
  grid plugin, configuring 450
  limitations 448
  LSF 442
  preferences 448
  required files options 440
  SSH 445
  usage 426

P
parameter study (DOE) technique 508
pause component
  ask a question 459
  automatic resume 462
  display parameters 461
  execution location 462
  known issues 463
  pause only 457
  specifying an action 456
  usage 455
pointer
  control algorithm 584
  optimization algorithms used 584, 586
  overview 583
  parameters 150
  performance 587
preferences
  accessing 36
  data exchanger component 324
  DOE component 65
  Excel component 364, 425
  Monte Carlo component 101
  optimization component 152
  OS command component 448
  simcode component 495
  six sigma component 210
  Taguchi robust design component 234
  Word component 504
process components 22, 40
properties, component 25

R
RBF model
  in Isight 599
  introduction 598
reference component
  "federation" defined 464
  accessing and using 466
  model, using 470
  prerequisites 465
  remote model, using 472
  setting up 466
  submodel, using 468
  usage 464
reliability analysis
  first order reliability method (FORM) 541
  mean value method 542
response surface model
  introduction 605
  R2 analysis of response surface models 609

S
script component
  dynamic Java 481
  editing 478
  job log 483
  limitations 485
  local directory 483
  parameter data types 482
  resizable arrays 484
  script execution 483
  usage 477
SDI component
  configuring 154
  editing multiple parameters 170
  mapping items to parameters 165
  overview 153, 211
simcode component
  editing 487
  files on command line 494
  how different from separate components 486
  preferences 495
  required files 494
  usage 485
simple random sampling 516
six sigma component
  API 210
  configuring 173
  editing multiple parameters 208
  mapping items to parameters 203
  overview 172
  preferences 210
six sigma robust design
  defined 534
  introduction 534
  optimization formulation 546
  sampling techniques 538
SSH 445
stress ratio optimizer
  overview 597
  parameters 151

T
Taguchi robust design component
  API 235
  configuring 211
  editing multiple parameters 233
  factor information 214
  mapping items to parameters 228
  post processing 226
  preferences 234
  responses 223
task component 235
triangular distribution 530
tuning parameters (optimization) 124

W
weibull distribution 524
Word component
  editing 497
  execution permissions (Fiper) 500
  preferences 504
  usage 496