A Structured Approach To Strategic Decis
mission set. This study took the democratic approach, as it is most applicable for long-term planning studies where focused mission designs have not as yet been selected.

A second study was completed at the Jet Propulsion Laboratory (JPL) under separate sponsorship, in parallel with the CRAI effort. This study utilized the same analysis structure, and its results are presented here under the title "Exploration Mission Analysis."

Methodology

The CRAI team employed a methodology described in Reference 4 and on the web site http://start1.jpl.nasa.gov. This method has been developed over several years at JPL, and several papers have been published [1,2,3,4].

CRAI and the NASA Space Architect's Architecture Group were to work closely together to develop requirements. Unfortunately, the President's announcement in February 2004 of a new exploration focus for NASA made the immediate acquisition of specific requirements difficult. Therefore, CRAI decided to use three pre-existing architectures as a focus for its work. These architectures are:

1. 1998 Mars Reference Mission [5]
2. OASIS mission [6]
3. JSC Architecture #1 [7]

These three architectures were chosen because they were available, substantial, and somewhat quantitative. The CRAI block leads extracted quantitative requirement data from these reports.

Technology Data Collection

The data-collection sheet was designed to be as simple as possible. The key data that it collects are: contact information, technology name, linkage to the capability-breakdown structure, expected cost, probability of success (PoS) if fully funded, metrics used, state of the art (SOA), expected increase over state of the art, and budgetary and development plans.

CRAI Analysis

The CRAI effort supplied a total of 440 technology data sheets, categorized into the following Technology Areas:

2.1 Communications & Info Systems
2.2 Space Utilities and Power
2.3 Human Support Systems
2.4 Automation and Robotics
2.5 In-Space Transportation
2.6 Scientific Instruments and Sensors
2.7 Structures and Materials
2.8 Crew Mobility
2.9 Launch Access

Of the 440 data sheets, 102 were unusable due to incompleteness in some of the essential fields. The remaining 338 data sheets were matched to requirements extracted from the three architectures. A technology was considered matching only if its metrics were the same as the requirements; for example, the technology "Ka-Band Travelling-Wave Tube (TWT) 100 to 250W" was considered a needed technology because its metric, watts of power in a travelling-wave tube, was matched by a requirement of 100 W of power for a TWT in the Mars Reference Mission. This strict matching of requirements to technologies reduced the number of technologies eligible for analysis to 47, and only technology areas 2.1-2.5 had matches.
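The strict metric-matching rule described above can be sketched as a simple filter. This is a minimal illustration, not the actual CRAI database: the records, field names, and metric strings below are hypothetical.

```python
# Sketch of the strict matching rule: a technology is eligible only if some
# architecture requirement uses exactly the same metric as the technology.
# Records and field names are illustrative, not the real CRAI data sheets.

technologies = [
    {"name": "Ka-Band Travelling-Wave Tube (TWT) 100 to 250W",
     "metric": "watts of power in a travelling-wave tube"},
    {"name": "Stirling Reactor",
     "metric": "specific power (W/kg)"},
]

requirements = [
    # e.g., the Mars Reference Mission requires 100 W of TWT power
    {"architecture": "1998 Mars Reference Mission",
     "metric": "watts of power in a travelling-wave tube",
     "need": 100},
]

required_metrics = {r["metric"] for r in requirements}
eligible = [t["name"] for t in technologies if t["metric"] in required_metrics]
print(eligible)  # -> ['Ka-Band Travelling-Wave Tube (TWT) 100 to 250W']
```

Under such a rule, a technology whose metric is phrased even slightly differently from the requirement is dropped, which is consistent with the sharp reduction from 338 data sheets to 47 eligible technologies.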
Either the level of abstraction has to be so high as to remove any individuality of the technology, or a method has to be determined that allows the technology to be represented by a unitless score. Our scoring algorithm begins by converting the projected technology metrics into unitless scores.

For clarity, we will illustrate with a simplified example: Disposal of Human Waste (a perhaps distasteful, but essential, component of any human exploration). It has two metrics. Each metric is scored separately and the results are averaged. We begin with the metrics that the technologist provided:

Table 1: Disposal of Human Waste Technology

  Metric                  SOA          Need
  Time Waste Contained    0.25 years   200 years
  Stored Volume Density   70 kg/m3     700 kg/m3

To make the score unitless, we divide the need by the state of the art:

Table 2: Calculations #1

  Metric                  Division   Result
  Time Waste Contained    200/0.25   800
  Stored Volume Density   700/70     10

There is often a large difference in scores, some of them so large that they are unintelligible. To make these results more intuitive, we take the log2 of the score. This is then a measure of how many times a technology's performance needs to double to reach the need:

Table 3: Calculations #2

  Metric                  Log         Result
  Time Waste Contained    log2(800)   9.64
  Stored Volume Density   log2(10)    3.32

The scores are then averaged. Averages are taken instead of sums to keep technologies with many metrics from scoring arbitrarily high. Finally, the score is multiplied by the percentage chance of success to arrive at an expected value for the score:

Table 4: Final Calculation

  Average Score   % Success   Expected Value
  6.48            90%         5.83

Now the unitless score can be seen as an estimate of the expected benefit of this technology to a single mission. If this technology were needed for another mission, a score would be calculated for that mission's needs, and the two scores added. The average technology score for the 47 CRAI technologies using this method is approximately 1.56 points.

With a score and a cost, a benefit-cost ratio can be obtained. This ratio is calculated for each technology under consideration. A sample budgetary level for technology development across NASA is assumed, and a simple "grab-bag" optimization is performed. The sample budget level is then increased and the optimization run again. By looking across a wide range of budgets, trends can be seen.

Exploration Mission Analysis

Similar to the CRAI analysis, the Exploration Mission analysis performed at JPL focused on the technology needs of a set of missions. Here, we examine these missions:

1) Mars Sample Return (MSR)
2) Astrobiology Field Laboratory (AFL)
3) Mars Science Laboratory (MSL)
4) Lunar Precursor Mission (LPM)

Other missions were considered in addition to these four, including the Jupiter Icy Moon Orbiter (JIMO) and a Lunar Sample Return mission. However, the JIMO office was still performing technology trades and thus could not provide information, and the Lunar Sample Return mission required only a few minor new technologies, making it less interesting for this study.

Each mission required a number of new technologies: MSR required 15, AFL 17, MSL 12, and LPM 8, for a total of 52 technologies to analyze.

Exploration Analysis

The same scoring algorithm used to analyze the CRAI data was used here, with the one modification that probabilities of success were not assessable for all technologies; it was therefore assumed that each technology was comparable in this parameter.

CRAI Results

Figure 1 presents technology-investment recommendations as a function of total resources available. The decision maker can choose to fund at any level and examine the portfolio for that budget. For instance, if the technology budget for the next 10 years were $300 million, approximately $60 million should be invested in Comm and Info Systems, approximately $30 million in Space Utilities and Power, approximately $110 million in Human Support Systems, approximately $70 million in Automation and Robotics, and approximately $20 million in In-Space Transportation, for a goal of maximizing total technological improvement while moving towards enabling the three CRAI design reference missions. (The other areas do not appear in this graph because of lack of data.)

[Figure: recommended investment ($M) by technology area vs. budget ($M); legend includes 2.9 -- Launch Access, 2.8 -- Crew Mobility, and 2.7 -- Structures and Materials]

Figure 1 -- Investment Recommendations for Different Technology Areas as a Function of Resources Available

One of the more interesting results of this analysis is demonstrated clearly by the "Comm and Info Systems" (referred to as 2.1 from now on) portion of this graph. As the potential budget increases from $25 million to $100 million, the optimal portfolio increases the budget of 2.1. However, at $125 million, the amount of money given to 2.1 actually decreases slightly. This displacement is due to a more expensive technology, which has a higher benefit/cost score, entering the portfolio. In other words, the budget has increased to a size where a technology that costs more, but has a higher benefit/cost ratio, can enter the optimal portfolio. This drop in funding in the 2.1 area occurs again at higher budget levels, and occurs in other areas at various times.
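The scoring procedure of Tables 1-4 and the "grab-bag" budget sweep behind Figure 1 can be sketched together. This is a minimal illustration under stated assumptions: the technology list is hypothetical (except the Waste Disposal entry, which reproduces the numbers from Tables 1-4), and the grab-bag step is interpreted here as a greedy walk in descending benefit/cost order, which may differ from the actual CRAI implementation.

```python
from math import log2

# Hypothetical technology data sheets: (SOA, need) metric pairs, probability
# of success, and development cost in $M. Only "Waste Disposal" uses the
# actual numbers from Tables 1-4; the rest are invented for illustration.
TECHS = {
    "Waste Disposal": {"metrics": [(0.25, 200), (70, 700)], "pos": 0.90, "cost": 10},
    "Ka-Band TWT":    {"metrics": [(40, 100)],              "pos": 0.80, "cost": 2},
    "Water Recovery": {"metrics": [(2, 300)],               "pos": 0.70, "cost": 15},
    "Optical Comm":   {"metrics": [(1, 500)],               "pos": 0.50, "cost": 60},
}

def expected_score(tech):
    """Average log2(need / SOA) over the metrics (doublings required to reach
    the need), multiplied by the probability of success. A need below the SOA
    yields a negative contribution."""
    doublings = [log2(need / soa) for soa, need in tech["metrics"]]
    return tech["pos"] * sum(doublings) / len(doublings)

def grab_bag(budget):
    """Greedy 'grab-bag' selection: walk the technologies in descending
    benefit/cost order, funding each one that still fits the remaining budget."""
    order = sorted(TECHS,
                   key=lambda n: expected_score(TECHS[n]) / TECHS[n]["cost"],
                   reverse=True)
    chosen, remaining = [], budget
    for name in order:
        if TECHS[name]["cost"] <= remaining:
            chosen.append(name)
            remaining -= TECHS[name]["cost"]
    return chosen

print(round(expected_score(TECHS["Waste Disposal"]), 2))  # -> 5.83, as in Table 4

# Sweep the sample budget upward and watch the portfolio change:
for budget in (5, 25, 60, 90):
    print(budget, grab_bag(budget))
```

Because a technology that does not fit is skipped and the walk continues, an expensive technology with a high benefit/cost ratio can enter only at larger budgets; when it does, it consumes budget that previously went to cheaper technologies. This is the displacement mechanism described above for the 2.1 curve in Figure 1.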
Ordering

Other factors beyond the particular budget investments were also of interest. These included: 1) the order in which technology areas came into the portfolio, and 2) the order in which the technology areas "saturated," i.e., when every technology that could be chosen was chosen.

The order in which technology areas first come into the portfolio shows us where the "low hanging fruit" is, i.e., where the highest return-per-dollar technologies are. The order of the five areas that had technology-requirement matches was:

1) 2.1 – Comm and Info Systems
2) 2.3 – Human Support Systems
3) 2.4 – Automation and Robotics
4) 2.2 – Space Utilities and Power
5) 2.5 – In-Space Transportation

Primarily, this order is because there are medium-scoring, more affordable technologies in the Comm and Info Systems and Human Support Systems categories. For instance, "High and Low Data Rate Coding Systems and Data Compression" has a cost of only $2M, so it is funded early on. Its score of 3.3 places it well above the average of 1.56.

The order of saturation is different, however. An area will saturate when there is enough budget to enable every technology within the area, i.e., when the technology that has the worst benefit-to-cost ratio is funded.

…likely due to an error of communication between the technologist and the analyst, since what the architecture states as a need is actually met by the state-of-the-art system!):

Table 5: Negative Score

  Regenerative Fuel Cell Systems   SOA    Architecture #1   OASIS
  W-h/kg                           1880   400               1000

Since the need is significantly less than the SOA, the scoring algorithm produces a negative score. When combined with the positive score of the technology's other metric, this gives an average score of 0.08. It seems clear that only the positive-scoring metric should have been submitted. This demonstrates the importance of iteration and communication in such a large project.

CRAI and Supplemental Data

As the data collected was sparse, we decided to supplement it with the JPL Exploration study (referred to from here on as the Pilot). Eleven more technology areas were added, with 156 new technologies, for a total of 203 technologies arranged into 16 areas. The new technologies were matched to requirements from future NASA missions.

Investment Strategies
The order of saturation is: …

[Figure: ECS Technologies, Small CRAI and Pilot Data -- recommended investment ($M) vs. budget ($M); series include Pressure Vessel, Thermal Control, Autonomous Robotics, and 2.1 -- Comm and Info Systems]

…order of saturation does change. The order is …

[Figure: recommended investment ($M) vs. budget ($M); series include Autonomy: Long Range Traverse and Navigation; DoE RTGs; Mars Oxidant and Radical Detector (Electron Paramagnetic Resonance); Confocal Microscope with Nested Raman; XRD/XRF; Fe-NMR; Color Hand Lens/Microscope (CHAMP); Laser Induced Breakdown Spectroscopy; Biological Assays; MEMS Avionics; Impact Energy Absorber; and TPS Design and Testing]
…enabled. The democratic approach is not the method for optimizing enabled missions.

[Figure 5: recommended investment ($M) vs. budget ($M) for the MSR, AFL, MSL, and LPM missions]

Since the democratic approach does a poor job of enabling missions at lower budget levels, we can see whether we can do better with the mission-enabling approach. Here, the analysis is run on each mission's technology suite rather than on each individual technology, under which a mission might not have its full technology suite until large budget levels are reached.

[Figure: recommended investment ($M) vs. budget ($M) for the MSR, AFL, MSL, and LPM missions]

Figure 6 – Missions Enabled via the Mission Enabling Method

The biggest difference between Figure 5 and Figure 6 is that the mission-enabling …

Sensitivity/Robustness Analysis

Figure 7 – Technology Competition Border

In Figure 7, the horizontal axis represents each of the 52 technologies. The technologies are ordered with the highest score/cost ratio …

Once all of these levels are run, there is an optimal portfolio for each budgetary level. This represents the recommended investment under the proposed conditions stated earlier. But how sensitive is a portfolio
recommendation with respect to uncertainties regarding the capability requirements and the characterization of technology improvement?

To measure the robustness of inclusion for each technology, a sensitivity-type analysis was run. For each budgetary level, every technology's score and cost were adjusted by a random amount from -5 to 5% and the optimization was run. This was repeated 200 times for each budgetary level.

The Monte Carlo robustness/sensitivity runs showed several technologies at each budgetary level that did not switch in and out. These technologies were different at each budgetary level. This is understood when examining Figure 7: once a technology has passed a certain level, it is always selected.

The technologies that shift in and out of the optimal technology portfolio are more interesting. The following graph will be used for illustration:

Figure 8 -- Estimation of Robustness

The boxes each represent a technology and its uncertainty bounds. The curved line is imagined to be the optimal portfolio border. Its shape is unknown and changes with each analysis run, but is drawn here for reference. The red boxes have too high a cost for their score, so they are not included in the optimal portfolio. The green boxes are within the border; they have a lower cost-benefit ratio and are included. The yellow boxes are on the line; depending on their score and cost, they will or will not be included in the portfolio. By the number of times they are chosen in the 200 runs, we can estimate how close to the line the center of their box is, a measurement of how robust the technology is within that portfolio.

Conclusions

This work has demonstrated that a portfolio analysis approach applicable to strategic decision-making for the Exploration Mission is feasible. This process:
– Is objective, traceable, quantitative, and repeatable
– Is capable of enabling a strategy-to-tactics approach
– Provides a method to determine the robustness of results to alternative data sets and policy preferences through a systematic sensitivity analysis

Primarily due to the lack of data, and the lack of verification of the data we had, we are unable to say that the graphs above actually reflect optimal portfolios. However, several things became clear through analysis of the data:

Certain categories were consistently chosen by the algorithm at lower budgets than others, even with the Monte Carlo analysis. An example of this is the 2.1 – Comm and Info Systems area. This was due to lower-cost or higher-value technologies being part of the category. These technologies tend to be chosen first, thus the category received funding at a lower budgetary level.

Certain categories were consistently "saturated" first, meaning that all of the possible technologies were chosen by the optimization routine first. What stood out in these categories was that there were no low-scoring, high-cost technologies. An example of this is 2.4 – Automation & Robotics. This area's technologies are consistently used by
multiple missions, giving many of them relatively high scores.

Certain technologies were always chosen, regardless of budgetary levels, technology mix, or sensitivity/robustness runs. These are robust technologies, and they were most often technologies used by a number of missions. A few examples:
a. 2.1: Ka-Band TWT 100 to 250W – used by multiple missions, low cost
b. 2.3: Water Recovery From Waste – very high score, fairly low cost
c. 2.4: End-Effector Placement – relatively low cost

Certain technologies were never chosen, most likely due to data collection errors. Many of these had negative scores, showing us that even simple data collection becomes difficult when the data collection effort is scaled to an agency as large as NASA. A few examples are provided here:
a) 2.5: LOX/L-CH4 OMS Engine – negative score, the SOA is sufficient
b) 2.2: Stirling Reactor – negative score, the state of the art is sufficient

Lessons Learned

There are several recommendations that can be made to facilitate this process if it is attempted in the future:

Begin with a web-based data collection system. It is believed that this will make the data entry job easier for the technologists. This type of system also eases transfer of the data into a spreadsheet.

Block leads should spend the time up front to create a hierarchy showing all of the technologies that they hope to collect data for. This will show alternative technology pairs, technology pairs that should be enabled simultaneously, and any dependencies that might exist.

Data collection should be done early to give enough time for a thorough analysis. The analysis structure can be created early on with preliminary data, and all analysis routines can be constructed with preliminary data as well. Preliminary results, while not believable, can be used to verify that the structure has been written correctly and that the technologists understand how their data will be used.

The CRAI effort has shown that quantitative data collection and analytical decision making can, in fact, be done.

References

1. G. Rodriguez and C.R. Weisbin, "A New Method to Evaluate Human-Robot System Performance," special issue of Journal on Autonomous Robots, Kluwer Publishers, 14, 165-178, March 2003.
2. G. Rodriguez and C.R. Weisbin, "A New Method for Evaluation of Human-Robot System Resiliency," JPL Publication 02-26, November 2002.
3. J.P. Chase, A. Elfes, W.P. Lincoln, and C.R. Weisbin, "Identifying Technology Investments for Future Space Missions," Space 2003 Conference and Exposition, AIAA, September 25, 2003, Long Beach, CA, U.S.A.
4. C.R. Weisbin, G. Rodriguez, A. Elfes, and J.H. Smith, "Toward a Systematic Approach for Selection of NASA Technology Portfolios," Wiley InterScience, Sept 2004.
5. "Mars Reference Mission," http://www_sn.jsc.nasa.gov/marsref/contents.html
6. "Orbital Aggregation & Space Infrastructure Systems," http://rasc.larc.nasa.gov/rasc_new/HPM/OASISEXEC_97.pdf
7. "JSC Architecture 1" -- NASA Exploration Team (NEXT) Design Reference Missions Summary, July 11, 2002.
Author Biographies:

Jason Derleth is a systems engineer at the Jet Propulsion Laboratory (JPL) of the California Institute of Technology. He received an SM degree in Aeronautical and Astronautical Engineering from the Massachusetts Institute of Technology in 2003, and a BA in the Liberal Arts from St. John's College in Annapolis, Maryland, in 2000, where he received awards for writing and excellence in the arts as demonstrated by his hand-carved cello. Jason's current research interests include new approaches to complex systems analysis; technology portfolio and mission design optimization; and incorporating such understanding into optimization tools.

…of autonomous control systems flown in planetary missions, including the Viking, Voyager, and Galileo missions. Since the mid-1980's, he has been involved in research for NASA research and technology programs. His research interests include analysis in hierarchical systems, estimation theory, nonlinear multi-body system control, and control for autonomous systems.

Joe Mrozinski is a systems engineer at the NASA Jet Propulsion Laboratory, currently involved with technology assessment and mission trade models and tools. Joe attended the University of Michigan, where he earned a BS in Aerospace Engineering in 2002, a BA in Philosophy in 2003, and a Masters in Space Systems Engineering in 2004.