Artificial Intelligence For Small Satellites Mission Autonomy
By
Lorenzo Feruglio
Supervisor:
Prof. S. Corpino
Politecnico di Torino
2017
Declaration
Lorenzo Feruglio
2017
Thank you
Acknowledgment
I would like to acknowledge and thank a great number of people: not everyone can be
included here, but I’m sure the people I would like to thank already know I’m grateful to
them.
Valentina, the road will be long, but you already gave me the spark to start what I’ve
always wanted to do. Thank you my little strawberry.
My supervisor, Sabrina, for giving me the freedom to explore, even if the navigation
was in uncharted territories. That has been a nice roaming, and was only the prelude to
something bigger!
My team, in general, for sharing with me years (from 2009!!) of successful and
unsuccessful trials and errors, and motivating research.
Raffaele, Fabio, Gerard, Fabrizio, for spending time on a bunch of electronics boards
for so many years: it was worth the ride!
My mentor in the US, Alessandra, for giving me the chance to visit a wonderful
research centre, and for the time spent working together there.
The two satellites that flew in orbit, one in 2012 (e-st@r-I) and the other in 2016
(e-st@r-II), which is still there: you gave me huge chances to brag about the amazing things
I’ve had the joy to be part of. Eh, my code has been in orbit twice already!
Loris and Giorgio for starting the new adventure together and believing in this.
Loris, Martina, Raffaele, Daniele, Christian, Christopher and Pietro for making the
workplace a lovely place to spend time at. I love you guys ;)
Lastly, my other dear friends, Stefano, Francesco, Paolo, Gabriele: I’ve shared
so much with you and I have no intention of stopping.
Abstract
The present work has been carried out to address the issue of operations for
small satellite missions. The thesis presents research, carried out at several
institutions (Politecnico di Torino, MIT, NASA JPL), aimed at improving the
autonomy level of space missions, and in particular of small satellites. The key
technology exploited in the research is Artificial Intelligence, a branch of computer
science that has gained enormous interest in research disciplines such as medicine,
security, image recognition and language processing, and is currently making its
way into space engineering as well. The thesis focuses on three topics, and three
related applications have been developed and are presented here: autonomous
operations by means of event detection algorithms, intelligent failure detection on
small satellite actuator systems, and decision-making support through intelligent
tradespace exploration during the preliminary design of space missions. The
Artificial Intelligence technologies explored are: Machine Learning, and in particular
Neural Networks; Knowledge-based Systems, and in particular Fuzzy Logics;
Evolutionary Algorithms, and in particular Genetic Algorithms. The thesis covers
the domain (small satellites), the technology (Artificial Intelligence) and the focus
(mission autonomy), and presents three case studies that demonstrate the feasibility
of employing Artificial Intelligence to enhance how missions are currently operated
and designed.
Contents
Contents
List of Figures
List of Tables
Notation
Introduction
1.1 Thesis Objectives
1.2 Thesis layout
Small Satellites
2.1 Small Satellites and smaller systems
2.2 CubeSats
2.2.1 Overview
2.2.2 The Standard
2.2.3 The Deployers
2.2.4 The Evolution
2.3 Application scenarios
2.3.1 Historic Small Satellite Missions
2.3.2 Interplanetary CubeSats
2.3.3 Earth Orbiting Constellations
2.3.4 Other relevant cases
Space Mission Software
3.1 Overview of Flight Software
3.1.1 Command and Data Handling
3.1.2 Other software
3.2 Overview of the Ground Software
3.2.1 Planning and Scheduling
3.2.2 Command Loading
3.2.3 Science Scheduling and Support
3.2.4 Failure Detection
3.2.5 Data Analysis, Calibration, and Processing
3.3 Flight vs Ground Design
Mission Autonomy
4.1 The problem of Autonomy
4.2 Key concepts: Automation, Autonomy, Autonomicity
4.3 Autonomy versus Costs of Missions
4.4 History of Autonomy Features
4.4.1 Up to 1980
4.4.2 1980-1990 Spacecraft
4.4.3 1990-2000
4.4.4 2000s
4.4.5 Current and Future Spacecraft
4.5 ESA Autonomy Design Guidelines
4.5.1 Nominal mission operations autonomy levels
4.5.2 Mission data management autonomy
4.5.3 Fault management mission autonomy
4.6 The need of Autonomy
4.6.1 Multi-spacecraft missions with respect to Monolithic missions
4.6.2 Big Distances, Low Data Rates and Communications Delays
4.6.3 Variable Ground Support
Artificial Intelligence
5.1 What is Artificial Intelligence
5.1.1 Definitions of Artificial Intelligence
5.1.2 The various philosophies of Artificial Intelligence
5.2 Brief history of Artificial Intelligence
5.3 The basis of Artificial Intelligence
5.4 State of the Art
5.4.1 What belongs to Artificial Intelligence
5.4.2 State of the Art by algorithm
5.4.3 State of the Art by application
5.4.4 State of the Art by Open Source products
5.5 Bringing Artificial Intelligence to space
5.5.1 Selection of CubeSat compatible algorithms
5.5.2 Mapping Artificial Intelligence algorithms to fields of application
5.6 Machine Learning algorithms and Neural Networks
5.6.1 Neural Networks Principles
5.6.2 Network architectures
5.6.3 Network training
5.7 Knowledge-based Engineering and Expert Systems
5.7.1 Knowledge Based Systems
5.7.2 Expert Systems
5.7.3 Fuzzy Logics
5.8 Evolutionary Algorithms
5.8.1 Genetic Algorithms
5.8.2 Design Suggestions and Improvements
Case Study: Event Detection with Neural Networks
6.1 Background
6.2 Reference Missions
6.2.1 Impact Mission
6.2.2 Comet Mission
6.3 Neural Network architecture selection
6.3.1 Impact Event detection network
6.3.2 Obtaining additional information from the detection
6.4 Event modelling
6.4.1 Asteroid impact modelling
6.4.2 Plume event modelling
6.5 Innovative Training Approach
6.5.1 Impact event training
6.5.2 Plume event training
6.6 Results
6.6.1 Performance considerations
6.6.2 Impact Event Detection
6.6.3 Plume event detection
6.6.4 Review
Case Study: Failure Detection with Expert Systems
7.1 Background
7.2 Reference Mission
7.3 Fuzzy Logics Application
7.3.1 Magnetic Torquer Modelling
7.4 Failure Modelling
7.5 Rules definition
7.5.1 Input and Output Variables and their membership functions
7.5.2 Rules
7.6 Results
7.6.1 Review
Case Study: Tradespace Exploration with Genetic Algorithms
8.1 Background
8.2 Reference Mission
8.3 Genetic Algorithms for Tradespace Exploration
8.3.1 Intelligent exploration
8.3.2 Population dynamics
8.4 Algorithm Design
8.4.1 Architecture
8.4.2 The Design Vector
8.4.3 The Algorithm
8.4.4 The optimizer
8.5 Results
8.5.1 Efficient tradespace exploration
8.5.2 Impact of the CubeSat database integration
8.5.3 Requirements compliance
8.5.4 Algorithm performance comparisons
8.5.5 Review
Conclusions
The domain: Small Satellites
The focus: Mission Autonomy
The technology: Artificial Intelligence
References
Appendix A – Interesting images acquired through the research
Appendix B - Asteroid modelling on blender®
List of Figures
Figure 1 Thesis structure. Bigger circle represents the main conceptual sections of the thesis.
Figure 2 Nano- and Microsatellite launch history and forecast at 2015 (1 - 50 kg). Credits SpaceWorks®
Figure 3 CubeSat spacecraft. The three winners of first ESA Fly Your Satellite! competition: OUFTI-1, e-st@r-II, AAUSAT-4. Credits ESA
Figure 4 CubeSat modularity is by design one of the key characteristics of the platform. Credits RadiusSpace
Figure 5 P-POD CubeSat deployer. Credits CalPoly
Figure 6 Nano- and Microsatellite launch history and forecast at 2017 (1 - 50 kg). Credits NanoSats.eu
Figure 7 Repartition of the CubeSat projects among organization types. Credits NanoSats.eu
Figure 8 Repartition of the CubeSats per developer nation. Credits NanoSats.eu
Figure 9 Nanosatellite types are not equally chosen by the mission designers. Credits NanoSats.eu
Figure 10 Nanosatellite operational status [16]. Credits NanoSats.eu
Figure 11 Evolution of mission lifetime. Credits DLR
Figure 12 Evolution of Bus and Payload Mass. Credits DLR
Figure 13 Artist rendering of two 3U CubeSats to Europa. Credits NASA JPL
Figure 14 Example of Operating System layers: core Flight Software. Credits NASA
Figure 15 Hubble Space Telescope. Credits NASA
Figure 16 History of Artificial Intelligence
Figure 17 Mapping between applications presented in the thesis and potential Artificial Intelligence algorithms to solve those problems
Figure 18 Machine Learning algorithm map, grouped by type. Credits Brownlee
Figure 19 Biological model of a neuron. Credits Rojas
Figure 20 The artificial model of a neuron, seen as a computing element. Credits Rojas
Figure 21 Definition of "knowledge" by Merriam-Webster English dictionary
Figure 22 Basic Knowledge Based System architecture
Figure 23 Examples of membership functions. Credits MathWorks
Figure 24 Example of MOM and COG methods for defuzzification
Figure 25 Non-comprehensive map of Evolutionary Algorithms and their variants
Figure 26 AIM and COPINS Design Reference Mission. Credits ESA
Figure 27 Jets emitted by comet 67P. Source ESA
Figure 28 Plumes emitted by Enceladus, a moon of Saturn. Source ESA
Figure 29 Feed-forward network architecture
Figure 30 Performance trends for networks with two hidden layers. Each dot represents a cluster of networks with 1 to 15 neurons in the first layer, and the X-axis number of neurons in the second layer.
Figure 31 Average performances with respect to network architecture. Each box plot is the result of 300 network initializations. Red line represents the median, box lines represent first and third quartiles. When no box is drawn, all data except the outliers are collapsed in the median value. Outliers represent samples that lie further than 1.5 times the interquartile range.
Figure 32 Asteroid modelling
Figure 33 Impact on the secondary body
Figure 34 Impact location, as seen from two different observation points
Figure 35 Asteroid modelling and plume event
Figure 36 Plume event simulated on the comet 67P
Figure 37 Directing the neuron training with pseudo-random colouring of the impact location: rectangular and truncated cone shapes
Figure 38 Trained network, input to hidden layer weights of a simple neuron. Darker pixels correspond to lower weights. Direct match between overlay and weights.
Figure 39 Trained network, input to hidden layer weights of a single neuron. Darker pixels correspond to lower weights. Interesting outcome of the training.
Figure 40 Examples of 67P images with an artificial plume overlay
Figure 41 Trained weights for the plume detection problem. The uniform grey areas around the centre of the image are a result of having removed constant lines throughout the dataset
Figure 42 Impact event from first capturing point
Figure 43 Impact event from second capturing point
Figure 44 Impact event, dark sky in the background. Continuous line: impact detected; dashed line: no detection
Figure 45 Impact event, main body in the background. Continuous line: impact detected; dashed line: no detection
Figure 46 Robustness to imprecisions in camera pointing. Continuous line: impact detected; dashed line: no detection
Figure 47 Robustness to imprecisions in camera pointing (cont.). Continuous line: impact detected; dashed line: no detection
Figure 48 Confusion matrices for one body and two bodies simulations with disturbances. Class 1 represents the impact event, Class 2 represents the no-impact images
Figure 49 Confusion matrix for plume event on comet 67P
Figure 50 Detection of plume events: real images taken by the Rosetta mission
Figure 51 Magnetic torquer example: coil configuration
Figure 52 Magnetic torquer example: rod configuration
Figure 53 Representation of the resultant force due to magnetic field interaction
Figure 54 Failure modelling, output of the control command to the MT. Clockwise, starting from top-left: float, lock-in-place, hard-over, loss of efficiency
Figure 55 Input variables and their membership functions
Figure 56 Output variables: de-fuzzification is not needed, as the failure identifier is an integer number
Figure 57 Membership function for the current input variable
Figure 58 Rule evaluation and failure detection: hard-over detected
Figure 59 Output of the Expert System: from the left, unfiltered, basic and medium filters applied. Each step represents a different value of the output variables, therefore represents a different failure detected
Figure 60 A few examples of utility function. Credits MIT
Figure 61 MATE logic flow
Figure 62 The implemented algorithm consists in combining Genetic Algorithms with Multi-Attribute Tradespace Exploration. Solution generation, requirements management and post-processing design and visualization are also performed.
Figure 63 Optimization process: evolution in time of the population. The improvement of the utility with the increase of the generation number is shown.
Figure 64 Solution spaces (100k points): from the left, cost-size-utility, size-utility and cost-utility plots
Figure 65 3U internal configuration
Figure 66 Solar panels configuration: example outputs
Figure 67 Plume events: detection of upper or lower direction
Figure 68 Plume events: detection of four directions
Figure 69 Plume events: detection of eight directions
Figure 70 Impact sequence on an asteroid, simulation with dark sky in the background
Figure 71 Impact sequence on an asteroid, simulation with main body in the background
Figure 72 Early experimentations with Neural Networks: cats are recognized as fully pictured asteroid. The picture right from the cat is wrongly classified.
Figure 73 Experimenting with the overlay training methodology described in the thesis
Figure 74 67P plume events as modelled on blender®
Figure 75 67P plume events as photographed by the Rosetta mission
Figure 76 Asteroid Modelling: creation of the starting cube
Figure 77 Asteroid Modelling: Subdivision Surface Modifier
Figure 78 Asteroid modelling: texture
Figure 79 Asteroid modelling: editing the geometry
Figure 80 Asteroid modelling: final result
Figure 81 Plume modelling parameters
List of Tables
Introduction
The last two decades have been interesting times for space missions and have
seen a dedicated effort among the major players in the space domain to design and
develop unmanned mission ideas and concepts that are more challenging than ever.
Thanks to the consistent successes of great interplanetary and Earth orbiting
missions, space engineering has been pushing the boundaries for constant
improvement, envisioning ever more daring missions. The traditional,
monolithic, high-performance spacecraft have not been the only category of space
systems influenced by this push in innovation and ambition: smaller satellites
have been gaining traction, thanks to newly developed technologies and to a
consolidation of the present state of the art. Small satellites, nanosatellites and
CubeSats are experiencing a renewed and never-before-seen level of interest and
exploitation, thanks to the game-changing characteristics that this type of space
system possesses. The effort to use smaller satellites is common and shared among
the major agencies and industries worldwide.
Since 2013, ESA has initiated seven different CubeSat projects for low-cost In-
Orbit Demonstration (IOD) of innovative miniaturized technologies within the
framework of Element 3 of the General Support Technology Programme (GSTP).
The first technology IOD CubeSat to be launched, a 3U CubeSat called GOMX-3,
was deployed from ISS in October 2015 and has been a complete success over its
1-year lifetime in Low Earth Orbit (LEO) until re-entry. Other IOD CubeSats in
development are planned for launch in 2017 and 2018. Additional design effort has
been spent at ESA to study the applicability of small satellites for interplanetary or
This being said, it is important to consider that the thesis developed is presented
as a conclusion of an Aerospace program: the thesis and the research work
performed did not have the objective of determining which, among the available
Artificial Intelligence algorithms, is the best candidate to perform the automation
of a certain type of operations. Instead, the research is meant to be considered as a
feasibility study for developing AI-based solutions to real operations problems.
Additional studies and comparisons will have to follow in order to assess whether
the proposed algorithms are in fact the best options to solve the problem addressed
in the case studies. Moreover, chosen candidates will have to be compared in future
works with other, non-AI-based algorithms.
Chapter 2
Chapter 3
Chapter 4 is the first of the two major chapters of this thesis, and introduces the
concept of Mission Autonomy. The chapter discusses the need to improve
Mission Autonomy on modern spacecraft, presents key terminology used
throughout the thesis, and discusses past practices and current standards of
autonomous operations on spacecraft. Finally, it presents the various issues that are
currently driving the development of Mission Autonomy: control and operations
management of large constellations, interplanetary missions performed with Small
Satellites, and unreliable ground support.
Chapter 5
(listing the open-source technologies, frameworks and software that can be used to
develop Artificial Intelligence applications, both for space and for other domains). The
chapter then focuses on three categories of algorithms that were used in the case
studies of the thesis: Machine Learning, and in particular Neural Networks; Expert
Systems, and in particular Fuzzy Logics; and finally Evolutionary Algorithms, in
particular Genetic Algorithms.
Chapter 6, 7 and 8: The Case Studies
Chapter 6, 7 and 8 present three case studies developed for this research:
respectively Event Detection, Failure Detection and Tradespace Exploration. The
Event Detection case is developed using Neural Networks: an algorithm and an
innovative training approach are presented, to be used during interplanetary missions
to a comet or asteroid, enabling detection of impact events or spontaneous gas
emissions. The Failure Detection case presents the use of Expert Systems to detect
failures on a common actuator of a Small Satellite, the Magnetic Torquer.
The presented approach performs considerably well on this category of
components, while remaining easily re-configurable to work on other types
of actuators or sensors of a spacecraft. Finally, the Tradespace Exploration case
presents the use of Genetic Algorithms to support decision makers (in this
application, mission designers) in performing a very fast analysis of all the possible
alternative solutions for the design of a specific mission.
Chapter 2
Small Satellites
Refer to 1.1 for the interpretation of “Small Satellite” throughout the presented
research. Different entities (being them space agencies or companies) implement
their own classification based on satellite dimension, and most of them overlap, as
summarized in Table 1 [4].
Despite the lack of fully standardized classification, the majority of the entities
in the space industry agree on common aspects:
From a historical point of view, since the launch of the first satellite (Sputnik-1,
launched in 1957 with a mass of 84 kg), the size trend of satellites has
moved towards bigger, more complex, redundant and better-performing systems.
This trend has been evident in several categories of satellites, from Earth
observation ones to geostationary telecommunication satellites. With the advent of
small satellites, and in particular of nano-satellites, the proportion between the
different categories of launched systems has shifted considerably. Market
predictions for nano- and micro-satellite launches show a sustained growth in the
number of satellites launched (Figure 2). Nonetheless, the small satellite trend is
clear and shows definite growth.
2.2 CubeSats
Categorizing satellites by mass is not the only way, as other means (such as mission
objectives, launch orbits and so on) could re-arrange the satellite database in other,
still meaningful ways. Oftentimes, categorizing satellite systems in different ways
produces overlapping representations of the satellite mission ecosystem. A well-
known example of this phenomenon is constituted by CubeSats (Figure 3).
Figure 3 CubeSat spacecraft. The three winners of first ESA Fly Your
Satellite! competition: OUFTI-1, e-st@r-II, AAUSAT-4. Credits ESA
CubeSats are a category of space systems developed according to an open-
source standard, proposed for the first time in 1999 by professors Jordi Puig-Suari
of California Polytechnic State University and Bob Twiggs of Stanford University
[17]. The objective behind the definition of the standard was to create a spacecraft
system concept that would not only allow university groups to rapidly design and
develop a small space project, but also would ensure that the chances of being
accepted on traditional launchers as a secondary payload were maximised. To reach
stable rates of acceptance among launch providers, the standard was designed to
cover not only the space system itself, but also its interfaces with the launcher, via
the design of a deployment system able to guarantee the safety of the other, often
more expensive and demanding, spacecraft on the launcher. In the initial
vision, a CubeSat would require less than $100,000 to build for each
One Unit (1U), while also allowing a short launch procurement
phase. In general, the time and cost of development can vary significantly
depending on several factors, among which the institution carrying out the project,
the budget and the quality level are the most influential. As introduced above,
CubeSat spacecraft span different size and mass categories, from the
nanosatellite class to the microsatellite class.
2.2.1 Overview
The CubeSat standard defines a set of key characteristics for the basic One Unit (1U) platform:
• Dimension of 10 x 10 x 10 cm
• Mass up to 1.33 kg (originally 1 kg until 2009)
• Modularity
• Standardized requirements
These characteristics are peculiar, and tend to be rigorously applied for each
spacecraft in the category. In some cases, depending on the market availability of
the deployers, some parameters are revised for each spacecraft, reducing the
standardization of the CubeSats.
Several characteristics are still applicable across the majority of the developed
and launched CubeSat projects.
Budget CubeSats are typically missions designed and developed with
budgets lower than those allocated to traditional systems, both for
educational projects and for commercial or scientific missions. Standardization,
simplicity in the design, reduced and more agile project management and quality
assurance efforts, agile approaches to testing, verification and validation, and
ultimately limited or no built-in redundancy are both causes and consequences of the
different approaches.
With the evolution of the market and the availability of CubeSat components,
CubeSat deployment technology has seen an increase in the number of available
options [4].
Poly-Picosatellite Orbital Deployer (P-POD) It is the original standardised
deployer, developed by California Polytechnic State University (Figure 5).
Tyvak Deployers The RailPOD Mk.II, NLAS Mk.II and 12U Dispenser are three
deployment solutions developed by Tyvak Inc. They are mass-optimized and support
CubeSats up to 12U.
The CubeSat ecosystem has undergone a distinct evolution in the last two
decades, and it is interesting to report the status of the technology as of March 2017
(Figure 6). Nanosatellites, despite some deviations, have kept up with the
forecasts made concerning the adoption of this disruptive technology.
The biggest contributions to this growth have come from private companies
and educational projects, as seen in Figure 7.
• Smaller platforms (1U) often involve lower costs and more launch
availability, therefore enabling more and more entities to develop their own
missions
• Increased sizes enable more complex and higher-performing platforms and
payloads. In this sense, 3U and 6U CubeSats are the preferred choice when
performance requirements are stringent.
Several missions could be cited among the set of historic Small Satellite missions.
Here, a few important ones are presented.
The first CubeSats were launched in 2003 from Plesetsk, Russia, and placed in a
sun-synchronous orbit. They were the Danish AAU CubeSat and DTUSat, the
Japanese XI-IV and CUTE-1, the Canadian CanX-1 and the US QuakeSat. CUTE-1,
after at least 9 operational years in orbit, is, together with other examples (such as
SwissCube), one of the longest-operating CubeSat missions ever deployed.
instrument and a black and white camera with a miniaturised telescope. Launched
up to 2017 are the PROBA-1, PROBA-2 and PROBA-V.
IPEX is a CubeSat developed and launched by NASA JPL with the objective
of validating autonomous operations for onboard instrument processing and
product generation. IPEX is the first, and probably only, CubeSat
implementing a state-of-the-art level of autonomy on board. In addition, the CubeSat
carried the Continuous Activity Scheduling, Planning, Execution and Replanning
(CASPER) software, to enable mission replanning [20], [21].
The evolution of space systems has progressed without interruption since the
Sputnik-1 satellite was launched. Improvements in technologies, in design
and fabrication processes, advancements in scientific research, and innovative
mission concepts enabled by successfully reaching previous mission objectives can
all be seen as reasons for the advancement in the performance of spacecraft
platforms and payloads. Some trends are interesting: mission lifetime has, on
average, increased through the years (Figure 11); spacecraft bus mass has increased,
while payload mass has remained constant (Figure 12). In general, the increased
bus mass is connected to higher requirements for mission lifetime, radiation
shielding and/or redundancies integrated in the platform. When considering the
trends of the various subsystem technologies, the trend is reversed: newer
subsystems perform better and at a lower (normalized) mass with respect
to older counterparts [22].
AIM mission and its CubeSats to Didymos Binary Asteroid (cancelled) [23]:
the spacecraft would have released up to 6U total of CubeSats in situ at the Didymos
asteroid, to perform technological and/or scientific objectives, either by supporting
the main mission or by fulfilling additional goals.
MarCO CubeSats, which will be released by the InSight mission to Mars during
the interplanetary transfer from the Earth. The objective of the CubeSats will be to
monitor and record the Entry, Descent and Landing (EDL) telemetry of the InSight
probe, and to relay that information back to Earth. Given their high arrival
velocity, the MarCO CubeSats will not be inserted into Martian orbit [25].
understanding of asteroidal environments and will yield key information for future
human asteroid explorers [27].
Lunar Flashlight will look for ice deposits and identify locations where
resources may be extracted from the lunar surface. It will use lasers to
illuminate permanently shadowed craters at the lunar poles; a
spectrometer will then observe the reflected light to measure the surface water ice.
The EM-1 mission (and the SLS in general) will deploy 13 6U CubeSats.
Another fundamental aspect of the CubeSat ecosystem is that it enables the design
and deployment of mission architectures involving a great number of spacecraft for
a considerably smaller budget compared to traditional assets and
constellations. In addition, the availability of components enables mass-production
strategies that are currently not viable when dealing with bigger systems1.
Interesting cases of CubeSat constellations, which are currently disrupting the
spacecraft and space data markets, are presented here.
PlanetLabs is a constellation of CubeSat to be deployed to LEO, designed for
Earth Observation (EO). It is constituted of several 3U CubeSats that are usually
deployed on piggyback launches [28]. The company exploits the great scalability
of the CubeSat technology to perform unprecedented EO, with over a hundred
satellites in operations. In 2017, the company performed a record-breaking launch
of 88 satellites [29].
1 Possibly the sole case of a medium-sized satellite constellation up to 2017 is the OneWeb constellation.
OneWeb is one of the most ambitious constellation projects currently
under development. It features a total of 648 operational satellites in 18 orbital
planes at 1200 km of altitude, with each small satellite having a mass between 175
and 200 kg. The mission objective is to provide broadband internet connectivity
with worldwide coverage [33].
These concepts do not always adhere to the Small Satellite or CubeSat standards,
but they are nonetheless interesting as they share a similar philosophy: reducing size to
enable new mission architectures.
Mars Helicopter, a concept developed by NASA JPL, highly resembles the
CubeSat form factor. The helicopter would be used to pinpoint interesting targets
on the Martian surface, effectively tripling the rover's driving speed [34].
• Orbit management
• Propulsion
• Communication management
• Electrical power management
• Thermal management
It has to be noted that the different algorithms constituting the FSW might be
physically located in different subsystems: the AOCS software might run on the
AOCS board, the COMSYS software on the COMSYS board, and so on, depending
on the location of the different microprocessors or microcontrollers.
The C&DH for typical small satellite missions, especially for CubeSats, traditionally
features standardized characteristics. Among these is an Operating System (OS), which
has the objective of handling the low-level interfaces with the typical components of a
processing board: storage, RAM and ROM, interrupts, and so on. Typical OSs for
Small Satellites and CubeSats are Linux, RTEMS, VxWorks, FreeRTOS and Salvo
[37]. The OS software is generally divided into layers (Figure 14). In order to
streamline the development process of Small Satellite projects, mission
developers are moving towards coding applications in the higher layers of the
architecture, leaving lower-level coding to the Original Equipment Manufacturer
(OEM). The C&DH middle to higher layers include decision-making algorithms,
time management, command processing, engineering and science data storage, and
higher-level communication functions. In general, the C&DH is the coordinating core
of all the on-board processing, apart from some localized data management.
Data is managed and stored on specific memories, which in the C&DH of Small
Satellites take the form of SD and microSD cards; these are now reaching very
promising levels of performance, storage capacity and reliability: extended
temperature ranges, radiation and magnetic field resistance.
Planning and Scheduling (P&S) is one of the most important tasks when operating
a space mission: the generation of a detailed, optimized timeline of desired spacecraft
activities. These activities can sometimes be based on complex modelling of the
spacecraft environment and of the expected behaviour: examples of this are the
Hubble Space Telescope and the Kepler mission. Once the schedule is defined, it is
uploaded to the spacecraft and executed in a time-tagged way. In general, the
definition of the activities is performed not only for the nominal path: alternate
branches for off-nominal conditions are also foreseen and generated. Interestingly,
the definition of the timeline of operations is a process as time-dependent as the
execution of the operations itself: in certain cases, the look-ahead period can reach
several months to one year. Historically, long-term operations definition is
performed to constrain the choice of medium-term to immediate operation
definition in given periods of the mission (Sun-Earth-spacecraft geometry is an
important factor [23]). On the medium term, events such as the South Atlantic
Anomaly entry/exit or similar events are accounted for. On the short term, the final
detailed schedule, to a precision of seconds, is defined using the most accurate
available data. The process of operation definition has traditionally been very
iterative. Considerable progress has been made with the intent of making this
process more flexible and efficient, yet some inefficiencies and complex modelling
are unavoidable.
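As a purely illustrative sketch (not taken from any real flight software; the command names, time tags and health check are hypothetical), the following Python fragment shows the basic idea of executing an uploaded, time-tagged schedule in which each activity carries a pre-generated off-nominal branch:

import time

# Uploaded schedule: (seconds from epoch, nominal command, off-nominal branch)
schedule = [
    (1.0, "SLEW_TO_TARGET",    "SAFE_HOLD"),
    (2.5, "INSTRUMENT_ON",     "INSTRUMENT_OFF"),
    (4.0, "START_OBSERVATION", "ABORT_OBSERVATION"),
]

def spacecraft_is_nominal():
    # Placeholder for on-board health checks (limit checking, FDIR flags, ...)
    return True

epoch = time.time()
for t_exec, nominal_cmd, off_nominal_cmd in schedule:
    # Wait until the time tag of the next activity is reached
    while time.time() - epoch < t_exec:
        time.sleep(0.05)
    # Select the pre-planned branch depending on the current spacecraft state
    cmd = nominal_cmd if spacecraft_is_nominal() else off_nominal_cmd
    print(f"t+{t_exec:4.1f} s: executing {cmd}")

The key design point illustrated here is that the on-board executor does not plan anything itself: it only selects among branches that were generated on the ground.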
Command loading is one of the fundamental functions that most, if not all,
spacecraft missions have performed at least once. In general, this activity has become
straightforward: it consists of converting the P&S outputs into specific commands
understandable by the spacecraft FSW. Automation is increasing in this domain.
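A minimal sketch of this conversion step is given below; the command database, opcodes and binary layout are hypothetical and only meant to illustrate the translation from planned activities to FSW-readable commands:

import struct

COMMAND_DATABASE = {           # activity name -> (opcode, expected argument count)
    "SLEW_TO_TARGET":    (0x10, 2),
    "START_OBSERVATION": (0x21, 1),
    "DOWNLINK_DATA":     (0x30, 0),
}

def load_commands(planned_activities):
    """Convert P&S output tuples (time, activity, args) into packed command words."""
    packed = []
    for t_exec, activity, args in planned_activities:
        opcode, n_args = COMMAND_DATABASE[activity]
        if len(args) != n_args:
            raise ValueError(f"{activity}: expected {n_args} argument(s)")
        # Simplified layout: 4-byte time tag, 1-byte opcode, 2-byte arguments
        packed.append(struct.pack(">IB" + "H" * n_args, int(t_exec), opcode, *args))
    return packed

uplink = load_commands([(1200, "SLEW_TO_TARGET", (45, 90)),
                        (1300, "START_OBSERVATION", (3,))])
print([frame.hex() for frame in uplink])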
In general, nearly all spacecraft engineering analysis and calibration functions have
been performed on the ground. These include attitude-sensor alignment and
polynomial calibrations, battery depth-of-discharge and state-of-charge analysis,
communications margin evaluations and so on. There does not seem to be a clear
cost difference between performing these functions on the ground or on board. In addition,
science data processing and calibration have been almost exclusively a ground-system
responsibility, for two main reasons: the limited on-board computational
capabilities of rad-hard processors, and a bias in the scientific community that
insisted on having all the scientific data downlinked to the ground. There is still a strong
opinion that science data might not be processed as thoroughly on board as on the
ground, and science data users often process the same data multiple times using
different algorithms, calibrations and so on, even years after the data were
downlinked.
It is still advisable to design missions with autonomy levels that do not force
the science users to rely on decisions taken only on board, but rather offer the option
to receive processed data or, alternatively, the complete set of acquired data.
spacecraft increasingly have the capability of taking advantage of the strengths
inherent in a real-time (RT) software system in direct contact with the flight hardware.
The most striking characteristics of the FSW with respect to its ground
operation counterpart are:
• Reaction times
• Completeness
• No delayed information
Only the FSW, located in situ on the spacecraft, can instantly
access flight HW measurements, process the information and act in real time.
On the other hand, previous approaches have assigned more importance to the
Ground Segment of a space mission, thanks to more powerful ground computers
that have allowed Mission Control to execute complex schedule optimization
algorithms using highly complex predictive models. Even if the computational
power of both ground-based systems and spacecraft is increasing, and the gap between
them is somewhat narrowing, potential for improvement also exists in the Ground Segment
[38], [39].
Chapter 4
Mission Autonomy
The present chapter presents and discusses Mission Autonomy and its
management. On-board autonomy management addresses all the aspects of the
functions performed by the spacecraft that give it the capability to fulfil mission
objectives (by performing certain operations) and to survive critical situations
without relying on ground segment intervention.
• Self-configuring
• Self-healing
• Self-optimizing
• Self-protecting
Table 3 How the three levels are defined among different entities

Intelligent Machine Design | DARPA/ISO's autonomic information assurance | Future Communication Paradigms | NASA's science mission | Self-directing and self-managing system potential
Reflection | Mission plane | Knowledge plane | Science | Autonomous
Routine | Cyber plane | Management/control plane | Mission | Self-aware
Reaction | Hardware plane | Data plane | Command sequence | Autonomic
This period saw the first efforts at standardizing FSW, and the appearance of the
first automatic actions performed by a spacecraft. In particular, the earliest efforts in
automating operations came with the HEAO series of spacecraft, with automatic
functions such as pointing control, limited failure detection, stored
commanding and telemetry generation. Additional commanding capabilities
included the now standard absolute-timed, relative-timed and conditional
commands. Limit checking as FDIR was also implemented, with automatic mode
transition to pre-programmed safe modes. On the Solar Maximum Mission (SMM),
an embryo of autonomous target identification and acquisition capability was
implemented, which would later be refined for the Hubble Space Telescope (HST).
SMM processing algorithms could detect solar flares and re-program the spacecraft
pointing to observe the phenomenon. This characteristic was also present in the
Orbiting Solar Observatory-8, launched in 1975: it could steer its payload platform
independently to perform observation of its targets.
The evolution of on-board pointing capabilities can be seen just by looking at
the pointing independence of the two spacecraft, HEAO-1 and HEAO-2: the first
one relied on attitude reference updates every twelve hours based on ground attitude
determination. The follow-on spacecraft, two years later, already possessed the
capability to compute its own attitude reference update, based on ground-supplied
guide-star reference information, a capability also implemented in SMM. HEAO-2
could, in addition, periodically go through a weekly target list.
The 1980s saw the launch of larger, more expensive and more sophisticated
spacecraft. Among these, some famous spacecraft such as the HST and the Compton
Gamma Ray Observatory (CGRO) were actually launched in the 1990s, but had been
scheduled to launch earlier.
HST featured automatic safe-mode options and improved FDIR checks, and the
first appearance of a "message-based" architecture between two processors, which
would coordinate when searching for a new observation target. Moreover, it has to be
noted that many of the advanced FDIR functions of the HST were added to the
spacecraft after launch, in response to problems experienced in flight.
4.4.3 1990-2000
4.4.4 2000s
Among the new capabilities implemented on spacecraft in the 2000s, true lost-in-
space capabilities can be highlighted, along with further improved model-based
failure detection. In general, the observed trend is moving towards the
implementation of science instruments (SIs) acting as spacecraft controllers themselves,
deciding autonomously the science schedule with respect to planned and unplanned
observations.
Additional experiments in autonomous formation flying have been performed.
Spacecraft under development (such as the James Webb Space Telescope) are
implementing advanced features such as on-board event-driven scheduling, with a
flexible implementation that allows the spacecraft to move through observation targets
as soon as they are available, without forcing any observation if anomalies or
unfavourable conditions appear.
Developments in spacecraft constellations and formation flying are currently
driving the effort in mission autonomy research. Another important driver is the
independence of SIs with respect to the spacecraft pointing. Finally, innovative, AI-
driven small spacecraft are being flown [20], [45].
During the execution of nominal mission operations, four levels of autonomy have
been defined:
• Execution mainly under real-time ground control
• Execution of pre-planned mission operations on-board
• Execution of adaptive mission operations on-board
• Execution of goal-oriented mission operations on-board
These autonomy levels and their features are summarized in the following
table.
E3 | Execution of adaptive mission operations on-board | Event-based autonomous operations; execution of on-board operations control procedures
Concerning mission data management, the following autonomy levels have been
defined:
• Essential mission data used for operational purposes can be stored on-
board
• All mission data can be stored on-board (science data and housekeeping
data)
D2 | Storage on-board of all mission data, i.e. the space segment is independent from the availability of the ground segment | As D1, plus storage and retrieval of all mission data
Failures are a fundamental aspect of each space mission, and the correct
management of expected and unexpected failures is often the line between a
successful mission and an unsuccessful one. Generally speaking, the standard approach
to the management of failures is Failure Detection, Isolation and Recovery
(FDIR). In this scope, failures are managed in the following way (a minimal
illustrative sketch of this loop is given after the list):
• They are detected (on board or on the ground) and reported to the
relevant subsystems/systems and to Mission Control
• They are isolated, that is, the propagation of the failure to other
components/subsystems/systems is inhibited
• The functions affected by the failure are recovered, to allow for mission
continuation
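The sketch below is purely illustrative; the telemetry parameters, limits and recovery actions are hypothetical and only meant to show the structure of the detect-isolate-recover loop based on simple limit checking:

LIMITS = {"battery_voltage": (6.5, 8.4), "obc_temperature": (-20.0, 60.0)}

def detect(telemetry):
    """Limit checking: return the parameters outside their allowed ranges."""
    return [name for name, value in telemetry.items()
            if not (LIMITS[name][0] <= value <= LIMITS[name][1])]

def isolate(failed):
    """Inhibit propagation, e.g. by switching off the affected loads."""
    for name in failed:
        print(f"isolating the subsystem associated with {name}")

def recover(failed):
    """Restore the affected function, e.g. switch to redundancy or a safe mode."""
    for name in failed:
        print(f"recovery action for {name}: enter safe mode / switch to redundant unit")

telemetry = {"battery_voltage": 6.1, "obc_temperature": 25.0}
failures = detect(telemetry)
if failures:
    isolate(failures)
    recover(failures)
    print("failure report queued for Mission Control")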
These levels are described in more detail in the following table.
Missions are currently being planned and proposed that consider tens or
hundreds of assets in the space segment. In order to avoid excessive operations
costs, the most promising way is to reduce the operators-to-spacecraft ratio.
An important conclusion can be drawn from the last statement: mission operations
design, and the operators themselves, need to work at a higher level of abstraction
and be able to monitor and control multiple spacecraft simultaneously.
Another key reason to implement advanced mission autonomy software is the fact
that, for certain types of missions, communication between Mission Control (MC) and the
spacecraft takes minutes, if not hours. In these architectures, mission risks
increase because the monitoring of the spacecraft cannot be performed in real time
(or near real time).
Similarly, another issue impedes the correct fulfilment of space
missions: for those missions whose objectives are to study randomly appearing
events (for example a comet plume, or a forest fire), the decision time of a human
operator is often too long to uplink correct observation commands to the spacecraft.
In this case the communication delays might be small, but decision-making delays
are added, and the result is still a poorly performing mission. Autonomy can play
an important role in these cases, because it enables real-time decision-making, so that
a corresponding action can be taken to observe the desired phenomenon. Challenges
in this application include the definition of rules to manage the observation
schedule, i.e. to understand whether it is more important to interrupt the current objective
(to perform the observation of the newly appeared event) or to ignore the event and
continue with the objective in place. An example of this feature is the Swift mission,
in which one of the instruments has software functions that determine whether a
new observation has high priority; if so, the spacecraft can be commanded to
perform the observation.
A highly autonomous spacecraft, instead, can operate without interruption for long periods of time, determining on its own the best strategies to acquire new data, and downlinking the stored data once a passage is available.
Additionally, there might be missions where complete autonomy is not the best solution, or where different mission phases require different levels of autonomy. In this scenario, adjustable autonomy can be implemented. The adjustment can be performed autonomously by the system, depending on the conditions, or on request by the MC, either to help the spacecraft accomplish its current objectives or to override the on-board intelligence and perform manual commanding. With adjustable autonomy, it is mandatory to have a well-designed Ground Segment and a robust operations management that work flawlessly with the on-board software.
Chapter 5
Artificial Intelligence
The definitions of this field of computer science are numerous: the field has evolved quickly through the years, and describing such a vast field with a univocal set of words is certainly open to opinions and different points of view. In the literature, eight typical definitions are accepted, each one carrying a slightly different meaning and emphasizing certain aspects of the field. An interesting table is provided in [47], and is presented here in its entirety:
to this training algorithm, and these successes contributed to creating the third distinct approach to the study and development of AI applications.
At last, the evolution of AI research separated into two distinct efforts: research on effective network architectures and algorithms, and research on precise modelling of biological neurons and their group architectures.
The latest directions of development of AI are towards an embrace of the scientific methodology that is the standard in other research fields. AI research must now undergo rigorous empirical experiments, and the results must be analysed statistically for their significance. Shared repositories of test data and code have made it possible to replicate experiments with ease.
This, coupled with refinements of the tools available to AI researchers (such as Bayesian networks and improved training algorithms), allowed AI algorithms to reach significant results in fields traditionally dominated by statistics, pattern recognition, machine learning and so on.
1995-present, towards Skynet
Huge successes in the various fields of AI have contributed to the affirmation of this branch of CS. Despite these successes, in recent years a particular research effort has regained momentum and is now expanding: the drive towards the “whole agent”. Furthermore, previously isolated fields of AI have now been joined together, comparing and sharing each other's results: it is a fact that sensory systems (vision, sonar and speech recognition) cannot deliver perfectly reliable information about the environment; for this reason, reasoning and planning systems must be able to handle uncertainty. In addition, another consequence of the agent perspective is that AI has been drawn into much closer contact with other fields, such as control theory and economics, that also deal with agents.
More exotic research directions (which, on the other hand, share similar intents with the initial efforts in AI research) are considering the emulation of human-level intelligence or, more in general, the development of an Artificial General Intelligence, which would implement a universal algorithm for learning and acting in any environment.
Finally, in recent years an important paradigm shift has begun to appear: thanks to the increased availability of data, scientists and researchers are becoming less concerned with the choice of the algorithm and more focused on the careful definition and construction of the datasets involved in the application. Examples of this can be found in
requirements are usually a great concern and execution times are often not practical. On a similar note, depth-first search suffers from similar issues, and both are non-optimal search methods. A decent solution is represented by iterative deepening search, which tries to combine the benefits of breadth-first and depth-first searches: this method is the preferred uninformed search method when the search space is large and the depth of the solution is not known. A more exotic option is bidirectional search, where two searches are performed, one from the root node and one backwards from the goal. An improvement over uninformed search is to perform informed searches, when possible. The improvement comes from the fact that evaluating the current state makes the exploration more efficient: best-first search is one example; its greedy variant prefers to expand the node that appears closest to the goal, on the assumption that it is the most likely to lead to a solution. The currently most widely known form of best-first search is A* search, which combines the cost to reach the node and the estimated cost to get from the node to the goal. Memory-bounded versions of the introduced algorithms exist as well (recursive best-first search and simplified memory-bounded A*).
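As a minimal illustration of these informed-search ideas, the following Python sketch implements A* on a small grid; the grid world, the unit step costs and the Manhattan-distance heuristic are illustrative assumptions, not part of the case studies presented later.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search: expand the node with the lowest f = g (cost so far) + h (estimated cost to goal)."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, step_cost in neighbors(node):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nxt, goal), new_g, nxt, path + [nxt]))
    return None, float("inf")

# Illustrative 4-connected grid with unit step costs and a Manhattan-distance heuristic
def grid_neighbors(cell, size=5):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            yield (nx, ny), 1

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
path, cost = a_star((0, 0), (4, 3), grid_neighbors, manhattan)
print(path, cost)
```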
survival-of-the-fittest law of nature. In particular, the search for an optimal solution is done by encoding each candidate solution as a single individual: the individuals then evolve through successive generations of the population, converging towards the optimal solution. Behaviours such as reproduction, mutation, parenthood, natural selection and elitism are defined and are essential for the success of the algorithm.
A note on these algorithms: the strategies can vary when we deal with problems in which the agent possesses sensors, and they differ between a fully observable world, a partially observable one, and a non-observable one.
Adversarial search
One of the key characteristics of Adversarial Search problems is that they deal with competitive environments, such as games. Most of the time, real-life games are quite difficult, if not impossible, to solve completely. One of the most important reasons is the dimension of the problem: the average branching factor of chess is 35, with games that can reach 50 moves per player. In such cases, computing the optimal move is unfeasible. Several techniques exist to facilitate the decision during games, such as pruning, which allows the algorithm to ignore portions of the search tree, evaluation functions, which approximate the true utility of a state without a complete search, and strategies to deal with imperfect information. Famous algorithms in this case are minimax for decision making, alpha-beta pruning for removing large parts of a search tree, and, in some cases, table lookup for game states whose solutions are known a priori thanks to human knowledge and experience. Even in this case, distinctions are possible when we consider games ruled by chance or not, and games where the information is perfect or imperfect.
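A compact sketch of minimax with alpha-beta pruning is given below; the toy game tree, its encoding as nested lists and the leaf utilities are invented for illustration only.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Minimax with alpha-beta pruning: branches that cannot affect the final decision are skipped."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:      # beta cut-off: the minimizing player will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, moves, evaluate))
            beta = min(beta, value)
            if beta <= alpha:      # alpha cut-off
                break
        return value

# Toy game tree encoded as nested lists: leaves are utilities, inner nodes are lists of children
moves = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alphabeta(tree, 4, float("-inf"), float("inf"), True, moves, evaluate))
```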
constraints, common ones being node, arc, path and k-consistency. Searches can be performed by tracking backwards (backtracking), and methods exist to choose the best variable to explore during a backtracking search.
Logical agents are a category of agents that use formal logic to take decisions
and perform actions in their world. Logic is the key element in the behaviour of the
agent, and is characterized by the presence of a syntax, semantics, knowledge-base,
and an inference procedure.
Introducing the concept of time when dealing with uncertain reasoning requires the introduction of more versatile reasoning tools, which have been widely used in the last decades: Markov processes and models. Inference models need to be updated to take into account dynamic environments. Powerful algorithms to consider are also Kalman Filters, Dynamic Bayesian Networks, particle filtering and more.
Learning
The concept of learning is one of the fundamental aspects of AI, and defines a
category of algorithms that are known as Machine Learning. Learning means that
the performance of an agent will improve on future actions after observing the
surrounding world. There are three key reasons why a developer would prefer learning algorithms over hard-coded software: first, not every situation the agent will encounter can be predicted by the designer; second, changes over time are difficult to predict; third, for certain problems, the direct implementation of a solver algorithm is too hard and automatic learning represents the only viable solution to the implementation of an agent. In general, four topics are shared among different
learning algorithms and problems: there is a component of an algorithm to be
improved; the agent possesses prior knowledge; data is represented in a specific
way; a feedback action provides guidance during learning. When a specific
algorithm needs to learn from its surrounding world, three main learning algorithms
are available to the designer, and will be discussed later: reinforcement learning,
supervised learning, unsupervised learning.
Among the algorithms that can learn from examples, decision trees can be cited.
When learning from examples, one question appears soon: when an algorithm
learns something, how can we be sure that our learning algorithm has produced a
hypothesis that will predict the correct value for previously unseen inputs? How
many elements in the dataset do we need to get a good hypothesis? What hypothesis
space should we use? An interesting principle of computational learning theory
states as follows: “any hypothesis that is seriously wrong will almost certainly be
found out with high probability after a small number of examples, because it will
make an incorrect prediction. Thus, any hypothesis that is consistent with a
sufficiently large set of training examples is unlikely to be seriously wrong: that is,
it must be probably approximately correct”. Algorithms built on this principle are
called Probably Approximately Correct Learning (PAC-learning). Regression and classification algorithms (linear regression, linear classifiers with a hard threshold, and so on) can be considered in this framework.
Several other methods consider prior knowledge during learning: in this case,
the effects of knowledge representation and learning are joined together. Current-
best-hypothesis search and Least-commitment search are examples of these
algorithms. Efforts are also being spent in developing methodologies to extract
general knowledge from specific examples. Several different types of learning have
been developed, including Explanation-based learning, Relevance-based learning,
Knowledge-based inductive learning, Inductive logic programming.
Reinforcement learning
Machine reading holds a predominant spot in this section: the intent is to build a system that extracts knowledge from written text and works with no human input of any kind: a system that could define and fill in its own database. In general, it is necessary to define not only a system to parse and grasp the knowledge, but also to explore the actual human behaviour. Several algorithms have been used so far for the different problems related to this task: a treebank is useful for learning a new grammar, the CYK algorithm can parse sentences in a context-free language, and lexicalized PCFGs allow us to represent connections between words that are more frequent with respect to others. The same algorithms and concepts can also be applied to speech recognition problems.
Perception
Complex systems
Fault Detection, Isolation and Recovery – Fault Detection systems have been
developed using several categories of AI algorithms, ranging from model-based
applications (which can be considered on the border of AI), to Fuzzy Logics and
Neural Networks [48].
Game playing – Gaming has always been a field on which AI research has focused, since the earliest decades of the discipline. Traditionally, each game saw the development of a specifically tailored algorithm, and the common long-term goal has always been to challenge and beat the world's top players in each discipline. IBM's DEEP BLUE set a keystone event in the game of chess, beating world champion Garry Kasparov in an exhibition match. Games such as Scrabble, Go and Jeopardy all saw their top players beaten by AI in the following years, with Go and Jeopardy being among the most challenging efforts because of game complexity and the size of the space of possibilities during the game.
Robotic vehicles – The SoA for civilian robotic vehicles (cars, trucks) has
considerably improved in the last decade. Several car manufacturers are now testing
their autonomous vehicles on roads open to normal civilian traffic (Tesla, Google,
Volvo cars, Scania trucks) [50]–[54]. Concerning non-civilian robotic vehicles,
companies are developing interesting applications for quadruped robots (Boston
Dynamics, DARPA). Excellent examples of applications are also to be found in
interplanetary robotics systems, such as NASA Mars Science Laboratory [55].
Deep Belief Networks – Matlab code for training Deep Belief Networks.
The software list does not cover all the development and product efforts in the different available languages. Updated information can be found at [56].
Event Detection: Deep Learning, Pattern Recognition, Support Vector Machines, ...
Tradespace Exploration: Genetic Algorithms, Simulated Annealing, Normal-Boundary Intersections, ...
Some methods could be placed in more than one category but, in general, grouping the different methods by similarity in terms of functionality is one of the most effective approaches. The map presented is not meant to be exhaustive.
The chosen family of algorithms to perform event detection on a spacecraft has been Artificial Neural Networks. Before digging into the characteristics and different types of algorithms that fall under the ANN category, it is important to state some of the characteristics that made ANNs a good candidate for this type of problem:
• Generalization: a trained network can provide good results even on
never-before-seen inputs, provided that they are similar to those the
network has been trained on
• Experience: a network, similarly to human behaviours, is able to learn
thanks to the knowledge that is fed into it
• Ability to deal with linear and non-linear functions, with multi-variable capabilities
• Robustness in presence of noise, disturbances and degradation.
Generally, the performance of a network degrades gracefully under
adverse operating conditions
• Performance can be better than that of a human counterpart, even if the knowledge with which the network is trained comes from the human expert
As with other types of AI, training and execution of ANN does not follow
traditional approaches, and the definition of the application behaviour is not
implemented through conventional programming.
ANNs have been introduced with the intent of modelling the processing capabilities of biological nervous systems: millions of interconnected cells (the neurons), each one being a complex machine in which incoming signals are collected, processed and routed in several directions. From a computational speed point of view, the common neuron is thousands of times slower than a state-of-the-art electronic logic gate: despite this, the human brain is able to achieve a complexity of problem solving that is still unmatched by computers.
Figure 20 shows the structure of an artificial neuron with n inputs. Each input channel i can transmit a real value x_i. A primitive function f computed in the body of the abstract neuron can be selected arbitrarily. Usually the input channels are associated with a weight, meaning that the incoming information is multiplied by a weight that defines how “important” that information is compared to the others. The collected signals are aggregated at the neuron and f is evaluated. ANNs are in this sense networks of primitive functions, and different models of ANNs differ mainly in the assumptions about the functions used, the pattern of connection, and the information transmission timing. The aggregating function g is usually the addition.
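A minimal sketch of this abstract neuron is reported below: the aggregation g is the weighted sum and f is an activation function (a sigmoid is assumed here; the weights and input values are made up for illustration).

```python
import math

def neuron(inputs, weights, bias=0.0, f=lambda s: 1.0 / (1.0 + math.exp(-s))):
    """Abstract neuron: g is the weighted sum of the inputs, f is the activation (here a sigmoid)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return f(s)

# Illustrative call: three input channels with different "importance" (weights)
print(neuron([0.2, 0.9, 0.5], [0.8, -0.4, 0.3]))
```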
Feedforward The earliest appearance of ANN, and the network with the most
basic behaviour: the information moves only in the forward direction, from the
input nodes, through the hidden nodes, to the output ones. There are no cycles or
loops in the network.
Radial Bases Functions are a type of ANNs that uses Radial Basis Functions
(RBFs, a function that has a distance criterion with respect to centre reference) as
activation functions. The basic idea behind RBF networks is that a predicted target
value of an item is likely to be similar to other items that have close values of the
predictor variables.
Modular Biological studies have shown that the human brain is characterized not by a single, huge network, but by a collection of small networks that cooperate or compete to solve problems.
One of the peculiar characteristics of ANNs is that they can be trained to mimic model behaviours: the weights that multiply each input signal are updated until the output of the network is similar to the model used as a reference during the training. Generally speaking, training is an adaptive algorithm used to match the output of an ANN to a reference model. The algorithm iteratively compares the output of the network to the model and, by applying a corrective action on the network weights and biases, the output is adapted to match the desired one. The training is generally based on previous experience, although methods that modify the parameters of the network exist. Three types of learning algorithms have been developed.
Among the different training algorithms available, the three most common ones (which are also those available in Matlab®) are:
Given a specific training algorithm, two approaches exist that regulate how the data is fed to it: in offline (or batch) training, the complete dataset is fed to the training algorithm; in online training, the training algorithm updates the weights and biases of the network every time a new sample is fed to it. Typically, online training is characterized by a slower convergence speed, also because of likely timing limitations in acquiring new samples. On the other hand, it is particularly useful when the memory available on the application does not allow complete datasets to be stored, so that each sample used in the training must be forgotten before a new sample can be obtained and used.
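The contrast between the two approaches can be sketched for a single linear neuron trained with a simple gradient (delta) rule; the synthetic data, learning rate and number of iterations below are illustrative assumptions, not the training settings used in the case studies.

```python
import numpy as np

def batch_update(w, X, y, lr=0.1):
    """Offline (batch) step: the gradient is computed over the whole dataset at once."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def online_update(w, x, target, lr=0.1):
    """Online step: the weights are corrected after every single new sample."""
    return w - lr * (x @ w - target) * x

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w

w_batch = np.zeros(3)
for _ in range(200):                      # the whole dataset is kept in memory
    w_batch = batch_update(w_batch, X, y)

w_online = np.zeros(3)
for xi, yi in zip(X, y):                  # each sample can be discarded after use
    w_online = online_update(w_online, xi, yi)

print(w_batch.round(2), w_online.round(2))
```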
An example of rule, very common and probably the simplest one, is the so-
called production rule:
• Uncertain evidence
• Uncertain connection between evidence and conclusions
• Vague values
A KBS that employs the data-driven mode uses the available information (the facts) and generates as many derived facts as it can. The outcome of this process can be either satisfying or not, as the output is often unpredictable and the system might generate innovative solutions to a problem or waste time generating irrelevant information. A typical usage of the data-driven mode is for interpretation problems where data must be analysed. A goal-driven system, on the other hand, is appropriate when a more focused solution is required. The process employed by a goal-driven inference engine is to start from the given goal and to try to trace the information back to the current status of the application (therefore generating the plan), or to assess that no possible path exists from the given goal back to the current status.
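A minimal sketch of production rules applied in data-driven (forward-chaining) mode is shown below; the facts and rules are invented placeholders and do not come from the case studies.

```python
# Each production rule is an IF-THEN pair: a set of required facts and a fact to conclude.
rules = [
    ({"temperature_high", "pressure_rising"}, "risk_of_overheating"),
    ({"risk_of_overheating"}, "open_relief_valve"),
    ({"battery_low"}, "reduce_payload_duty_cycle"),
]

def forward_chain(facts, rules):
    """Data-driven inference: keep firing rules and adding derived facts until nothing new appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"temperature_high", "pressure_rising"}, rules))
```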
Expert Systems (ES) are a type of KBS designed to manage and use expertise in a
particular, specialized, domain. An ES is intended to act as a human expert who can
be consulted on a range of problems that fall within his or her domain of expertise.
Typically, the user of an ES will enter into a dialogue in which he or she describes
the problem (such as the symptoms of a fault) and the ES offers advice, suggestions
or recommendation. In other applications, the ES is directly configured by the
expert to act automatically, replacing the expert in taking actions driven by the
stored knowledge. Additionally, depending on the application, the ES might be
required to justify the current line of actions: an Explanation Module is often added
to the ES to help with this purpose.
When an ES is programmed but no knowledge is stored, it is called an Expert System shell: in principle, it should be feasible to develop an ES shell and build up a KB, effectively obtaining an ES. However, all domains are different, and it is difficult to build a shell that adequately handles the various applications. Generally speaking, ES shells are not suited for embedded applications.
Fuzzy Sets are a means of reducing how strict these boundaries are. The theory of Fuzzy Sets expresses imprecision quantitatively by introducing characteristic membership functions that can assume values between 0 and 1, corresponding to degrees of membership of a variable value to a condition, from “not a member” to “full member”. The degree of membership is sometimes called the possibility that a certain value is described by the membership function. The key differences between a crisp and a fuzzy set are:
5.7.3.2 Fuzzification
To recall the earlier example on temperature, a temperature of 150°C could be
considered 0.99 high, and 0.01 medium, while a temperature of 51°C could be
considered 0.30 high and 0.70 medium. The process of deriving these possibility
values for a given value of the variable is called fuzzification.
Examples of membership functions are shown in Figure 23: they can assume
different shapes, and the most suitable shape of the membership functions and the
number of the fuzzy sets depend on the particular application.
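A small sketch of the fuzzification step is shown below, using trapezoidal membership functions for the temperature example; the breakpoints are assumptions chosen only to roughly reproduce the possibility values quoted above, not values taken from the thesis.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Illustrative fuzzy sets for a temperature variable (breakpoints are assumptions)
medium = lambda t: trapezoid(t, 0, 10, 30, 100)
high   = lambda t: trapezoid(t, 30, 100, 300, 400)

for t in (51, 150):
    print(f"T = {t} degC -> medium: {medium(t):.2f}, high: {high(t):.2f}")
```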
5.7.3.3 Defuzzification
When designing an application that employs a Fuzzy Logic-based algorithm, after defining the input variables and their membership functions, it is necessary to continue the design process downstream to the output of the application. When a control action or a decision is computed using Fuzzy Logic, the value of the action will still be expressed in fuzzified values. In order to obtain a crisp, clear value again, the next process to perform is defuzzification. Defuzzification takes place in two steps:
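A minimal sketch of one common defuzzification scheme (aggregation of the clipped rule outputs followed by a centre-of-gravity computation) is given below; the output universe, the output sets and the clipping levels are illustrative assumptions, and other defuzzification methods exist.

```python
import numpy as np

def centroid_defuzzify(universe, aggregated_membership):
    """Crisp output as the centre of gravity of the aggregated fuzzy output."""
    weights = np.asarray(aggregated_membership)
    return float(np.sum(universe * weights) / np.sum(weights))

# Illustrative output variable (e.g. a control action between 0 and 10)
u = np.linspace(0.0, 10.0, 101)
low  = np.clip((4.0 - u) / 4.0, 0.0, 1.0)     # "low action" output set
high = np.clip((u - 6.0) / 4.0, 0.0, 1.0)     # "high action" output set

# Step 1: aggregate the rule outputs (here: "low" clipped at 0.2, "high" clipped at 0.8)
aggregated = np.maximum(np.minimum(low, 0.2), np.minimum(high, 0.8))
# Step 2: compute the crisp value
print(round(centroid_defuzzify(u, aggregated), 2))
```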
Evolutionary Strategies (ES) are a type of algorithm that reaches optimal solutions by applying mutation and selection operators [71], and can be successfully employed even with population sizes as low as two individuals. The selection of individuals is performed only on fitness rankings and not on the actual fitness values.
As with Machine Learning, the number of algorithms gravitating into the domain
of Evolutionary Algorithms is enormous, with hundreds of algorithms and even
more variations (Figure 25).
Figure 25 Examples of algorithms in the Evolutionary Algorithms domain: Backtracking Search Optimization Algorithm, Differential Search Algorithm, Hybrid Particle Swarm Optimization and Gravitational Search Algorithm, Multi-objective Bat Algorithm, Flower Pollination Algorithm, Cuckoo Search, Wind Driven Optimization, Grey Wolf Optimization, League Championship Algorithm, Tabu Search, Lloyd's Algorithm, Firework Algorithm, Continuous Scatter Search, Big Bang–Big Crunch Optimization, and others
Genetic Algorithms are the most used and best-known type of Evolutionary Algorithm, to the point that the whole category is sometimes confused with GAs. They owe their diffusion to the numerous fields of application they have found: parameter optimization, financial prediction, scheduling, telecommunications, computer drawing, data mining, bioinformatics and so on.
GAs are powerful search algorithms: they explore the solution space quickly in search of optimal solutions [73]. GAs encode the decision variables (or input parameters) of the problem into an array that represents a full solution [74]. Each array assumes the characteristics of a chromosome and represents an individual solution within the population. The position of each gene in the chromosome is called locus, while its value is called allele. There are two encoding classes: genotype and phenotype. The genotype denotes the ensemble of all the genes of an individual, while the phenotype denotes the group of all the visible features and characteristics of the individual. A fitness function is the “grading system” used to evaluate the fitness of an individual in the problem considered. Unlike other optimization techniques, the fitness function of GAs may be defined in mathematical terms, as a complex computer simulation, or even in terms of subjective human evaluation. Operators are used to regulate the evolution of the population. The three genetic operators commonly used are: selection, crossover, mutation [75].
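A minimal sketch of these concepts is reported below on a standard toy problem (maximizing the number of 1-alleles in a binary chromosome); the chromosome length, population size and operator choices are illustrative and unrelated to the tradespace case study of Chapter 8.

```python
import random

random.seed(1)
GENES, POP = 12, 20
fitness = lambda chrom: sum(chrom)                      # toy fitness: count the 1-alleles

def select(pop):                                        # tournament selection of size 2
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):                                  # single-point crossover
    cut = random.randrange(1, GENES)
    return p1[:cut] + p2[cut:]

def mutate(chrom, rate=0.05):                           # flip each gene with a small probability
    return [g ^ 1 if random.random() < rate else g for g in chrom]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for generation in range(30):
    population = [mutate(crossover(select(population), select(population))) for _ in range(POP)]
print(max(fitness(c) for c in population))
```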
Neural Networks can be used for different types of applications, and each category of NN excels in one or more specific domains. This chapter focuses on performing Event Detection (ED) during the mission: the applications presented here refer to the detection of external mission events. Event detection, in particular of external events related to payload observation, is a fundamental characteristic of highly autonomous spacecraft.
6.1 Background
In the previous chapter of this thesis, several applications of Artificial Intelligence to increase mission autonomy have been introduced, and it was shown how enhanced autonomy could specifically benefit nano- and small-satellite missions. Making the spacecraft autonomous is a topic of paramount importance especially for missions beyond LEO, where the current limited autonomy capabilities are a severe obstacle to the diffusion of nanosatellite interplanetary missions. Examples of these missions are those targeted to the Moon, Near Earth Asteroids (NEAs), Mars and Jupiter, with its moon Europa. These destinations have already been chosen by space agencies as ideal candidates for CubeSat and nanosatellite missions that will reach them in the near future as secondary payloads of traditional flagship missions. Moreover, CubeSat missions have been proposed and are being developed by NASA and ESA as part of their space
One of the reference missions for this research was given by COPINS, which was a secondary payload of the ESA AIM mission. AIM was one of the two
spacecraft of the AIDA joint effort of ESA and NASA, aiming to perform an
asteroid characterization and redirection experiment. ESA was providing the
monitoring/observing spacecraft (AIM), while NASA was supposed to launch the
impactor probe (DART) that would collide with the secondary body of the system
[89]. The COPINS mission consists of multiple CubeSats (up to two 3U platforms)
carried to the asteroid by AIM, which will deploy the nanosatellites at 10 km from
the secondary body surface, up to one month before the impact of DART. The
objectives of the CubeSat mission are to provide scientific support to the AIM
primary mission, either by repeating one or more of the main spacecraft's
measurements, by performing additional science measurements, or by recording
and taking pictures of the impact event. In addition, the CubeSats will also perform
technological demonstrations, such as satellite interlink communication. The
communications of the COPINS with Earth are relayed through the AIM spacecraft.
The architecture of this mission is definitely complex, as numerous challenging
elements are included in the scenario: four or more satellites joint operations, inter-
satellite links, limited data rates, and peculiar environment (for example, low and
irregular gravity field, which makes the orbit control critical). Given the complexity
of the mission architecture and concept of operations, increasing the COPINS
autonomy would be beneficial to the entire mission, and for this reason this mission
has been chosen as a test case for the developed algorithm. For the purpose of the
research, it is assumed that the COPINS’s payload objective is to detect the impact
of DART on Didymoon (the secondary body of the Didymos binary system, sometimes referred to as the moonlet) and to determine the changes in the physical properties
of the asteroid surface. Since the COPINS-Earth communication is characterised
by the fact that the main spacecraft serves as relay, the amount of data that can be
sent to Earth by COPINS and the possibility to command COPINS from Earth are
both affected by the availability of AIM. The autonomous detection of the impact
event would enable:
The final number of layers and neurons per layer is the result of the analysis
performed over a set of possible network architectures. To select the most
performing network, a statistical analysis over all the possible architectures
compatible with the main requirement (compatibility with the CubeSat C&DH
performances) was performed. In particular, networks with one or two hidden layers
were tested, up to a maximum number of neurons of 15 for the first layer, and of 10
for the second layer. Figure 30 shows the average performance of each network cluster for a two-hidden-layer architecture: each dot represents the average performance of the architectures as a function of the number of neurons in the second layer. The average is calculated over 4500 simulations (300 simulations per number of neurons in the first layer, spanning from 1 to 15). The result of the analysis confirms that, for a binary classification problem, networks with one hidden layer show the best performance on average [92]. Figure 31 illustrates the performance of networks with a single hidden layer as a function of the number of neurons in the layer, in the form of boxplots. Boxes represent data from the second and third quartiles, while whiskers cover data in the first and fourth quartiles. Samples are considered outliers when their distance is greater than 1.5 times the interquartile range, and they are represented as dots. The red line represents the performance median. For each architecture, 300 simulations have been run. From this graph, it is possible to deduce that networks with more than 4 neurons are suitable for the final architecture, as the boxes are condensed onto the median line.
Figure 30 Performance trends for networks with two hidden layers. Each dot represents a cluster of networks with 1 to 15 neurons in the first layer; the X-axis reports the number of neurons in the second layer.
The number of 10 neurons for the hidden layer size was chosen as a good
compromise between complexity of the network and associative memory [93]. As
the learning ability of a network increases with the number of neurons, a margin was taken to account for the inherent uncertainty of the early mission design stage, thus selecting 10 neurons instead of 5, which is the minimum acceptable number.
expelled. For the applications considered, networks with a total number of 100 neurons were used. In general, no optimization was performed on the networks presented for this second application.
The Didymos binary system is modelled as defined in the literature by the Didymos
Reference Model [94]. The main body is represented as a fairly regular spheroid of
roughly 800m in diameter, while the secondary body (of which no radar or optical
images are available to date) is modelled as a bumpier, rubble-pile like body,
elongated in the direction towards the main body of the system (Figure 32).
The shape and the plume event have also been modelled in blender®. The characteristics of the object are matched to resemble common rubble-pile asteroids.
The asteroid is set on a slow rotation on all the three axes, and the jet is emitted
from a randomly chosen location on the asteroid surface (Figure 35).
From the information summed up in Table 12, it is evident that the ANN cannot be trained on the ground using actual images of the celestial bodies, as such images do not exist (concerning the comet 67P model, the application is developed by forcing the acquisition of the pictures in situ). Using a training dataset extrapolated from models of the asteroid would be risky, as the network would be trained on a specific shape of the asteroid that might turn out to be different from the actual one: the possibility exists that the impact would not be identified due to incorrect training of the algorithm. Moreover, several conditions will likely be different from those simulated on the ground, especially with regard to the surface features (for example areas of different composition) and light/shadowed areas (for example different crater patterns).
The proposed solution for the training task takes into account the mission scenario and concept of operations. As the spacecraft will reach its final orbit before the event to be detected, it is possible to define a sequence of manoeuvres and operations that allow the spacecraft to construct the training dataset directly in situ, either acquiring pictures of the foreseen impact area on Didymoon, or collecting pictures of the comet prior to plume events.
For a feed-forward network, the connections from the input to the hidden layer are directly mapped to the input data: in this sense, for an image, each pixel would be directly assigned to several weights. This means that, during the training to identify the event, the weights need to be raised for the pixels that would change during the event. This operation is done automatically during the training. In the proposed case studies, the only missing piece is indeed the collection of post-event images to construct a two-class dataset for the training.
For the impact event, since the coordinates of the impact on the asteroid are known, it is possible to artificially super-impose a pattern of debris-like shapes to force the weight updates in particular areas of the image, as shown in Figure 37. As shown in [89], the physical properties of the asteroid's surface upper layer strongly influence the characteristics of the ejecta. Shape, opacity and granularity of the overlay are chosen according to information found in the literature, to reflect the dynamics of the event to be observed. Two geometries, rectangular and truncated cone, were considered to assess the role of the overlay shape in the algorithm performance.
For the plume event, since the coordinates are not known a-priori, the training
approach must consider a set of probable locations. The overlay approach is
performed for several directions of generation of a plume. Moreover, as the comet
body is rotating in the camera frame, the generalization must be carried out both for
the plume direction and for the rotational state of the body underneath (Figure 40).
Figure 41 Trained weights for the plume detection problem. The uniform
grey areas around the centre of the image are a result of having removed
constant lines throughout the dataset
6.6 Results
6.6.1 Performance considerations
The impact event has been simulated and tested from two capturing points
(depending on the position and orientation of the observing spacecraft around the
asteroid). In the first point, both bodies of the asteroid binary system are in the field
of view of the satellite, with the main body in the background (Figure 42). In the
second case, only the moonlet is in the field of view of the satellite, with the dark
sky in the background (Figure 43). For both cases, a video of the impact has been
realized, with a framerate of 25 frames per second. Frames of the post-impact
evolution were then selected for the testing of the algorithm. The algorithm has
been developed and tested in a Matlab/Simulink® environment, by using datasets
generated via the blender® asteroid model.
The simulations show the effectiveness of the ANN developed, as the images
are correctly classified by the algorithm in the appropriate categories (Figure 44 and
Figure 45).
the camera on board the satellite. The range of the oscillation tested is ±12 degrees,
with steps of 1 degree in the up-down pointing. To overcome the issue of
oscillations affecting the detection of the impact event, the solution implemented
includes images with different framing in the training dataset. In this case, the
network is trained to compensate for the pointing uncertainties (Figure 46 and
Figure 47).
Figure 48 Confusion matrices for one body and two bodies simulations
with disturbances. Class 1 represents the impact event, Class 2 represents the
no-impact images
The plume ED problem used a dataset of 1600 images during training, divided in the following way: 98% for training, 1% for validation and 1% for testing. An additional dataset composed of 400 images was used for testing, and the ANN performance was measured on this test dataset. Figure 49 shows the confusion matrix for the 67P plume event.
The algorithm has then been validated by evaluating its performance on real
images taken by the Rosetta mission, showing plume events as experienced by the
spacecraft. The detection of the events was successful, as seen in Figure 50.
6.6.4 Review
The applications presented in this chapter provide clear examples of both the usefulness and the applicability of NNs in the domain of event detection for space applications. On the other hand, the decision on which architecture is the most efficient and effective in performing the different tasks needs to be the object of further, deeper investigation. Despite this, some insights can already be drawn from the research performed, and this can help towards the objective of pre-selecting NN architectures in relation to the problem to be solved. Finally, it has to be noted that the purpose of this thesis was mainly to perform feasibility analyses: for this reason, a comparison between the detection capabilities of NNs and other ML algorithms still needs to be performed. While the usefulness and performance of heavy architectures (such as the Convolutional NNs used to solve image classification problems) are well established, the research on NNs for space applications, and in particular for embedded ones, needs to be expanded to reach a similar level of heritage.
Parameter | Review Comments
Benefits | Training-defined Behaviour – The behaviour of the system can be implemented to match the desired outcomes by training, and not by hard-coded programming.
7.1 Background
The topic of failure detection on small satellites is certainly vast and would require a complete PhD thesis on its own. This chapter deals with the problem of detecting failures on components of the AOCS by using a domain of AI called Expert Systems (ES). In particular, the specific category of ES presented here is that of Fuzzy Logics, and the actuators to which the algorithm is applied are the Magnetic Torquers (MT). The presented case study can be considered a feasibility study, but it already demonstrates two results:
• Fuzzy Logics are powerful and can be configured to perform failure detection
• The expert knowledge is effectively represented by the FL, and the functioning of the algorithm reproduces the reasoning that the expert would perform
Magnetic Torquers are a very common and reliable actuator used to control the attitude of LEO CubeSats, as they are cheap, consume a low amount of power and are typically of low weight. They are typically employed in two configurations: coil (Figure 51) and rod (Figure 52).
T_control = m_b × B_b
where T_control is the 3x1 control torque vector, m_b is the 3x1 magnetic dipole moment and B_b is the 3x1 Earth magnetic field (EMF) vector expressed in body axes.
m_b = N · I · A · n_a
where N is the number of coils, I the current flowing in them, A the area inscribed by the coils and n_a the unit vector perpendicular to the plane of the coils. The main specification for a MT is usually the maximum dipole moment, which is a function of the number of coils, the amount and direction of the current that flows into the coils, and the area of the MT.
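A small numerical sketch of the two relations above is given below; the coil count, current, coil area and magnetic-field values are illustrative numbers, not mission data.

```python
import numpy as np

def magnetic_dipole(n_coils, current, area, normal):
    """m_b = N * I * A * n_a, expressed in body axes."""
    return n_coils * current * area * np.asarray(normal, dtype=float)

def control_torque(dipole, b_field):
    """T_control = m_b x B_b (cross product with the local magnetic field in body axes)."""
    return np.cross(dipole, b_field)

# Illustrative CubeSat-class values: 200 coils, 50 mA, 8x8 cm coil, ~40 uT field
m_b = magnetic_dipole(200, 0.05, 0.08 * 0.08, [0.0, 0.0, 1.0])
B_b = np.array([20e-6, -5e-6, 35e-6])
print(m_b, control_torque(m_b, B_b))
```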
Despite MTs being reliable hardware, they can be subject to failures, and these events have very peculiar and recognizable characteristics. MTs can fail in four different ways (Figure 54): Hard-Over, Lock-in-Place, Float, and Loss of Efficiency.
Five input variables were defined in the application, and are intrinsic variables that
characterize the problem under analysis:
• MT current: the value of the current that flows into the MT. This value
is straightforward to obtain, as the Analog-to-Digital Converter (ADC)
current sensor is a common component
These five variables are able, along with the output variables and the
corresponding rules, to define an Expert System able to correctly identify which
type of failure is present on the torquers.
• current: negative (less than -0.01), zero (between -0.01 and 0.01),
positive (greater than 0.01) (Figure 57)
• current derivative: monitored only when zero (between -0.00003 and
0.00003)
• error: monitored only when zero (between -0.2 and 0.2)
• current second derivative: monitored only when zero (between -0.01
and 0.01)
• estimated LOE: monitored only when zero (between -0.00002 and
0.00002)
7.5.2 Rules
For a Hard-Over failure, which is characterized by a linear trend of the current value, the derivative of the current is constant. Since each HO failure can be characterized by a different constant value of the derivative, this particular variable is not meaningful. Continuing, since the derivative is constant, the second derivative must be zero. This reasoning is meaningful: it means that the fuzzy set will have to monitor the second derivative and be able to distinguish between a zero and a non-zero value. A possible rule can also be defined: if the second derivative is not zero, then the failure is probably a LOE (where the current value and its derivative change over time).
After this reasoning, the Hard-Over behaviour is still not fully defined: the second derivative must be zero, but this is not sufficient to correctly identify the HO. Another rule defined for the HO is obtained by checking that the value of the current derivative is not zero. If it is zero, then we would be in the presence of a Lock-in-Place failure (a zero derivative means the output current is constant).
Continuing with this type of reasoning leads to a set of rules and a set of membership functions that completely represent the expert knowledge on the problem in a computational form.
With just a set of five rules, the complete set of failures of the MT can be
detected. The rules are:
• if the current is zero AND the current derivative is zero AND the error
is NOT zero AND the estimated LOE is not zero then failure is float
• if current is NOT zero AND the current derivative is zero AND the
error is NOT zero AND the estimated LOE is NOT zero then the failure
is lock in place
• if current derivative is NOT zero AND the error is NOT zero AND the
current second derivative is zero AND the estimated LOE is NOT zero
then failure is hard-over
• if the error is NOT zero AND estimated LOE is zero then failure is loss
of efficiency
• if the error is zero then NO failure is present
It has to be noted that, for this specific case study, the number and the
complexity of the rules is low: for different applications, more complex and more
numerous rules can be expected.
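A crisp (non-fuzzy) rendition of the five rules is sketched below, using the thresholds listed in the previous section as hard limits; an actual Fuzzy Logic implementation would evaluate membership degrees and fire the rules to a degree, so this is only an approximate illustration of the reasoning.

```python
def is_zero(value, tolerance):
    return abs(value) <= tolerance

def detect_failure(current, d_current, dd_current, error, est_loe):
    """Crisp approximation of the five MT failure-detection rules (thresholds from the text)."""
    cur_zero  = is_zero(current, 0.01)
    der_zero  = is_zero(d_current, 0.00003)
    dder_zero = is_zero(dd_current, 0.01)
    err_zero  = is_zero(error, 0.2)
    loe_zero  = is_zero(est_loe, 0.00002)

    if err_zero:
        return "no failure"
    if loe_zero:
        return "loss of efficiency"
    if cur_zero and der_zero:
        return "float"
    if not cur_zero and der_zero:
        return "lock in place"
    if not der_zero and dder_zero:
        return "hard-over"
    return "undetermined"

print(detect_failure(current=0.0, d_current=0.0, dd_current=0.0, error=0.5, est_loe=0.001))
```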
7.6 Results
At each sampling step of the on-board software, it is possible to obtain all five input variables (except for the initial steps, where no derivatives exist yet), and at each step it is possible to evaluate all the rules defined in the system (Figure 58).
Figure 59 Output of the Expert System: from the left, unfiltered, basic and medium filters applied. Each step represents a different value of the output variable, and therefore a different failure detected
7.6.1 Review
Parameter | Review Comments
Benefits | Knowledge Implementation – The knowledge transfer from an operator to the program can be performed in a structured way, without hard-coded programming of the behaviour of the system.
8.1 Background
The purpose of multi-attribute tradespace exploration is to capture the decision-makers' preferences and use them to generate and evaluate a multitude of system designs, while providing a common metric described in a stakeholder-friendly language. To achieve this, Multi Attribute Utility Theory (MAUT) is employed for the aggregation of the preferences of all the stakeholders. MAUT is widely used in the fields of economics, decision analysis, and operational research. It postulates that people make decisions based on value estimates of personally-chosen reference outcomes. Decision-makers interpret each outcome in terms of some internal reflected value, or utility, and they act in order to maximize it. In the case of multiple attributes, an elegant and simple extension of the single-attribute utility process can be used to calculate the overall utility from the multiple attributes and their utility functions [97], [98]. There are two key assumptions for using this approach:
K U(X) + 1 = ∏_{i=1}^{n} [ K k_i U_i(X_i) + 1 ]    (1)
The values of each ki give a good indication of the importance of each attribute
(i.e. a kind of weighted ranking) and are bounded between 0 and 1. The scalar K is
a normalization constant that ensures the multi-attribute utility function has a zero
to one scale [99]. Despite the attractiveness of an axiomatically-based decision
model, empirical evidence shows that people do not obey expected utility theory in
daily decision-making due to systematic biases in their thinking. For this reason,
the logic flow of the method involves the definition of stakeholder attributes, context variables and design variables [100]. Once those elements are defined, it is possible to develop system performance and value models, aiming to evaluate the multi-attribute utility and the costs involved in the project life cycle.
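A small sketch of how a multiplicative multi-attribute utility of the form of equation (1) can be evaluated is given below: the normalization constant K is found numerically from the k_i values, and the attribute weights and single-attribute utilities are illustrative placeholders, not stakeholder data from the case study.

```python
import numpy as np

def solve_K(k):
    """Find the normalization constant K from 1 + K = prod(1 + K * k_i), K != 0 (simple bisection)."""
    f = lambda K: np.prod(1.0 + K * np.asarray(k)) - 1.0 - K
    # K lies in (-1, 0) when sum(k_i) > 1 and is positive when sum(k_i) < 1 (typical cases)
    lo, hi = (-1.0 + 1e-9, -1e-9) if sum(k) > 1 else (1e-9, 100.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def multi_attribute_utility(k, u):
    """Equation (1): K*U + 1 = prod(K*k_i*U_i + 1), solved for U."""
    K = solve_K(k)
    return (np.prod(1.0 + K * np.asarray(k) * np.asarray(u)) - 1.0) / K

# Illustrative weights and single-attribute utilities (e.g. coverage, resolution, data latency)
k = [0.6, 0.3, 0.25]
u = [0.8, 0.5, 0.9]
print(round(multi_attribute_utility(k, u), 3))
```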
When applying the MAUT to a particular problem, the effects on the utility
given by the different attributes are highlighted. In this case, a Multi Attribute
Tradespace Exploration (MATE) analysis is obtained. Given the complexity and
the variety of different possible choices during the conceptual phase of a space
mission, this technique is particularly suitable for assuring that all the various
options have been considered, including programmatic and technical aspects, such
as manufacturability, assembly, operations, and physical architecture choices.
The design vector dimension can reach sizes of more than 30 design variables, adding up to a solution space in the order of billions of different architectures. It is therefore unfeasible to evaluate all the possible solutions [107].
In complex problems, such as the conceptual design of a space mission, the number of solutions that form the design space can become enormous. It is therefore mandatory
to exploit structured and efficient ways to explore the design space and evaluate the
solutions, in order to keep the computational cost and the exploration duration
acceptable. Depending on how the problem is constructed in the first place, several
different exploration methods exist, that can move through the space both in case
of a continuous space and in the case of a discrete one. Examples of these methods
can be genetic algorithms for discrete problems, or simulated annealing for
continuous ones [73], [108], [109]. The present work explores the use of genetic
algorithms (GA), performing an exploration type called guided random search
[110]. These types of algorithms are inspired by the selection process of nature, which causes the stronger individuals to survive in a competitive environment. In nature, each member of a population competes for food, water and territory, and also strives to attract a mate. It is obvious that the stronger individuals have a better chance of reproducing and creating offspring, while the weaker performers have lesser chances of producing offspring. Consequently, the ratio of strong, or fit, individuals will increase in the population and, overall, the average fitness of the population will improve over time. The offspring created by two fit individuals (the parents) have the potential to have a better fitness than both parents: the resulting individual is called super-fit offspring. By this principle, the initial population evolves, in each generation, into a population better suited to its environment [111].
The DV is the vector that describes a specific solution and that contains all the
information needed to define a particular mission concept. It is composed of 36
variables that store information about several aspects of mission architecture and
system design.
The DV structure was defined by analysing the mission goals and by selecting
both mission and system technical domains that are critical during the preliminary
design of a space mission.
Operations | Lifetime, mothership/daughtership interactions | 2
Table 16 shows a summary of all the categories that were included in the DV. The first column shows the category, while the second and third columns list all the different parameters that were included in each category. Finally, the last column condenses the information into a number, which represents the total number of parameters for each category.
The key part of the research relies on the algorithm that, from the definition of the
DV, creates each solution during the exploration.
The approach used involves GAs to solve an integer problem: each parameter in the DV is associated with an integer that represents the number of alternatives for the specified parameter. The number of possible alternatives is defined by each domain expert. For example, the event reaction parameter in the Autonomy category has a value of 3 associated with it: this means it can assume three different configurations,
For the parameters in the third column, the approach is similar, but each parameter corresponds directly to an equipment category. For this purpose, a CubeSat component database was implemented. Four mandatory parameters were included for each component in the database: mass, power, cost and size. Other parameters were added, and they are especially useful since they can later be used to verify compliance with the requirements, or to compute the fitness value. For example, the Camera parameter can assume a value from 1 to 4 that corresponds to a specific COTS equipment found in the database: a CMOS camera; a basic spectrometer; a high-performance spectrometer; a CCD camera.
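The mapping from an integer in the DV to a concrete component can be sketched as follows; the database fragment below uses the camera alternatives named above, but all the numerical properties, the index position in the DV and the helper names are invented placeholders rather than the actual thesis database.

```python
# Hypothetical fragment of a component database: each DV integer indexes one alternative.
camera_db = {
    1: {"name": "CMOS camera",                   "mass_kg": 0.10, "power_W": 0.6, "cost_k$": 15},
    2: {"name": "basic spectrometer",            "mass_kg": 0.35, "power_W": 1.8, "cost_k$": 60},
    3: {"name": "high-performance spectrometer", "mass_kg": 0.60, "power_W": 3.5, "cost_k$": 140},
    4: {"name": "CCD camera",                    "mass_kg": 0.15, "power_W": 1.0, "cost_k$": 30},
}

def decode_camera(design_vector, camera_index=7):
    """Map the integer stored in the DV at the camera position to a COTS entry of the database."""
    return camera_db[design_vector[camera_index]]

dv = [1, 3, 2, 1, 2, 1, 3, 4, 1, 2]          # made-up 10-element slice of the 36-variable DV
component = decode_camera(dv)
print(component["name"], component["mass_kg"], "kg")
```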
Lastly, the fitness of each individual of the population is computed using (1).
As introduced earlier, genetic algorithms are the key technology used to explore the tradespace. The configuration of the algorithm was as follows: the initial population was set at 540 individuals, the crossover fraction was set at 0.95 (meaning that the mutation operator was applied to the remaining 0.05) and the elite population fraction was set at 0.35 (Figure 63). The selection was made with tournaments. Infeasible solutions, for example those that violated the requirements, were discarded and the population was re-initialized randomly for the removed individuals.
Selection | Tournament | –
The population size was chosen to be 15 times the number of variables in the DV, as a balance between smaller populations (which increase the convergence speed) and bigger ones (which give higher chances of having more optimal solutions in the initial population) [113]. The crossover fraction was chosen at 0.95: this choice resulted in a greater effect of the reproduction dynamics with respect to the mutation ones. The mutation fraction was chosen to be 0.05, thus applying the mutation function only to the fraction of the population that did not reproduce. The elite population fraction was set at 0.35, meaning that 35% of the new generation is formed by individuals picked from the old generation. The selected value ensures a balance between effectiveness of the search (lower elite population fractions) and survival of fit individuals (higher elite population fractions).
8.5 Results
The investigation of methodologies to improve and automate space mission and spacecraft design is a vast effort, branching out into many fields of science and engineering. The proposed research obtains several important results towards the
design of space missions that provide higher utility to the stakeholders, by being
more optimized and not bound to the stagnancy of conservative mission design
approaches. These improvements are obtained through innovations in three aspects
of the mission design:
• exploring the alternative concepts thoroughly and more efficiently
thanks to the MATE and GA approach
• considering the availability of certain highly standardized components
thanks to the component database included in the algorithm architecture
• ensuring effective final solutions that comply with high-level
requirements
Depending on the dimension of the design vector and the ranges of the considered
variables, the number of solutions forming the tradespace can well surpass the order
of billions. In the presented case, 36 variables add up to more than 10^17 different
solutions. When, for each solution, a utility function must be evaluated, it is evident
that the problem becomes computationally expensive.
The use of guided random search strategies, implemented with GA, allows the
exploration and the discovery of the optimal solutions without evaluating the fitness
function for all the individuals, but only for a restricted set. Figure 64 shows several
plots of a limited set of the solution space for this problem, that give a glimpse of
the shape of the whole tradespace. As shown in the figure, the MATE and GA
implementation optimizes the search to define the Pareto front for the analysed problem.
Figure 64 Solution spaces (100k points): from the left, cost-size-utility, size-
utility and cost-utility plots
requirement is not met. In this way, selection dynamics will remove the unwanted
solution.
8.5.5 Review
Parameter | Review Comments
Benefits | Unbiased exploration – Solutions are discovered and analysed without interference from previous knowledge or methodologies, including solutions that would be hardly detectable by human operators. Analysis speed – Solutions are processed and analysed much faster than a human operator could.
Conclusions
The thesis presents the results of three years of PhD research on Mission
Autonomy for Small Satellite missions. In particular, the key focus of the research
was exploring the capabilities and potentialities of Artificial Intelligence to
innovate and improve the autonomy level of the future missions, both interplanetary
and Earth orbiting. Several reasons motivate the selection of the domain, the methods, and the case studies, and they can be understood by considering the background of the research group in which this research was carried out.
In the last decade, the panorama has changed: technology has evolved, and more daring missions have been proposed and are now under development, with improved payloads, communication technologies and propulsion systems. For these missions, the CubeSat standard and, in general, the modified approach to small spacecraft and mission design have a noticeable effect on most of the domains involved. One key area is left behind: operations do not seem to scale by scaling the technology, and little effort has been spent on disrupting and innovating how operations are designed and managed for small satellite missions. Nevertheless, Small Satellite platforms are the best candidates to demonstrate new concepts for mission operations, as they possess the required flexibility and they welcome innovative technologies (even if with a suboptimal TRL). Moreover, the category of small satellites was selected thanks to its higher average computational capability and to development approaches more comparable with traditional embedded approaches.
flown. The present work, in addition, aims at raising awareness on the topic of
Mission Autonomy and innovative operations design.
The last case study presented aims at improving another area of mission design and development: the preliminary design phase.
Appendix A – Interesting images
acquired through the research
Appendix B – Asteroid modelling on Blender®

The first operation performed was the creation of a cube, from the "Create" submenu. The next step was to add the modifiers from the corresponding submenu. After selecting "Add Modifiers", the first one applied was "Subdivision Surface". Then, under the "Subdivision" heading, the "View" and "Render" values were brought to the upper limit, i.e. six. The result of this operation is shown in Figure 76.
The emission of the plume starts at frame 150 and ends at frame 180, although the emitted particles continue to be visible for 200 frames.
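For reference, the manual steps described in this appendix could be reproduced through Blender's Python API roughly as follows. This is a sketch under the assumption of default settings, not the exact procedure used to build the asteroid model.

```python
import bpy

# Create the base cube ("Create" submenu equivalent).
bpy.ops.mesh.primitive_cube_add()
obj = bpy.context.active_object

# Add a Subdivision Surface modifier and push the View/Render subdivisions to 6.
subsurf = obj.modifiers.new(name="Subdivision", type='SUBSURF')
subsurf.levels = 6
subsurf.render_levels = 6

# Add a particle system for the plume, emitting between frames 150 and 180.
obj.modifiers.new(name="Plume", type='PARTICLE_SYSTEM')
settings = obj.particle_systems[0].settings
settings.frame_start = 150
settings.frame_end = 180
settings.lifetime = 200   # particles stay visible after emission stops
```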