


OPS-SAT Spacecraft Autonomy with TensorFlow Lite,
Unsupervised Learning, and Online Machine Learning

Georges Labrèche, European Space Agency (ESA), Robert-Bosch-Straße 5, 64293 Darmstadt, Germany, georges.labreche@esa.int
David Evans, European Space Agency (ESA), Robert-Bosch-Straße 5, 64293 Darmstadt, Germany, david.evans@esa.int
Dominik Marszk, European Space Agency (ESA), Robert-Bosch-Straße 5, 64293 Darmstadt, Germany, dominik.marszk@esa.int
Tom Mladenov, European Space Agency (ESA), Robert-Bosch-Straße 5, 64293 Darmstadt, Germany, tom.mladenov@esa.int
Vasundhara Shiradhonkar, Terma GmbH, Europaarkaden II, Bratustraße 7, 64293 Darmstadt, Germany, vash@terma.com
Tanguy Soto, European Space Agency (ESA), Robert-Bosch-Straße 5, 64293 Darmstadt, Germany, tanguy.soto@esa.int
Vladimir Zelenevskiy, Telespazio Germany GmbH, Europaplatz 5, 64293 Darmstadt, Germany, vladimir.zelenevskiy@telespazio.de

Abstract—OPS-SAT is a 3U CubeSat launched on December 18, 2019. It is the first nanosatellite to be directly owned and operated by the European Space Agency (ESA). The spacecraft is a flying platform that is easily accessible to European industry, institutions, and individuals for rapid prototyping, testing, and validation of their software and firmware experiments in space at no cost and no bureaucracy. Equipped with a full set of sensors and actuators, it is conceived to break the "has never flown, will never fly" cycle. OPS-SAT has spearheaded many firsts with in-orbit applications of Artificial Intelligence (AI) for autonomous operations.

AI is of rising interest for space-segment applications despite limited technology demonstrators on-board flying spacecraft. Past missions have restricted AI to inferring models trained on the ground prior to being uplinked to a spacecraft. This paper presents how the OPS-SAT Flight Control Team (FCT) breaks away from this trend with various AI solutions for in-flight autonomy. Three on-board case studies are presented: 1) image classification with Convolutional Neural Network (CNN) model inferences using TensorFlow Lite, 2) image clustering with unsupervised learning using k-means, and 3) supervised learning to train a Fault Detection, Isolation, and Recovery (FDIR) model using online machine learning algorithms.

CNN inference with TensorFlow Lite running on-board the spacecraft showcases the in-space application of an industry standard open-source solution originally developed for terrestrial edge and mobile computing. Furthermore, the solution is "openable" with an inference pipeline that can be constructed from crowdsourced trained models. This mechanism enables open innovation methods to extend on-board ML beyond its original mission requirement while stimulating knowledge transfer from established AI communities into space applications. Further classification is achieved by re-using an open-source k-means algorithm to cluster images into groups of "cloudiness", and initial results in image segmentation (feature extraction) have promising outlooks. Results from training an FDIR model to protect the spacecraft's camera lens against direct exposure to sunlight are presented, achieving balanced accuracies ranging from 85% to 99% from models trained with the Adagrad RDA, AROW, and NHERD online ML algorithms in multi-dimensional input spaces using a photodiode diagnostics data stream as training data.

The ability to train models in-flight with data generated on-board without human involvement is an exciting first that stimulates a significant rethink on how future missions can be designed.

TABLE OF CONTENTS

1. INTRODUCTION
2. BACKGROUND
3. IMAGE CLASSIFICATION
4. IMAGE CLUSTERING
5. ONBOARD MACHINE LEARNING
6. SUMMARY
APPENDICES
A. MODEL C TRAINING RESULTS
B. CAMERA FDIR ALGORITHM
C. SEPP RESOURCE MONITORING
REFERENCES
BIOGRAPHY
1. INTRODUCTION

OPS-SAT is a 3U CubeSat launched on December 18, 2019. It is the first nanosatellite to be directly owned and operated by ESA. The spacecraft is a flying platform that is easily accessible to European industry, institutions, and individuals, enabling rapid prototyping, testing, and validation of their software and firmware experiments in space at no cost and no bureaucracy. The spacecraft is equipped with a full set of sensors and actuators including a camera, GNSS receiver, star tracker, reaction wheels, high speed X-band and S-band communication, laser receiver, software defined radio receiver, and a processor with a reconfigurable FPGA at its heart. The OBC is a GOMSpace NanoMind A3200 with an AVR32 MCU set at a clock frequency of 32 MHz. Powerful processing capabilities are provided by its two Satellite Experimental Processing Platform (SEPP) payloads, on top of which the Ångström Linux distribution is installed. The SEPP payloads each carry an Altera Cyclone V SX SoC module with an ARM dual-core (Cortex-A9) that has an 800 MHz CPU clock and 1 GB DDR3 RAM. However, due to recent hardware degradation, the SEPP RAM allocation has been reduced to 512 MB.

Conceived to break the "has not flown, will not fly" cycle, OPS-SAT has spearheaded many firsts. One of these is a new paradigm for on-board software, introducing "apps" in space. These apps can be "easily developed, debugged, tested, deployed, and updated at any time without causing any major problem to the spacecraft" [1][2]. The operating system, processing power, and memory capacity make it possible for the SEPP to re-use and run open-source software as well as adopt agile methodologies to software engineering [3]. The spacecraft's powerful compute capability provides an enabling platform to operationalize AI for autonomous in-flight operations.

AI is of rising interest for space-segment applications despite limited technology demonstrators on-board flying spacecraft. Past missions have restricted AI to inferring models trained on the ground prior to being uplinked to a spacecraft. This paper presents three case studies on how the OPS-SAT FCT breaks away from this trend and uses on-board AI solutions for in-flight autonomy. Section 2 provides background on the motivation behind the experiments and summarizes past AI-capable missions. Section 3 elaborates on OPS-SAT's first use of in-flight AI in the SmartCam app for image classification with CNN model inferences using TensorFlow Lite. Section 4 describes how further classification is achieved with unsupervised learning by running a k-means algorithm for image clustering. Section 5 describes the OrbitAI app for supervised learning and dives into a use case that reads photodiode sensor data as a stream of training data for online machine learning to train an FDIR model. Section 6 provides a summary that concludes with an outlook on future missions.

2. BACKGROUND

A spacecraft's processing power is bottlenecked by the limited CPU and RAM of its On-Board Computers (OBC). This has constrained opportunities to leverage AI for in-flight applications. The rapid development of powerful processors as readily available commercial off-the-shelf OBCs or payloads is changing the landscape of the spacecraft computing environment and creating new opportunities for the space segment to develop and deploy AI platforms. In a 2011 editorial, McGovern et al. issued a call to action to "(1) actively develop new machine learning concepts and methods that can meet the unique challenges of the space environment; (2) identify novel space applications where machine learning can significantly increase capabilities, robustness, and/or efficiency; and (3) develop appropriate evaluation and validation strategies to establish confidence in the remote operation of these methods in a mission-critical setting" [4]. Since then, only a few spacecraft missions have flown AI technology demonstrators, all of which have restricted their experiments to model inference.

Heritage

NASA's Earth Observing-1 (EO-1) mission, launched on November 21, 2000, operationalized classifying cryosphere features and detecting tiny traces of sulfur deposition on glaciers with Support Vector Machines (SVM) as well as cloud detection with Random Decision Forest (RDF) and Bayesian Thresholding (BT). Models were trained on the ground and used for inference on-board a Mongoose M5 processor that ran at 12 MHz (for ~8 MIPS) with 128 MB RAM [5].

The Intelligent Payload Experiment (IPEX), a 1U CubeSat launched by NASA on December 6, 2013, demonstrated ML applications on a platform that broke away from traditional computational constraints. It carried two processors that each ran a Linux operating system. To support the primary flight software, it was equipped with a 400 MHz Atmel ARM9 CPU with 128 MB RAM and 512 MB flash memory. As a payload, it carried a Gumstix Earth Storm computer-on-module which included an 800 MHz ARM CPU with 512 MB RAM and 512 MB NAND flash [6]. SVM and RDF classification techniques were demonstrated on-board IPEX for image classification. However, as was the case with EO-1, the ML models used on IPEX were trained on the ground prior to launch.

ESA trained the PhiSat image classifier for cloud detection on one of two federated 6U FSSCat CubeSats that were launched on September 3, 2020. Simulated data based on Sentinel-2 satellite observations were used for pre-launch training of a model that runs on the spacecraft's Myriad 2 Vision Processing Unit (VPU) [7]. The processor provides a nominal 600 MHz operating performance [8].
FDIR model. Section 6 provides a summary that concludes
with an outlook on future missions.

3. IMAGE CLASSIFICATION

The SmartCam software on the OPS-SAT spacecraft is the first use of AI by ESA for autonomous planning and scheduling on-board a flying mission. It was originally developed to optimize downlink bandwidth utilization by autonomously discarding bad images acquired from the spacecraft's camera [9]. The software has since developed geospatial capabilities to autonomously capture pictures when the spacecraft is above areas of interest, thus eliminating the need for operators to plan and schedule image acquisition operations. The SmartCam app is presented in this section as an experiment to make space operations "openable" by means of crowdsourcing image classification problems. This is made possible with the experiment's scalable image classification pipeline designed to ingest ML models trained by third-party contributors. CNN models can be uplinked to the spacecraft as tflite files and chained into an ML inference pipeline that sequentially classifies and subclassifies acquired pictures. Branching rules are configurable based on each model's inference output. Image classification is a common ML use case and a wealth of thumbnails downlinked from the spacecraft is available to experimenters as training data. The application runs an image classifier developed with TensorFlow Lite, a framework that "enables low-latency inference of on-device machine learning models with a small binary size and fast performance" [10]. To the authors' knowledge, this is the first instance in which the TensorFlow framework is used on-board a spacecraft. The first on-board TensorFlow Lite inference occurred on November 8, 2020.

The SmartCam app operationalizes on-board ML inference to provide daily support for OPS-SAT operators. Beyond this, and in line with the mission's concept, the platform seeks to extend its capability to experimenters so that space AI can be made easily accessible to industry, institutions, and individuals. TensorFlow is a powerful and versatile framework that has spearheaded countless innovations in terrestrial applications of AI by enabling rapid prototyping with easy modeling and intuitive high-level APIs. Its availability on-board a spacecraft sets the groundwork to introduce similar dynamism for space applications.

Space segment AI is still in its nascent stages and stands to benefit from developments in other scientific domains. AI's interaction with other fields of science is examined by Kusters et al. [11] in which the importance of interdisciplinarity is emphasized:

"AI systems have complex life cycles, including data acquisition, training, testing, and deployment, ultimately demanding an interdisciplinary approach to audit and evaluate the quality and safety of these AI products or services."

Deploying an industry standard framework – with strong industry heritage – on-board a spacecraft broadens its accessibility to AI communities established outside of the space sector. In this regard, OPS-SAT serves as a platform for synergistic development that de-risks and accelerates adopting AI in future missions. Over 150 experiments have registered to run on OPS-SAT in response to an open call, and the SmartCam app builds on top of this successful open innovation method to attract AI experimenters through crowdsourcing. This format also addresses latent skepticism of using open methods as an added value to spacecraft operations.

Manning et al. [12] demonstrated and benchmarked running TensorFlow Lite ML model inference on a prohibitive space-grade embedded platform and in doing so presented a cursory overview of AI concepts, computing challenges, and benefits for space applications. As such, a background on these subjects is not provided in this section. Focus is instead placed on applying these AI concepts on OPS-SAT as implemented for autonomous image classification with the SmartCam app.

Image Classification

Figure 1 shows example thumbnails classified as either "Earth", "Edge", or "Bad". Mindful of the spacecraft's data budget, an image classification model was trained to autonomously discard "Bad" images from being downlinked. The toolkit developed to train, validate, and test the image classifier is available and documented in [13]. It is based on transfer learning and uses the make_image_classifier tool included in the TensorFlow Hub library.

Figure 1. Sample (a) Earth, (b) Edge, and (c) Bad thumbnail images acquired by OPS-SAT's on-board camera. (a) is white-balanced, (b) and (c) are unprocessed. Credit: ESA.

Transfer Learning

A dataset of several thousand downlinked thumbnails is insufficient as CNN training data. Transfer learning consists of building on top of a generic model to reduce the amount of training inputs required. Several template models with learned features that represent the visual world are available to fine-tune into specific image classifiers. The generic model used to train the classifier whose predictions are shown in Figure 1 is the image feature vector model with the MobileNet V2 CNN architecture trained on the ImageNet image dataset [14][15].

TensorFlow Hub

TensorFlow Hub is a repository of trained and re-usable ML models ready to bootstrap into custom models deployable anywhere. The make_image_classifier tool is used to train image classifier models [16].
“Earth”, “Edge”, and “Bad”). Obtaining more robust models may also be used as a quality filter to discard correct but low
may involve customized training, in-depth model analysis, confidence predictions which result from low quality images.
fine-tuning, and model optimization techniques that are
Table 2. Confusion matrix.
beyond the scope of this paper.
Predicted**
Training Edge Earth Bad

Actual*
All image thumbnails downlinked from the spacecraft are Edge 29 1 2
hosted in the OPS-SAT Community Platform [17]. Table 1 Earth 0 128 13
disaggregates the 4,705 thumbnails used to train the model Bad 3 9 994
into Training (60%), Validation (15%), and Testing (25%) * Expected labels included in the test data set.
sets labeled as either “Edge” (2.6%), “Earth” (12%), or ** Labels predicted by the image classifier.
“Bad” (85.4%). Inconsistent attitude control during OPS-
SAT’s commissioning phase led to an unbalanced dataset
Table 3. Classification metrics.
with the on-board camera capturing a disproportionate
number of “Bad” images. Edge Earth Bad
Table 1. Distribution of thumbnail sets Edge (2.6%), Precision 0.906 0.928 0.985
Earth (12%), and Bad (85.4%) to train the image Sensitivity* 0.906 0.908 0.988
classification model. Specificity** 0.997 0.990 0.913
F1-Score 0.906 0.918 0.987
Training Validation Testing Total * The true positive rate of a label.
(60%) (15%) (25%) (100%) ** The true negative rate of a label.
Edge 73 18 32 123
Earth 339 85 141 565 Figure 2 show the impact that different prediction confidence
Bad 2,409 602 1,006 4,017 threshold values have on the number test images that are
Total 2,821 705 1,179 4,705 discarded for not surpassing the threshold. The higher the
threshold the greater the number of images is discarded as not
Each training step samples a batch size of 32 and the epoch meeting a high enough level of prediction confidence from
is 10 i.e., the number of training iterations over the entire the classifier. The dotted vertical line represents the threshold
training set. of 48% that maximizes both discarding incorrectly classified
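For illustration, the following is a minimal sketch of an equivalent transfer-learning setup written directly against the TensorFlow Keras API rather than the make_image_classifier command-line tool, using the MobileNet V2 feature vector module, the batch size of 32, and the 10 epochs described above. The directory layout and output file name are assumptions, not the actual configuration of the toolkit documented in [13].

```python
import tensorflow as tf
import tensorflow_hub as hub

# Thumbnails sorted into one sub-directory per label: thumbnails/Earth, thumbnails/Edge, thumbnails/Bad.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "thumbnails/", image_size=(224, 224), batch_size=32, label_mode="categorical")

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),  # MobileNet V2 expects pixels in [0, 1]
    hub.KerasLayer(  # frozen MobileNet V2 feature vectors pre-trained on ImageNet [14][15]
        "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
        trainable=False),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3, activation="softmax"),  # Earth, Edge, Bad
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)

# Convert to a .tflite file small enough to be uplinked over the S-band link.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("smartcam_model.tflite", "wb") as f:  # hypothetical file name
    f.write(tflite_model)
```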
The resulting model's label prediction performance is summarized with a confusion matrix in Table 2 and the classification metrics in Table 3. The balanced accuracies are approximately 95% for each label.

Incorrect classifications against the test data set occur with predictions of less than 48% confidence. This confidence threshold is applied to flag when misclassification is statistically probable. The threshold is carefully selected to minimize misinterpreting correct classifications with low prediction confidences as misclassifications. The threshold may also be used as a quality filter to discard correct but low confidence predictions which result from low quality images.

Table 2. Confusion matrix.

                      Predicted**
Actual*        Edge    Earth     Bad
Edge             29        1       2
Earth             0      128      13
Bad               3        9     994

* Expected labels included in the test data set.
** Labels predicted by the image classifier.

Table 3. Classification metrics.

                 Edge    Earth     Bad
Precision       0.906    0.928   0.985
Sensitivity*    0.906    0.908   0.988
Specificity**   0.997    0.990   0.913
F1-Score        0.906    0.918   0.987

* The true positive rate of a label.
** The true negative rate of a label.

Figure 2 shows the impact that different prediction confidence threshold values have on the number of test images that are discarded for not surpassing the threshold. The higher the threshold, the greater the number of images discarded as not meeting a high enough level of prediction confidence from the classifier. The dotted vertical line represents the threshold of 48% that maximizes both discarding incorrectly classified images and preserving those that are correctly classified.

Figure 2. Number of images with prediction confidences below a given threshold. The y-axis is on a logarithmic scale.

The image classifier trained with TensorFlow's out-of-the-box training tools results in a robust model with consistent prediction accuracy across all three labels. The disk footprint is 8.95 MB for the TF Lite model file and 5.34 MB for the model inference binary program files implemented with the TensorFlow Lite Inference C API. The Python code that interfaces with the on-board camera and the image classifier is 62 KB.
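As a rough illustration of the classify-and-filter step, the sketch below loads a TF Lite model and applies the 48% confidence threshold to its softmax output. The on-board implementation uses the TensorFlow Lite Inference C API with Python glue code; this stand-alone Python version, along with the model file name and label order, is an assumption for illustration only.

```python
import numpy as np
import tensorflow as tf

LABELS = ["Earth", "Edge", "Bad"]       # assumed label order of the trained model
CONFIDENCE_THRESHOLD = 0.48             # predictions below 48% are flagged as unreliable

interpreter = tf.lite.Interpreter(model_path="smartcam_model.tflite")  # hypothetical file name
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

def classify(image):
    """Classify one 224x224x3 float32 image; return (label, confidence) or (None, confidence)."""
    interpreter.set_tensor(input_index, np.expand_dims(image, axis=0).astype(np.float32))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_index)[0]
    label, confidence = LABELS[int(np.argmax(scores))], float(np.max(scores))
    if confidence < CONFIDENCE_THRESHOLD:
        return None, confidence          # probable misclassification or low quality image
    return label, confidence
```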
Spatial Awareness

The SmartCam app uses the GEOS Geometry Engine Open Source [18] for geospatial awareness so that image acquisition can be autonomously triggered when the spacecraft is above pre-defined areas of interest. These locations are described as map polygons and/or multi-polygons in a GeoJSON file that can be updated to collect geo-targeted pictures for future models with custom training data needs (e.g., images of ice shelves or rainforest rivers).
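The area-of-interest check can be pictured as a point-in-polygon test against the GeoJSON geometries. The sketch below uses shapely, a Python binding to the same GEOS library; the on-board code calls GEOS directly, so the file name and helper function shown here are illustrative assumptions.

```python
import json
from shapely.geometry import Point, shape

# Areas of interest described as polygons and/or multi-polygons in a GeoJSON file.
with open("areas_of_interest.geojson") as f:   # hypothetical file name
    areas = [shape(feature["geometry"]) for feature in json.load(f)["features"]]

def above_area_of_interest(lon_deg, lat_deg):
    """True when the sub-satellite point falls inside any area of interest."""
    point = Point(lon_deg, lat_deg)            # GeoJSON coordinate order is (longitude, latitude)
    return any(area.contains(point) for area in areas)

# The app would trigger an image acquisition whenever this returns True during its program loop.
```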

Classification Pipeline

Given the S-band uplink rate of up to 256 kbit/s, operators can load a new model on-board the spacecraft within a few passes over the ground station. As new thumbnails are downlinked over the course of the mission, the model can be retrained on the ground to either replace an older, less performant version on-board the spacecraft or to extend its classification capabilities with additional labels. However, training a monolithic model may be ill-suited for high precision classification. Considering this, the SmartCam app can run multiple models in a sequence to support hyper-specialized inferences across a classification and subclassification pipeline. This is illustrated in Figure 3 where an image inferred as "Bad" by Model A is an input for Model C to determine the general pointing direction of the camera when the "Bad" picture was acquired. If Model C cannot determine whether the direction is "Nadir" or "Space" then it settles for the generic parent label of "Bad".

Figure 3. Simple classification pipeline. Red arrows represent image inputs and black arrows are inferences.

Model C was experimented with on the ground as a means for the SmartCam app to be used as a tool to investigate target pointing attitude control problems. Training and performance metrics obtained for this model are available in Appendix A. As more diverse types of "Bad" thumbnails are collected, the model will be further trained to infer more fine-tuned pointing information (e.g., nadir bottom left, nadir bottom right, nadir left, nadir right...). This demonstrates the flexibility and scalability of the classification pipeline to enable using ML to support operational needs that did not drive the original design of the SmartCam app.

ML for image classification is a broad discipline to which AI communities from different fields can contribute their own expertise in precision classification. For instance, the Sentinel-2 mission acquires optical imagery over land and coastal waters which can be used as training data by Earth Observation experts to train Model B in Figure 3 to subclassify "Earth" images, as in Figure 4, into either "Land", "Coast", or "Sea".

Figure 4. Sample (a) Land, (b) Coast, and (c) Sea thumbnail images acquired by OPS-SAT's on-board camera. (a) and (b) are white-balanced, (c) is unprocessed. Credit: ESA.

Within OPS-SAT's flight control team, Models A and C were trained by different spacecraft operators, using different training data sets, to satisfy separate operational requirements.

Crowdsourcing

Two basic arguments are taken from Szajnfarber et al. [19] to support extracting better solutions from the crowd than from internal experts: right-tail sampling and unearthing distant expertise. Right-tail sampling is illustrated in Figure 5 where the best open solution from the crowd is greater than the average closed solution from the experts. However, the best solution from the experts is still greater than the best solution from the crowd and the crux of the problem rests in identifying the actual best available solution. Unearthing distant expertise refers to introducing novel solutions that may be contributed by outsiders as they problem-solve with different disciplinary perspectives outside the context in which internal experts are entrenched.

Figure 5. Right tail sampling for better contributions from the crowd. Reproduced from Szajnfarber et al. [19].

A potential classification pipeline is presented in Figure 6 to illustrate how crowdsourcing can contribute to refining the SmartCam app's image classification capabilities.
Figure 6. Image classification pipeline extended with crowdsourced models.

As previously mentioned, Models A and C were developed separately within the flight control team by internal and contractor operators. Models B, D, E, and F are examples of potential contributions that can be crowdsourced to AI enthusiasts across different fields within or outside the space sector. Measurable performance constraints are required to prioritize inference speed and accuracy, on which the scalability of the pipeline depends in order to avoid bottlenecks for rapid-fire image acquisition operations. By way of example, the "Land" image in Figure 4(a) can be subclassified as "Land_Agri" to label it as farmland. Rather than approaching this classification with a single model for direct inference from "Earth" to "Land_Agri", it is decomposed into solvable subproblems across three potential crowdsourced models along the "Earth" → "Cloudy_Not" → "Land" → "Land_Agri" inference graph path, as sketched below. The proposed models and labels are purely illustrative; the "openability" of the system allows for user-defined labels to be inferred on an as-needed basis for custom image acquisition campaigns.
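The chaining can be pictured as a table of branching rules that maps each model's inferred label to the next model to run. The structure below is a hypothetical Python sketch of that idea; the model file names, labels, and run_model helper are illustrative and do not reflect SmartCam's actual configuration format.

```python
# Hypothetical branching rules: for a given model, which model (if any) runs next for each label.
PIPELINE = {
    "model_a.tflite": {"Earth": "model_b.tflite", "Bad": "model_c.tflite"},
    "model_b.tflite": {"Cloudy_Not": "model_e.tflite"},
    "model_e.tflite": {"Land": "model_f.tflite"},
}

def classify_through_pipeline(image, run_model, first_model="model_a.tflite"):
    """Chain model inferences until no branching rule applies to the predicted label.

    run_model(model_file, image) is assumed to return a single predicted label,
    e.g. by wrapping a TensorFlow Lite interpreter as shown earlier.
    """
    model, label = first_model, None
    while model is not None:
        label = run_model(model, image)
        model = PIPELINE.get(model, {}).get(label)  # follow the branching rule, if any
    return label
```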
The SmartCam's approach to crowdsourcing exemplifies the productive path described by Szajnfarber et al. [19] for extending the applicability of open methods so that a full system problem is decomposed into isolated "openable" subproblems. Models B, E, and F are self-contained and can be trained in isolation from each other by different AI communities with at most performance constraints and a common test data set for integration and quality assurance purposes. Crowdsourcing ML models to meet decomposed image classification needs is set to be trialed through a competition hosted on the Kelvins platform maintained by ESA's Advanced Concepts Team [20]. The submissions will serve operational needs as well as a knowledge transfer mechanism to develop new classification capabilities in OPS-SAT.

Conclusion

OPS-SAT has spearheaded the operational use of the TensorFlow framework for autonomous decision-making on a flying spacecraft. The SmartCam app significantly reduces the involvement of ground operators with respect to image acquisition scheduling and downlinking operations. Although the use of TensorFlow is made possible by the powerful processing capabilities of its SEPP payload, similar configurations can be implemented elsewhere as powerful on-board computers designed to mitigate radiation risks are becoming more available as commercial off-the-shelf products. Furthermore, radiation hardened FPGA devices can serve as processing alternatives for on-chip ML inferences on small resource constrained processors [21]. Experimental results by Pitsis et al. [22] demonstrate competitive throughput, latency, and energy consumption with FPGAs over typical GPU technology. Space-grade compute capabilities allow ML frameworks developed for earthbound embedded systems to be transferred to in-orbit space applications. The TensorFlow inference API is not limited to only processing images as inputs. Any type of data collected on-board the spacecraft can be consumed by the API, thus significantly widening ML applications for spacecraft autonomy such as in guidance, navigation, and control (GNC) or novelty detection in processing science data. Beyond ML model inference, access to housekeeping and sensor data has proven to be a treasure trove of inputs for the purpose of in-orbit training and is presented in Section 5 with the OrbitAI experiment. This sets a practical groundwork towards developing intelligent systems that adapt to unexpected situations or new environments brought about by mission extensions.

The potentials of ML are best understood by established AI communities already familiar with navigating its vast technological landscape. The flexible and scalable inference capabilities developed on-board OPS-SAT are designed to stimulate knowledge and know-how transfer from mature terrestrial applications into the space sector. A productive path towards crowdsourcing is developed by decomposing a complex system into "openable" subproblems that will form the basis of a competition to crowdsource training ML models for enhanced and fine-tuned spacecraft autonomy.

4. IMAGE CLUSTERING

After completing the classification pipeline, further classification can be achieved by the SmartCam app with k-means image clustering. To the authors' knowledge, this feature is a first with respect to executing unsupervised learning on-board a flying spacecraft. The first on-board run of the training algorithm occurred on June 26, 2021. The image clustering logic is implemented as a stand-alone C++ application that is invoked by the SmartCam app when image clustering is enabled [23]. The k-means algorithm itself is a generic open-source C++ solution forked from GitHub [24]. The image processing code used to read, resize, and write the image files, pixel by pixel, is a single-file public domain C library that is also forked from GitHub [25].

Process and Results

Image clustering in the SmartCam app is a 3-step process: data collection, training, and prediction. The app acquires multiple images within its program loop which are then decoded so that their pixel values are written into a CSV file. The CSV file collects these values as training data until a pre-configured threshold number of images has been reached. The number of pixels to process and their value range are reduced by resizing, grayscaling, and pixel normalizing the images in a memory buffer prior to writing the pixel values to the CSV file. For the purposes of this experiment, grayscaled images resized to 20x20 pixels produce almost identical cluster groupings when compared with using the unaltered RGB thumbnail images as training data inputs. The former approach has the clear advantage of a much faster execution time and a lower resource utilization footprint. Once the threshold number of training images has been collected, the cluster centroids are calculated given a pre-configured k-value. The cluster centroid values are written to a CSV file which is read by the k-means algorithm to predict which cluster an acquired image will be assigned to.

Running k-means on-board OPS-SAT with k=4 and a training data threshold of at least 100 random images has resulted in an operationally valuable autonomous cloud filtering mechanism despite no planning or systematic approach to collecting training data. The resulting 4 clusters tend to group images as either (a) not cloudy, (b) cloud, sea, and/or ice, (c) partially cloudy, or (d) completely cloudy and overexposed. Example images for each of these clusters are shown in Figure 7.

Figure 7. Sample clustered images downlinked from the spacecraft for k-means image clustering with k=4. (a), (b), and (c) are white-balanced and (d) is unprocessed. Credit: ESA.
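The on-board implementation is split across a C++ k-means solver and a C image library [23][24][25], with CSV files as the intermediate storage. The following is a minimal NumPy sketch of the same three steps written in Python for illustration; the CSV intermediates are omitted, and the initialization, iteration count, and convergence criteria are assumptions rather than the flight configuration.

```python
import numpy as np
from PIL import Image

def to_training_row(image_path, size=(20, 20)):
    """Grayscale, resize to 20x20, and normalize one thumbnail into a flat pixel vector."""
    pixels = np.asarray(Image.open(image_path).convert("L").resize(size), dtype=np.float32)
    return (pixels / 255.0).flatten()

def train_centroids(rows, k=4, iterations=50, seed=1):
    """Plain k-means over the collected training rows; returns the k cluster centroids."""
    rng = np.random.default_rng(seed)
    centroids = rows[rng.choice(len(rows), size=k, replace=False)]
    for _ in range(iterations):
        assignment = np.argmin(((rows[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(assignment == j):
                centroids[j] = rows[assignment == j].mean(axis=0)
    return centroids

def predict_cluster(image_path, centroids):
    """Assign a newly acquired image to the nearest centroid."""
    row = to_training_row(image_path)
    return int(np.argmin(((centroids - row) ** 2).sum(axis=1)))
```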
New cluster centroids can be recalculated using new batches of collected training images at any moment via the SmartCam app's configuration file. This introduces the potential for interesting use cases to be explored, such as planning targeted image acquisition campaigns to train specific types of clustering models. For instance, perhaps a clustering model could be trained with images of ice shelves so that they may be autonomously detected and categorized based on their formation, fragmentation, and distribution.

Image Segmentation

Although not deployed to the spacecraft, noteworthy initial results are obtained when using the same k-means algorithm for image segmentation (feature extraction). This section presents results from running and evaluating this experiment on the flatsat. Processing loads and execution times were not measured but were observed to be low when operating on the thumbnail images generated by the SmartCam app. Figure 8 reveals promising results obtained for cloud detection with k=2. Furthermore, the cloud coverage percentage can easily be obtained by counting the number of pixels in both extracted features and calculating the ratio, e.g., ~59% for (a) and ~2.7% for (d) in Figure 8. A potential operational value on-board the spacecraft would be to autonomously prioritize image downlinking by ranking levels of cloudiness or discarding excessively cloudy images when comparing their cloud coverage percentage against a threshold.

Flatsat experiments on the Engineering Model (EM) for larger k-values, k=4 and k=7, suggest that image segmentation could be used on-board the spacecraft to evaluate sea level depths or cloud thickness, as seen in Figure 9 and Figure 10 respectively. Future work in this area could focus on parameter refinements or experiments with variant training algorithms to achieve more accurate delineations.

Figure 8. Results from the flatsat for k-means image segmentation (feature extraction), k=2. Credit: ESA.
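As a worked illustration of the coverage ratio mentioned above, the sketch below assigns each pixel of a grayscale thumbnail to the nearest of two segment centroids and reports the fraction of pixels in the segment assumed to represent clouds. Segmenting on grayscale intensity and treating the brighter centroid as the cloud segment are simplifying assumptions, not a description of the flatsat experiment.

```python
import numpy as np

def cloud_coverage(gray_image, centroids):
    """Fraction of pixels assigned to the brighter of two k=2 segment centroids."""
    pixels = np.asarray(gray_image, dtype=np.float32).flatten()
    centroids = np.asarray(centroids, dtype=np.float32)
    assignment = np.argmin(np.abs(pixels[:, None] - centroids[None, :]), axis=1)
    cloud_segment = int(np.argmax(centroids))       # assume the brighter segment is cloud
    return float(np.mean(assignment == cloud_segment))

# A ratio such as the ~59% or ~2.7% reported above for Figure 8 could then be compared
# against a configured threshold to discard excessively cloudy images before downlinking.
```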
Figure 9. Sample results from the flatsat for k-means sea level depth detection. Credit: ESA.

Figure 10. Sample results from the flatsat for k-means cloud thickness detection. Credit: ESA.

Conclusion

The SmartCam app's use of k-means has been limited to experimental runs without optimization, refinements, or any sort of systematic training data collection strategy. Despite this, the initial results are promising with regards to the operational value of on-board unsupervised learning.

5. ONBOARD MACHINE LEARNING

This section presents OrbitAI, an experiment on the OPS-SAT spacecraft that demonstrates how on-board ML can be achieved given the availability of high processing power and high-level access to a structured stream of sensor and housekeeping data. Past missions have restricted their experience with AI to inferring models that were trained on the ground prior to being uplinked to a spacecraft. The OrbitAI experiment pushes the envelope by breaking away from this trend and shifting ML training from a ground activity to an autonomous in-flight operation.

To the authors' knowledge, OrbitAI is the first time that AI models are trained on-board a flying spacecraft. The first supervised on-board training occurred on April 16, 2021. Multiple online machine learning algorithms are used to train several models emulating the behavior of an existing FDIR subsystem that protects the spacecraft's camera lens against exposure to direct sunlight. Emulating a traditional FDIR algorithm allows supervised training of models that can then be evaluated against expected outputs to verify and validate the methods used.

Models that are trained on the ground to be uplinked to a spacecraft for on-board inference are limited by the lack of training data that is representative of in-space behavior. This gap can be addressed by generating data on the ground to simulate the space environment, albeit non-exhaustively and inaccurately. Alternatively, data can be collected and downlinked from a spacecraft, in which case storage and bandwidth constraints limit the diversity of scenarios that can be represented by the amount of data that can be persisted. Real-time access to data is thus a fundamental prerequisite to on-board training. Training robust models requires a wealth of data: housekeeping and sensor measurements are a treasure trove of training inputs waiting to be unlocked.

The advantages of AI-capable spacecraft have been convincingly demonstrated in past missions where autonomous on-board decision-making has supported ground operators in manners that significantly reduced operational costs and effort, all while increasing operations and science uptime [5][6][26]. Targeted on-board AI applications have succeeded in automating scheduling, planning, classifying acquired payload data, and detecting events of scientific interest. The OrbitAI experiment seeks to operationalize in-flight training, particularly within higher dimension input spaces, to allow more complex models to be trained given a continuous stream of available on-board data. This introduces new areas in which spacecraft AI can be applied by means of training updatable models that can autonomously self-adjust to complement or even optimize traditional methods of developing and deploying complex systems such as FDIR or guidance, navigation, and control mechanisms. This sets a precedent towards developing intelligent systems that adapt to unexpected situations or new environments, such as with mission extensions.

The NanoSat MO Framework

The OPS-SAT mission introduced the "apps" software paradigm in space by developing and operationalizing the NanoSat MO Framework (NMF). The NMF is a software framework for nanosatellites built on top of the CCSDS Mission Operations (MO) services standard. This standard "facilitates not only the monitoring and control of the nanosatellite software applications, but also the interaction with the platform" [1]. Figure 11 illustrates the NMF ecosystem with respect to how it interacts with the space and ground segments.

Figure 11. NMF interactions on-board and to the ground.
NMF apps are written in Java using the Software Development Kit (SDK). The NMF Supervisor grants access to a set of platform services that interface with the spacecraft's payload instruments. These services allow experimenters' apps to interact with the spacecraft via a high-level API that also provides access to structured data which abstracts the complexities of raw data collected in a space environment. Custom actions and parameters can be exposed, allowing the ground segment to monitor and interact with the app.

The OrbitAI app is a "hybrid" NMF app in that it consists of two applications acting in concert: an NMF app that polls data and feeds it as training inputs to an ML socket server, hereinafter referred to as the ML Server. The app is developed using the NMF Java library and the ML Server in C++ for a low computation and memory footprint. The app spearheaded developing new NMF functionality to include access to OBSW data pool parameters, thus significantly extending the range of data accessible to experimenters. This was achieved in collaboration with the FCT and NMF developers during the development of OrbitAI.

Providing real-time access to OBSW data pool parameters poses security, performance, and technical challenges:

(a) The data pool is not maintained in the SEPP but in the OBC where the OBSW lives. Opening a direct and unsupervised communication channel to access the OBC introduces new risks to the mission.

(b) The OBC is far less powerful than the SEPP and it is not conceivable to have multiple apps querying it at the same time without restrictions.

(c) There is limited Controller Area Network (CAN) bandwidth, and the Maximum Transmission Unit (MTU) is 256 bytes when sending messages to the OBC, with no fragmentation protocols. NMF apps run on the SEPP and communication between the SEPP and the OBC is via the CAN bus, shown in Figure 11.

(d) Rather than having a data pool parameter service, the OBSW has a service that only allows fetching a collection of parameters within pre-defined groups called aggregations.

Data Pool Parameters as Training Data

An On-Board Software (OBSW) data pool of thousands of parameters is constantly updated in the OBC. The diversity of this data ranges from power outputs to raw sun sensor measurements, and they are an untapped resource as training inputs for on-board ML. ESA makes this data accessible to the ground segment but only after they are downlinked and loaded on the WebMUST web platform [27]. Access to these parameters is further constrained in that only a handful are available for bounded time periods during which the FCT enables their collection via a stream of telemetry packets. The OrbitAI project has implemented the necessary updates to the NMF so that OPS-SAT apps on-board the spacecraft have real-time access to these parameters. The solution was to expose the OBSW data pool parameters through a proxy mechanism in the NMF Supervisor. Along with platform services, the Supervisor exposes its set of MO monitoring and control services like the parameter service. NMF apps can consume the Supervisor's parameters without directly interfacing to the OBC. All the logic is handled by the Supervisor which consumes the OBSW aggregation service to fetch requested parameters. The data pipeline is shown in Figure 12.

Over 10,800 different parameters have thus been made accessible to NMF apps. With this mechanism in place all data pool parameters become potential ML training data inputs. However, there is a theoretical limit of querying up to 800 parameters per second when a 10 second data cache is enabled. Disabling the cache reduces the query rate limit to 80 parameters per second.

Figure 12. OrbitAI app deployment on-board OPS-SAT.

The app's validation runs on the EM were benchmarked. The results are presented in Appendix C. The two CPU plots correspond to the SEPP's dual-core processor. The load plots give an overall indication of the SEPP usage (CPU, memory, disk, net I/O), and the memory plots show the experiment's RAM usage. The noticeable spikes represent the times in which the experiment ran, approximately one minute for each run. The resource consumption clearly shows that the app has a light processing footprint. The ML Server has a small memory footprint and the architectural design of the training mechanism results in low CPU usage relative to the platform.

Training a FDIR Model

OPS-SAT's payload suite includes 3 optical devices: star tracker, camera, and optical receiver. Their lenses are sensitive to sunlight and thus require a protection mechanism for situations in which they are pointed towards the sun. The spacecraft's attitude control system cannot always guarantee accurate pointing. For instance, incidents in which the camera is pointing towards the sun instead of nadir during image acquisition operations can occur. These events may contribute to degrading the camera's sensors. A protection mechanism exists as part of the FDIR software implemented in the spacecraft's OBSW running in the OBC. The algorithm uses the traditional method of monitoring measured values against predefined limits.
It is this mechanism that OrbitAI seeks to emulate in order to evaluate on-board training and validate the selected training methods. The FDIR algorithm outputs a camera on/off state that is used as the target label during supervised training.

The spacecraft is equipped with 6 photodiodes that measure the sun elevation angles on each surface. These measurements range from 0° to 90°. The former indicates no sunlight on the surface and the latter direct sunlight with the sun positioned at the zenith with respect to the surface from which the photodiode measurement is taken. The photodiode Ids as well as their location on the spacecraft and their data pool parameter names are listed in Table 4. Henceforth, references to the photodiode Ids, or simply to "PD", will refer to the sun elevation angle measurement, with PD6 being that of the surface on which the camera is located.

Table 4. Photodiode Ids, locations, and parameter names.

Photodiode Id   OPS-SAT body frame surface   Data pool parameter name
PD1             +X                           CADC0884
PD2             -X                           CADC0886
PD3             +Y                           CADC0887
PD4             -Y                           CADC0890
PD5             +Z                           CADC0892
PD6             -Z*                          CADC0894

* The surface on which the camera is located (Nadir).

The OBSW's FDIR algorithm powers off an optical device if the measured sun elevation angle on the surface with the optical device exceeds a certain threshold. The thresholds consider the Field-of-View (FOV) of the optical devices with some safety margins. From Equation 1, a 60° camera elevation threshold, noted as CET, is determined by considering a FOV of 21° to which a 9° margin is added:

CET = Zenith − (FOV + margin) = 90° − (21° + 9°) = 60°    (1)

The flow chart in Appendix B details the FDIR algorithm used to protect the camera from exposure to sunlight. It is periodically invoked from the OBSW's FDIR monitoring task. After exceeding the sun elevation angle threshold, a turn-off timer is triggered rather than immediately turning off the device. The timer gives the spacecraft the opportunity to resolve the situation itself through attitude and/or orbit position changes in case the problematic situation is only transient. The timer is stopped when the threshold is no longer exceeded. Once turned off, the device stays off unless explicitly tele-commanded to be turned back on.

Given the sun elevation angles as training data, a binary classification model can be trained with supervised learning to predict a camera on/off (+/−) label. Although using only PD6 as training input would in theory suffice, the experiment seeks to showcase ML training capabilities in n-dimensional input spaces rather than just a single dimension. Consequently, PD6 is transformed into higher dimensional spaces to generate more training inputs. This is illustrated in Figure 13: the 1-dimensional input space of x = PD6 in (a) is transformed to 2 dimensions in (b) with (x, y) = (PD6, PD6²), and to 3 dimensions in (c) with (x, y, z) = (PD6, PD6², PD6³). Training in a 5D input space is also experimented with, such that PD1 through PD5 are used as training data instead of PD6. This serves to experiment with training a redundant model in case of photodiode failure, i.e., using PD1 through PD5 as alternative inputs in case of PD6 failure.

For demonstration purposes, SVM is the preferred technique in that it is effective in high dimensional spaces, memory efficient, and works well when there is a clear separation between label groups, i.e., classes. Applying the transformations described in Figure 13 in a way that involves polynomial combinations introduces high computational costs.

In traditional SVMs, these transformations are avoided by applying "kernel tricks." However, with continuous real-time access to data, inputs are iteratively available in a sequential order, thus eliminating the need for data persistence and batch processing that would be constrained by the spacecraft's processing and storage capabilities. Transformations to higher dimensions can thus be calculated on the fly without the need for kernel tricks since the transformation functions can be applied to one input at a time rather than on a large batch of persisted data. Online machine learning algorithms are best suited for this type of real-time training as they do not require data persistence.

Figure 13. Training data in (a) 1D input space, (b) 2D input space, and (c) 3D input space. PD6 is used for the sun elevation angle measured by the photodiode for the spacecraft surface on which the camera is located.
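To make the on-the-fly expansion concrete, the following is a minimal Python sketch of how a single training example can be derived from one PD6 reading: the label emulates the FDIR outcome by comparing the angle against the 60° threshold of Equation (1), while the feature vector is the 3D expansion of Figure 13(c). The turn-off timer behavior and the angle units are simplifying assumptions for illustration only.

```python
import math

CET = math.radians(60.0)  # camera elevation threshold from Equation (1), assuming angles in radians

def training_example(pd6):
    """Build one (features, label) pair from a PD6 sun elevation angle.

    The label follows the camera on/off (+/-) target: OFF (-1) when the angle
    exceeds the threshold, ON (+1) otherwise. The timer applied by the real
    FDIR algorithm before switching the device off is omitted in this sketch.
    """
    features = (pd6, pd6 ** 2, pd6 ** 3)   # 1D -> 3D polynomial expansion, no kernel trick required
    label = -1.0 if pd6 > CET else +1.0
    return features, label
```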

A selected method is the Online Passive-Aggressive Algorithm (PA) [28] as it is a generalization of SVM for the purpose of online learning for binary classification. Its "construction can be viewed as finding a support vector machine based on a single example while replacing the norm constraint of SVM with a proximity constraint to the current classifier" [28]. All three variants of PA are evaluated, i.e., PA, PA-I, and PA-II. For the sake of providing comparative analysis, other online learning algorithms are also used to produce models trained with the same inputs. However, no evaluation was made to determine if transformation to higher dimension input spaces is a properly suited approach to producing n-dimensional training inputs for these alternative training techniques. These additional algorithms are: Adam [29]; Adagrad RDA [30]; AROW [31]; SCW [32]; and NHERD (with full diagonal covariance) [33]. Their implementations within OrbitAI consist of re-using open-source code taken from MochiMochi, an online machine learning library developed in C++, to which minor enhancements were made with respect to saving and loading the trained models [34]. The ability to save and load models makes it possible to split training across separate runs of the OrbitAI app. Training can be started during an orbit, paused, and resumed in later orbits on different days. Hyperparameter optimization is outside the scope of this experiment, thus the selected values are those used in the examples documented in the library's source code repository [34].

The online learning algorithms evaluated in this experiment trained models in the 1D, 2D, 3D, and 5D input spaces. All training input data and serialized models are archived in [35] and used to calculate classification metrics to evaluate model performances. On-ground training is purely for the purpose of evaluating different training algorithms through comparative analysis; these models are not uplinked to the spacecraft.

Evaluating Online Training Algorithms

On-ground training with generated data is performed to validate the choice of training techniques based on the obtained balanced accuracies. Furthermore, it serves to evaluate what can be expected from in-flight training. This exercise is restricted to the 1D, 2D, and 3D input spaces due to the significant effort required to generate valid combinations of 5 photodiode measurements to simulate 5D input space training scenarios. Results are shown in Table 5. Elevation angle values used as training data are grouped in a batch size of 88, ranging from 0.7 to 1.3 radians at 0.01 steps. Evaluated epochs are from 1 to 4. Balanced accuracies are calculated by comparing the models' predicted labels against expected labels for elevation angle inputs ranging from 0 to 1.57 radians at 0.01 steps, giving a batch size of 158.

Table 5. Balanced accuracies of predictions made by models trained on-ground.

Input space:     1D, x = PD                   2D, (x, y) = (PD, PD²)        3D, (x, y, z) = (PD, PD², PD³)
Epochs:       1      2      3      4       1      2      3      4       1      2      3      4
PA        0.500  0.500  0.500  0.500   0.500  0.500  0.500  0.500   0.500  0.500  0.500  0.500
PA-I      0.500  0.500  0.500  0.500   0.500  0.500  0.552  0.614   0.500  0.700  0.767  0.795
PA-II     0.500  0.500  0.500  0.500   0.500  0.500  0.500  0.500   0.500  0.500  0.500  0.500
Adam      0.500  0.500  0.500  0.500   0.500  0.500  0.500  0.500   0.500  0.500  0.500  0.500
RDA       0.500  0.500  0.500  0.500   0.500  0.500  0.500  0.500   0.500  0.500  0.500  0.529
AROW      0.500  0.500  0.500  0.500   0.500  0.662  0.733  0.757   0.681  0.833  0.914  0.943
SCW       0.505  0.505  0.505  0.505   0.505  0.505  0.681  0.781   0.505  0.690  0.752  0.786
NHERD     0.500  0.500  0.500  0.500   0.500  0.500  0.500  0.500   0.567  0.743  0.786  0.805

Poor predictions at ~50% balanced accuracy are observed for all models trained in the 1D feature space regardless of the number of training epochs. This supports resorting to applying transformation functions to move the input space into higher dimensions for the sake of training models that result in greater prediction balanced accuracies. However, the highest balanced accuracies were not obtained with the PA methods selected with the assumption that they would be best suited to benefit from dimension transformations. Better balanced accuracies are only observed as a function of the number of training epochs for PA-I, AROW, and SCW in the 2D and 3D feature spaces as well as for Adagrad RDA and NHERD in the 3D feature space. The highest balanced accuracies observed after 4 epochs are ~78% with SCW in the 2D input space and ~94% with AROW in the 3D input space. Figure 14 shows the increase in balanced accuracies as a function of the number of epochs for up to 10 epochs.

Figure 14. Balanced accuracy prediction improvement as a function of the number of training epochs.

The ground results indicate that, given diverse enough training input, models with over 90% balanced accuracy can be quickly obtained with SCW in the 2D and 3D input spaces and with AROW in the 3D input space.
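For reference, the sketch below transliterates the PA-I update rule from [28] into NumPy and mirrors the shape of the on-ground evaluation above: train on angles between 0.7 and 1.3 radians, then score balanced accuracy on the exhaustive 0 to 1.57 radian grid. It is an illustration of the procedure only; the flight experiment uses the C++ MochiMochi implementations [34], and the number this sketch prints is not a result reported in Table 5.

```python
import numpy as np

def pa1_update(w, x, y, C=1.0):
    """One Passive-Aggressive (PA-I) update for a binary label y in {-1, +1}."""
    loss = max(0.0, 1.0 - y * np.dot(w, x))   # hinge loss on the current example
    tau = min(C, loss / np.dot(x, x))         # step size capped by the aggressiveness parameter C
    return w + tau * y * x

def balanced_accuracy(y_true, y_pred):
    """Mean of the true positive rate and the true negative rate."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tpr = np.mean(y_pred[y_true == +1] == +1)
    tnr = np.mean(y_pred[y_true == -1] == -1)
    return (tpr + tnr) / 2.0

CET = np.radians(60.0)
to_3d = lambda a: np.array([a, a ** 2, a ** 3])   # 3D input space of Figure 13(c)
label = lambda a: -1 if a > CET else +1           # expected camera off/on label

w = np.zeros(3)
for epoch in range(4):
    for angle in np.arange(0.7, 1.3, 0.01):       # training angles between 0.7 and 1.3 rad
        w = pa1_update(w, to_3d(angle), label(angle))

grid = np.arange(0.0, 1.57, 0.01)                 # exhaustive evaluation grid
predictions = [+1 if np.dot(w, to_3d(a)) >= 0 else -1 for a in grid]
print(balanced_accuracy([label(a) for a in grid], predictions))
```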
In-Flight Training

The PA, PA-I, and PA-II algorithms were disabled due to serialization issues identified during experiment validation on the EM flatsat. This eliminated the risk of an unhandled error preventing the models from being loaded to resume training rather than being re-initialized at every run. This was deemed an acceptable loss given the classification metrics of the PA algorithms obtained in Table 5.

In-flight training was conducted with Adam, RDA, AROW, SCW, and NHERD in the 1D, 2D, 3D, and 5D input spaces, resulting in a total of 20 models. During the first app run the fastest, slowest, and average execution times of a model update were 1.2 μs, 966 μs, and 5.6 μs, respectively. All logs, training data, and serialized models were persisted and downlinked for on-ground analysis and are available in [35]. As a testament to the flexibility of the system, the downlinked serialized models proved to be a useful recovery resource when the on-board model files were corrupted due to the OrbitAI app being repurposed as a data collection tool to support an investigation unrelated to the app's training experiment. The latest downlinked models were uplinked back into the spacecraft, thus restoring the experiment to its most recent valid state from which training could resume as planned.

In-flight training occurred over the course of six runs of the OrbitAI app. The first two ran 850 iterations of training data requests over the course of approximately 71 minutes. Subsequent runs were reduced to 450 iterations over the course of 45 minutes to focus data collection during the sun-illuminated segments of the spacecraft's orbits. The first training run initialized the model files and updated them at every iteration as new training data was fetched. Subsequent runs further updated the model files.

The serialized models were downlinked and the balanced accuracies of their predictions calculated on the ground with an exhaustive sun elevation angle dataset ranging from 0 to 1.57 radians in steps of 0.01. Although this could have been evaluated on-board with the OrbitAI app's inference mode, it was not done in order to prioritize training the models given the limited amount of schedule slots available for the app to run. The results are presented in Table 6.

Table 6. Balanced accuracies of predictions made by models trained in-flight.

           1D       2D       3D
Adam     0.495    0.495    0.495
RDA      0.495    0.495    0.495
AROW     0.495    0.561    0.891
SCW      0.500    0.500    0.500
NHERD    0.495    0.495    0.665

A balanced accuracy of 89% is achieved with AROW when training in the 3D input space. Ranking second is the NHERD model at 67%, also trained in the 3D input space. These models significantly outperform the rest, as expected based on the ground simulations that were presented in Table 5. AROW delivers excellent results after only a few training runs. The model's confusion matrix is presented in Table 7 for inferences on the same exhaustive dataset used to calculate the balanced accuracies presented in Table 6. The model's sensitivity is 99%, specificity is 79%, precision is 90%, and F1 score is 95%.

Table 7. Confusion matrix for AROW 3D.

             Predicted*
Actual**    OFF     ON
OFF          43     11
ON            1    104

* Labels predicted by the model.
** Expected labels included in the test data set.

Other models only have a balanced accuracy of approximately 50% despite the 3,500 iterations of training data requests. The app logs reveal a lack of diversity in the training inputs that account for the cases in which the camera is exposed to sunlight. It is expected that continued training beyond the scope of the results presented in this paper will improve these models. The training inputs collected during the app's six runs were insufficient to evaluate the models trained in the 5D input space. Continued training will also occur beyond the scope of this paper so that the performance of these models may be studied.

Conclusion

Training on the ground relies on simulated or telemetry data that only represents a snapshot of how a spacecraft behaves in space. The obtained models are frozen to their pre-launch state, unable to adapt with new inputs as the mission progresses. In-flight training overcomes these weaknesses with the advantage of real-time access to data that is continuously updated and representative of the space environment.

The OrbitAI experiment is a proof-of-concept demonstrating how in-flight ML training is made possible by combining high processing power, easy access to OBSW data pool parameters, and the use of lightweight online machine learning algorithms that eschew the need to persist large amounts of training data. The following are suggested future work to iterate on top of this framework:

(a) Stress test querying data pool parameters.
(b) Evaluate training models for multi-label classification.
(c) Evaluate training complex models with higher dimension inputs and hyperparameter optimization.
(d) Experiment training with dynamic hyperparameters that change during different stages of the training.
(e) Link model predictions with autonomous decision-making.

(f) Evaluate data persistency to allow training with non-
online algorithms that use batch multi-epoch
training.
(g) Expand and integrate ML capabilities into the NMF
Java ecosystem by re-using existing open-source
general purpose ML libraries such as JSAT [36].
(h) Integrate unsupervised learning algorithms.
(i) Propose a standard for the use of on-board AI and
ML as part of Consultative Committee for Space
Data Systems (CCSDS) Mission Operations (MO)
Services [37].

On-board ML training on OPS-SAT significantly broadens


the field of research in AI for space applications. The ability
to train updatable models introduces flexible AI with a
system that can adapt when given new information that is
processed as on-the-fly training inputs. This is particularly
interesting for producing robust models in case the trained
input spaces contain unexpected data such as high noise or
new measurements inherent to extended mission parameters.
The models trained in this experiment only used 6 training
data inputs out of over 10,800 data pool parameters available
on OPS-SAT. With each parameter available as a potential
training input, on-board ML training at much higher
dimension input spaces can be explored to develop more
complex models to support operational and experimental
needs of future missions.
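
As a minimal sketch of such a loop, assuming a hypothetical get_parameter() accessor and made-up parameter names, the snippet below streams a feature vector each iteration and applies a passive-aggressive-style update in the spirit of [28]; the on-board OrbitAI app itself uses the MochiMochi C++ implementations [34] of the online algorithms.

import random                                      # stands in for live telemetry in this sketch

def get_parameter(name):                           # hypothetical data pool accessor
    return random.uniform(0.0, 1.0)

FEATURES = ["PD3_ELEVATION", "PD6_ELEVATION", "PD_FLAG"]   # parameter names are assumptions

def pa_update(w, x, y):
    # One passive-aggressive update: y in {-1, +1}, x a list of features.
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)
    tau = loss / (sum(xi * xi for xi in x) or 1.0)
    return [wi + tau * y * xi for wi, xi in zip(w, x)]

w = [0.0] * len(FEATURES)
for _ in range(450):                               # one 45-minute run of 450 iterations
    x = [get_parameter(name) for name in FEATURES]
    y = 1 if x[0] > 0.8 else -1                    # illustrative label rule, not the FDIR rule
    w = pa_update(w, x, y)                         # model updated in place; no training set persisted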

6. SUMMARY
The benefits of a computing environment that enables building on top of open-source software are evident and give the space segment access to an ecosystem of technologies that has matured through decades of industry development in web, edge, and mobile software engineering for terrestrial applications. Adopting these technologies on OPS-SAT offers the FCT unprecedented creativity in exploring ML for autonomous decision-making on-board a flying mission. These AI solutions significantly reduce the day-to-day effort required by operators to satisfy mission objectives while contributing innovative new ideas and maximizing scientific returns. The ability to train models in-flight with data generated on-board without human involvement is an exciting first that stimulates a significant rethink of how future missions can be designed.

APPENDICES

A. MODEL C TRAINING RESULTS

Table 8. Distribution of thumbnail image sets Nadir (18.2%), Bad (14.4%), and Space (67.4%) to train the image classification model (Model C).

         Training (64%)   Validation (16%)   Testing (20%)   Total (100%)
Nadir    352              88                 109             549
Bad      278              69                 87              434
Space    1,298            324                406             2,028
Total    1,928            481                602             3,011

Table 9. Confusion matrix (Model C).

                      Predicted*
                  Nadir   Bad   Space
Actual**  Nadir     102     7       0
          Bad         5    71      11
          Space       0     0     406

* Labels predicted by the image classifier.
** Expected labels included in the test data set.

Table 10. Classification metrics (Model C).

                 Nadir    Bad      Space
Precision        0.953    0.910    0.974
Sensitivity*     0.936    0.816    1.000
Specificity**    0.990    0.986    0.944
F1-Score         0.944    0.861    0.987

* The true positive rate of a label.
** The true negative rate of a label.
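
As a cross-check, the per-class metrics in Table 10 follow directly from the confusion matrix in Table 9. The short sketch below reproduces them (e.g., 0.953 precision and 0.936 sensitivity for Nadir); it uses only the matrix values from Table 9 and involves no further assumptions.

# Rows: actual class, columns: predicted class (order: Nadir, Bad, Space), as in Table 9.
labels = ["Nadir", "Bad", "Space"]
cm = [[102,   7,   0],
      [  5,  71,  11],
      [  0,   0, 406]]

total = sum(sum(row) for row in cm)
for i, label in enumerate(labels):
    tp = cm[i][i]
    fn = sum(cm[i]) - tp                        # actual i, predicted as something else
    fp = sum(cm[r][i] for r in range(3)) - tp   # predicted i, actually something else
    tn = total - tp - fn - fp
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)                # true positive rate
    specificity = tn / (tn + fp)                # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    print(f"{label}: P={precision:.3f} Se={sensitivity:.3f} "
          f"Sp={specificity:.3f} F1={f1:.3f}")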

B. CAMERA FDIR ALGORITHM

C. SEPP RESOURCE MONITORING

Figure 15. CPU-0.
Figure 16. CPU-1.
Figure 17. Load.
Figure 18. Memory.
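
The resource plots in Figures 15 through 18 reflect monitoring of the SEPP's Linux environment. A minimal sketch of such sampling, assuming a 10-second period and a CSV log file (neither is stated in the paper), could be:

import time

def first_line(path):
    # Return the first line of a /proc file as a string.
    with open(path) as f:
        return f.readline().strip()

def mem_free_kb():
    # /proc/meminfo lines look like "MemFree:  123456 kB".
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemFree:"):
                return int(line.split()[1])
    return -1

INTERVAL_S = 10                                    # assumed sampling period
with open("sepp_resources.csv", "w") as log:       # assumed log file name
    log.write("timestamp,loadavg_1min,mem_free_kb\n")
    for _ in range(6):                             # short demo run
        load_1min = first_line("/proc/loadavg").split()[0]
        log.write(f"{time.time():.0f},{load_1min},{mem_free_kb()}\n")
        time.sleep(INTERVAL_S)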

REFERENCES

[1] Coelho, C., Koudelka, O., & Merri, M. (2017). NanoSat MO framework: When OBSW turns into apps. 2017 IEEE Aerospace Conference, 1-8.

[2] Coelho, C. B. W., Cooper, S., Koudelka, O. F. S., Merri, M., & Sarkarati, M. (2017). NanoSat MO Framework: Drill down your nanosatellite's platform using CCSDS Mission Operations services. In Proc. International Astronautical Congress.

[3] Evans, D., Labrèche, G., Mladenov, T., Marszk, D., Shiradhonkar, V., & Zelenevskiy, V. (2022). Agile Development and Rapid Prototyping in a Flying Mission with Open-Source Software Reuse On-Board the OPS-SAT Spacecraft. AIAA SciTech Forum 2022. https://doi.org/10.2514/6.2022-0648

[4] McGovern, A., & Wagstaff, K. L. (2011). Machine learning in space: extending our reach. Mach Learn 84, 335-340.

[5] Wagstaff, K. L., Altinok, A., Chien, S. A., Rebbapragada, U., Schaffer, S. R., Thompson, D. R., & Tran, D. Q. (2017). Cloud filtering and novelty detection using onboard machine learning for the EO-1 spacecraft. In Proc. IJCAI Workshop AI in the Oceans and Space.

[6] Chien, S., Doubleday, J., Thompson, D. R., Wagstaff, K. L., Bellardo, J., Francis, C., ... & Puig-Suari, J. (2017). Onboard autonomy on the intelligent payload experiment cubesat mission. Journal of Aerospace Information Systems, 14(6), 307-315.

[7] Giuffrida, G., Diana, L., de Gioia, F., Benelli, G., Meoni, G., Donati, M., & Fanucci, L. (2020). CloudScout: A deep neural network for on-board cloud detection on hyperspectral images. Remote Sensing, 12(14), 2205.

[8] Agarwal, S., Hervas-Martin, E., Byrne, J., Dunne, A., Luis Espinosa-Aranda, J., & Rijlaarsdam, D. (2020). An Evaluation of Low-Cost Vision Processors for Efficient Star Identification. Sensors, 20(21), 6250.

[9] Labrèche, G., Evans, D., Mladenov, T., Marszk, D., Shiradhonkar, V., & Zelenevskiy, V. (2022). Artificial Intelligence for Autonomous Planning and Scheduling of Image Acquisition with the SmartCam App On-Board the OPS-SAT Spacecraft. AIAA SciTech Forum 2022. https://doi.org/10.2514/6.2022-2508

[10] TensorFlow (2021). GitHub Repository – TensorFlow Lite v2.4.1. Retrieved April 05, 2021, from https://github.com/tensorflow/tensorflow/tree/v2.4.1/tensorflow/lite

[11] Kusters, R., Misevic, D., Berry, H., Cully, A., Le Cunff, Y., Dandoy, L., Díaz-Rodríguez, N., Ficher, M., Grizou, J., Othmani, A., Palpanas, T., Komorowski, M., Loiseau, P., Moulin Frier, C., Nanini, S., Quercia, D., Sebag, M., Soulié Fogelman, F., Taleb, S., Tupikina, L., … Wehbi, F. (2020). Interdisciplinary Research in Artificial Intelligence: Challenges and Opportunities. Frontiers in Big Data, 3, 577974.

[12] Manning, J., Langerman, D., Ramesh, B., Gretok, E., Wilson, C., George, A., Mackinnon, J., & Crum, G. (2018). Machine-Learning Space Applications on SmallSat Platforms with TensorFlow.

[13] Labrèche, G. (2020). GitHub Repository – OPS-SAT SmartCam v2.1.2. Retrieved April 05, 2021, from https://github.com/georgeslabreche/opssat-smartcam/tree/v2.1.2

[14] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).

[15] TensorFlow Hub (n.d.). Feature vectors of images with MobileNet V2 trained on ImageNet (ILSVRC-2012-CLS). Retrieved April 06, 2021, from https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4

[16] TensorFlow (2020). GitHub Repository – TensorFlow Hub make_image_classifier v0.9.0. Retrieved April 05, 2021, from https://github.com/tensorflow/hub/tree/r0.9/tensorflow_hub/tools/make_image_classifier

[17] ESA OPS-SAT Community Platform. (n.d.). Retrieved April 05, 2021, from https://opssat1.esoc.esa.int/

[18] GEOS Geometry Engine Open Source. (n.d.). Retrieved April 19, 2021, from https://trac.osgeo.org/geos/

[19] Szajnfarber, Z., & Vrolijk, A. (2018). A facilitated expert-based approach to architecting "openable" complex systems. Systems Engineering, 21(1), 47-58.

[20] Kelvins - ESA's Advanced Concepts Competition Website. (n.d.). Retrieved April 05, 2021, from https://kelvins.esa.int/

[21] Blacker, P., Bridges, C. P., & Hadfield, S. (2019, July). Rapid prototyping of deep learning models on radiation hardened CPUs. In 2019 NASA/ESA Conference on Adaptive Hardware and Systems (AHS) (pp. 25-32). IEEE.

[22] Pitsis, G., Tsagkatakis, G., Kozanitis, C., Kalomoiris, I., Ioannou, A., Dollas, A., ... & Tsakalides, P. (2019, May). Efficient convolutional neural network weight compression for space data classification on multi-FPGA platforms. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 3917-3921). IEEE.

[23] Labrèche, G. (n.d.). GitHub Repository – K-Means Image Clustering. Retrieved October 15, 2021, from https://github.com/georgeslabreche/kmeans-image-clustering

[24] Labrèche, G. (n.d.). GitHub Repository – DKM: A generic C++11 k-means clustering implementation, forked from genbattle/dkm. Retrieved October 15, 2021, from https://github.com/georgeslabreche/dkm

[25] Labrèche, G. (n.d.). GitHub Repository – stb, forked from nothings/stb. Retrieved October 15, 2021, from https://github.com/georgeslabreche/stb

[26] Nayak, P. P., Kurien, J., Dorais, G., Millar, W., Rajan, K., Kanefsky, B., Bernard, E., Gamble, B., Rouquette, N., Smith, D. B., Tung, Y., Muscoletta, N., & Taylor, W. (1999). Validating the DS-1 Remote Agent Experiment.

[27] Silva, J., & Donati, A. (2016). WebMUST evolution. In 14th International Conference on Space Operations (p. 2433).

[28] Crammer, K., Dekel, O., Keshet, J., Shalev-Shwartz, S., & Singer, Y. (2006). Online passive aggressive algorithms.

[29] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

[30] Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7).

[31] Crammer, K., Kulesza, A., & Dredze, M. (2009, December). Adaptive Regularization of Weight Vectors. In NIPS (Vol. 22, pp. 414-422).

[32] Wang, J., Zhao, P., & Hoi, S. C. (2012). Exact soft confidence-weighted learning. arXiv preprint arXiv:1206.4612.

[33] Crammer, K., & Lee, D. D. (2010). Learning via Gaussian herding. In Advances in Neural Information Processing Systems, 23 (pp. 451-459).

[34] Labrèche, G. (n.d.). GitHub Repository – MochiMochi, forked from olanleed/MochiMochi. Retrieved April 21, 2021, from https://github.com/georgeslabreche/MochiMochi/tree/orbitai

[35] Labrèche, G. (n.d.). GitHub Repository – OPS-SAT OrbitAI. Retrieved April 21, 2021, from https://github.com/georgeslabreche/opssat-orbitai

[36] Raff, E. (2017). JSAT: Java Statistical Analysis Tool, a Library for Machine Learning. Journal of Machine Learning Research (Vol. 18, pp. 1-5).

[37] Mission Operations Services Concept. CCSDS 520.0-G-3. Green Book. Issue 3. December 2010.

BIOGRAPHY

Georges Labrèche enjoys designing and running experiments on the OPS-SAT Space Lab, from which he has positioned himself as a leading figure in operationalizing the use of artificial intelligence and in-flight machine learning for spacecraft on-board autonomy. Prior to joining the European Space Agency's Space Operations Centre (ESOC) as a Spacecraft Operations Engineer with the OPS-SAT Flight Control Team, he worked on the SherpaTT active suspension system rover at the Robotics Innovation Center of the German Research Centre for Artificial Intelligence (DFKI). He also contributed to the FSSCat/Ф-sat-1 Earth Observation mission during his placement at the ESA Centre for Earth Observation (ESRIN). He received his B.S. in Software Engineering from the University of Ottawa, Canada, his M.A. in International Affairs from the New School University in New York, NY, and his M.S. in Spacecraft Design from Luleå University of Technology in Kiruna, Sweden.

David Evans manages the OPS-SAT Space Lab projects at the European Space Agency. He was previously the spacecraft control centre manager at EUTELSAT, the world's third largest telecommunications satellite operator. He is the holder of several patents on housekeeping telemetry compression and the author of the popular "Ladybird Guide to Spacecraft Operations" lecture courses.
Dominik Marszk has been passionate about technology and programming from an early age. From elementary school, he was involved in creating and playing MMO games, followed by hardware and firmware modifications of the first modern smartphones. He graduated in microelectronics from WETI, after which he joined the Young Graduate Trainee internship at ESA in 2016 and has since been at the Agency. He deals with ground systems engineering at the European Space Operations Centre.

Tom Mladenov received his M.S. in Electronics Engineering from KU Leuven University in Leuven, Belgium in 2018. He worked as a Telescope Instrumentation Engineer at KU Leuven and was involved in projects including the European Extremely Large Telescope and the BlackGEM telescope array in Chile, focusing on cryogenic detector control systems and CubeSat optical payloads. Prior to his position at KU Leuven, he was part of the ESA REXUS/BEXUS programme, where he acted as Electronics Engineer. As part of this project he designed RF circuitry for a solid-state diamond-based quantum magnetometer, which was later launched and installed on the International Space Station in August 2021 (SpaceX CRS-23). After his time at KU Leuven, he joined the European Space Agency as a YGT Mission Operations Concepts Engineer in 2019. During his two years at ESA he was the Payload Operations Engineer for the OPS-SAT mission and responsible for mission automation and payload commissioning of Europe's first hardware/software laboratory in Low Earth Orbit.

Vasundhara Shiradhonkar has diverse experience in embedded software engineering, wireless communication, and process automation. She is a member of the OPS-SAT Space Lab Flight Control Team and the single point of contact for FPGA experimenters. In this role, she contributes to on-board software development and ADCS experiments, and supports FPGA experiments. She received her B.Tech. in Instrumentation Engineering from India and her M.Sc. in Information and Communication Engineering from the Technical University of Darmstadt, Germany.

Tanguy Soto graduated with an M.S. in computer science and AI from Sorbonne University, Paris, in 2019. He worked for a year with the robotics company Brain Corporation located in San Diego, USA. Since then, he has been working at the European Space Agency in Darmstadt, Germany, first as a Young Graduate Trainee and now as a Data System Engineer. His work focuses on the next generation of mission control system infrastructure, such as EGS-CC, and on innovative on-board software frameworks such as the NanoSat MO Framework.

Vladimir Zelenevskiy graduated with a dual master's degree in Computer Science from RWTH Aachen in Germany and the University of Trento in Italy. Since 2015 he has been working on space projects in Darmstadt, developing software, engineering systems, and supporting launches for Aeolus, MetOp-C, and Galileo. He has been part of the OPS-SAT Space Lab Flight Control Team since September 2020. He is responsible for day-to-day operations, mission planning, automation, and coordinating the experimental activities.
