Performance and Scalability Analysis of Robotic Simulations in Ignition Gazebo
Abstract
Contents
I. Introduction
II. Methodology
III. Results
IV. Discussion
V. Conclusions
A Appendix
I. Introduction
In the rapidly evolving domain of robotics, the ability to test and validate robotic algorithms
and behaviors in simulated environments has emerged as an indispensable tool. Simulation
not only reduces costs but also accelerates the development cycle by offering a risk-free testing
ground. With the advent of advanced simulation platforms like Ignition Gazebo, researchers
and developers now have the capacity to delve deeper into intricate robotic scenarios that were
previously unattainable.
One of the foremost challenges in this space is the simulation of large swarms or groups of robots.
Whether it’s for exploring coordinated movements, collective decision-making, or large-scale
automated tasks, the demand for simulating vast numbers of robots in real time, or near real time, is on the rise. However, as the number of robots increases, so does the strain on computational resources, leading to a discrepancy between simulated time and real time, a ratio commonly referred to as the real-time factor. This metric is defined as

RTF = t_sim / t_real,

where t_sim is the elapsed simulated time and t_real is the elapsed wall-clock (real) time.
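In code, the real-time factor is simply the ratio of elapsed simulated time to elapsed wall-clock time; the following minimal sketch is illustrative:

```python
def real_time_factor(sim_elapsed_s: float, wall_elapsed_s: float) -> float:
    """Real-time factor: elapsed simulated time over elapsed wall-clock time.

    A value of 1.0 means the simulation keeps up with real time;
    values below 1.0 indicate slowdown.
    """
    if wall_elapsed_s <= 0:
        raise ValueError("wall-clock interval must be positive")
    return sim_elapsed_s / wall_elapsed_s


# Example: 6 s of simulated time elapsed over 10 s of wall-clock time.
print(real_time_factor(6.0, 10.0))  # 0.6
```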
The primary aim of this report is to investigate the performance of robotic simulations using
Ignition Gazebo. Specifically, the study seeks to uncover how the performance, as captured
by the real-time factor, is influenced by three primary variables: the number of robots in the
simulation, the number of computers participating in the simulation setup, and the resolution
of sensors on each robot. By addressing these questions, this report aims to provide a comprehensive starting point for understanding performance expectations in future simulation projects, according to the number of robots involved and their sensor configuration.
II. Methodology
The software environment for the simulations is outlined in Table I. The simulations were run on machines running the Ubuntu operating system, using ROS2 for robot sensor bridging and Ignition Gazebo for simulation.
Table I: Software summary.
Software Version
Operating System Ubuntu 22.04.3 LTS
Ignition Gazebo Fortress 6.15.0
ROS2 Humble Hawksbill
The tests are carried out in an empty SDF world with a single ground plane. Consequently, this report does not account for computational load from world plugins, collisions of inanimate objects, particle systems, or any other world-building elements from the Ignition library. The robot models used have a simple geometry and are equipped with a differential drive plugin. Figure 1 shows an example of the robot model used.
Figure 1: Robot model used in the experiments. (a) Standard View; (b) Wireframe View.
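As an illustrative sketch only (the report's actual model files are not reproduced here), a minimal differential-drive robot model snippet in SDF can be generated programmatically. The link and joint names below are assumptions; the plugin identifiers follow the Ignition Fortress naming convention:

```python
def make_robot_sdf(name: str, x: float = 0.0, y: float = 0.0) -> str:
    """Return a minimal SDF <model> snippet for a differential-drive robot.

    The joint names and wheel dimensions are illustrative assumptions,
    not the exact configuration used in the report.
    """
    return f"""
<model name="{name}">
  <pose>{x} {y} 0 0 0 0</pose>
  <plugin filename="ignition-gazebo-diff-drive-system"
          name="ignition::gazebo::systems::DiffDrive">
    <left_joint>left_wheel_joint</left_joint>
    <right_joint>right_wheel_joint</right_joint>
    <wheel_separation>0.4</wheel_separation>
    <wheel_radius>0.1</wheel_radius>
  </plugin>
</model>
"""


print(make_robot_sdf("robot_0", x=1.0))
```

A world-generation script can call this helper once per robot, offsetting each pose to avoid collisions at spawn.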
This section provides details on the hardware specifications of the machines utilized for the
experiments. As disclosed in the official Ignition documentation [1], the computers must be
within the same Local Area Network (LAN) to run the simulations. For reference, the hardware
specifications of each computer used in the setup are outlined in Table II.
Each robot is equipped with a conventional camera and a lidar sensor. The specific resolution
details vary with each experiment. Figure 2 shows a representation of the data captured by
each robot.
Regarding the sensor resolution experiments, these are segmented into three blocks: only lidar, only camera, and both lidar and camera. In addition, the experiments were performed with three resolution setups: high, medium, and low. In contrast, the distribution experiments were all performed using both sensors and a single resolution configuration.
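Combining the three sensor blocks with the three resolution setups yields nine sensor configurations; a small sketch enumerating them (the block and resolution labels are taken from the text, the identifiers themselves are illustrative):

```python
from itertools import product

# Three sensor blocks and three resolution setups, as described in the text.
SENSOR_BLOCKS = ["lidar_only", "camera_only", "lidar_and_camera"]
RESOLUTIONS = ["high", "medium", "low"]

# Cartesian product: the nine sensor-resolution configurations evaluated.
configurations = list(product(SENSOR_BLOCKS, RESOLUTIONS))

for block, resolution in configurations:
    print(f"{block} @ {resolution}")

print(len(configurations))  # 9
```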
Figure 2: Data captured by each robot. (a) Lidar Sensor; (b) Camera Sensor.
The following metrics remained constant for the lidar sensor across all experiments:
The metrics that changed across experiments are summarized in Table III.
Similar to the lidar, the camera sensor parameters were modified depending on the experiment.
The following metrics remained constant for the camera sensor across all experiments:
The metrics that changed across experiments are summarized in Table IV.
Table IV: Variable Camera Setup
This section describes the design of the experiments conducted to assess the performance
metrics in Ignition Gazebo simulations. The two main objectives were:
• Sensor Resolution Impact: To understand how varying sensor resolutions influence the computational load and real-time performance.
• Distributed Setup Comparison: To assess how distributing the simulation across multiple machines affects real-time performance.
To evaluate the sensor resolution impact, all the experiments were performed on the Edge019 machine, which has the most capable hardware to handle the computational load. The independent variables evaluated in this set of experiments are:
• Number of robots instantiated in the simulation.
• Sensor resolution: High, medium, and low, as described in Tables III and IV.
On the other hand, the distributed setup comparison experiments were carried out using Edge020 as the primary and the remaining machines as secondaries. The independent variables evaluated in this set of experiments are:
For these experiments, in general, the only response measured was the real-time factor.
In all the experiments, 100 measurements of each dependent variable were collected in order to obtain an appropriate estimate. Also, the simulated worlds were generated using ERB templates [2] to guarantee consistency. The process for collecting data was the following:
1. Run a script to generate the world based on the desired number of robots, number of machines, and sensor configuration.
2. Start the simulation.
3. Bridge all the robot sensors to ROS2, to ensure that sensor data is being generated.
4. Start motion in a straight line, at constant speed for all robots.
5. Collect 100 measurements of the target metrics.
6. Finish the simulation.
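The measurement step above (collecting 100 samples of a target metric) can be sketched generically; `read_metric` is a hypothetical callable standing in for the actual metric source (e.g. a real-time factor or CPU usage reader):

```python
import statistics
import time
from typing import Callable


def collect_samples(read_metric: Callable[[], float],
                    n_samples: int = 100,
                    interval_s: float = 0.0) -> dict:
    """Collect n_samples readings of a metric and summarize them.

    `read_metric` is any zero-argument callable returning the current
    value of the target metric; `interval_s` spaces out the readings.
    """
    samples = []
    for _ in range(n_samples):
        samples.append(read_metric())
        if interval_s > 0:
            time.sleep(interval_s)
    return {
        "n": len(samples),
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
    }


# Example with a dummy metric source (a constant reading).
summary = collect_samples(lambda: 0.6, n_samples=100)
print(summary)  # {'n': 100, 'mean': 0.6, 'stdev': 0.0}
```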
Most of this process was automated with scripts. Refer to Appendix A for an overview of
the implementation details in the accompanying code repository.
III. Results
Figures 3a, 3b, and 3c show the CPU usage by number of robots. A common pattern observed in all plots is a sudden increase in CPU usage when the number of robots reaches 5, followed by a plateau in this metric.
Figure 3: CPU usage by number of robots. (a) Only Lidar; (b) Only Camera; (c) Lidar and Camera.
RAM usage was also a metric of interest. Figures 4a, 4b, and 4c show the behaviour of this metric across experiments. The lines show a linear increase correlated with the number of robots. The resolution also appears to be an important factor influencing the slope of each line.
Figure 4: RAM usage by number of robots. (a) Only Lidar; (b) Only Camera; (c) Lidar and Camera.
The role of the GPU is crucial in any simulation setup that involves the rendering of scenes. In Ignition, the lidar sensor is specifically optimized to leverage the GPU for generating data. Similarly, rendering high-fidelity, realistic 3D environments for the camera would not be possible without the GPU. This includes textures, lighting, shadows, and other visual effects that contribute to the realism of the simulation. Figures 5a, 5b, and 5c show the GPU usage in the experiments performed.
In contrast to CPU usage, the GPU plots reveal well-differentiated curves by sensor resolution. However, these plots also share a plateau behaviour, which starts at around 15 robots.
Figure 5: GPU usage by number of robots. (a) Only Lidar; (b) Only Camera; (c) Lidar and Camera.
GPU memory consumption was also tracked. The corresponding plots are shown in Figures 6a, 6b, and 6c. The response observed is an almost perfectly linear relationship, and the slope of the curves appears to be strongly influenced by the sensor setup.
Figure 6: GPU memory usage by number of robots. (a) Only Lidar; (b) Only Camera; (c) Lidar and Camera.
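A near-linear trend like this can be summarized by an ordinary least-squares fit of memory against robot count. The sketch below uses synthetic placeholder data, not the report's measurements:

```python
def least_squares_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept


# Synthetic example only: GPU memory (MiB) growing by ~200 MiB per robot
# from a 500 MiB baseline. The slope estimates the per-robot memory cost.
robots = [1, 5, 10, 15, 20]
gpu_mem = [700, 1500, 2500, 3500, 4500]
slope, intercept = least_squares_line(robots, gpu_mem)
print(round(slope, 6), round(intercept, 6))  # 200.0 500.0
```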
III.E Real-time Factor by Sensor Impact
The real-time factor curves linked to the sensor setup experiments are shown in Figures 7a, 7b, and 7c. The simulation suffers a slow degradation as the number of robots increases when only the lidar sensor is present. However, once the cameras are incorporated, the degradation becomes significant much more quickly, and is accentuated by the resolution setup.
Figure 7: Real-time factor by number of robots. (a) Only Lidar; (b) Only Camera; (c) Lidar and Camera.
The performance of the simulation was also analyzed by the number of computers participating. As disclosed in Tables III and IV, the sensor setup for this set of experiments was a mixture of the low and medium resolutions. Figure 8 shows the results obtained by using 1, 3, and 4 different machines.
It becomes quite evident that the standalone version of the world outperforms the distributed setups. An additional experiment was performed on one of the data points (3 machines, 20 robots) to further investigate the performance results.
Since the hardware utilization by Ignition has been well established at this point, it was necessary to explore whether network latency could also explain the under-performance observed in the results. This metric was measured using the ping command: 20 packets were sent from the primary Edge020 to the secondaries, and the results are summarized in Table V. The metrics showcase a healthy network with very low latency.
IV. Discussion
The findings from the report reveal intriguing aspects of simulation performance, in the context
of both distributed and standalone setups.
Impact of Camera Sensors: The study distinctly reveals that camera sensors have a more
significant impact on hardware resource usage and the degradation of the real-time factor com-
pared to other sensors like lidar. This observation highlights the considerable computational
demands of processing high-resolution visual data within the simulation. The camera sensors,
responsible for rendering detailed and complex visual scenes, present a substantial challenge
in terms of computational load. This is particularly notable in scenarios where the overall
hardware resources are not being maximally utilized, suggesting a specific intensity associated
with visual data processing in the simulation environment.
Distributed Setup Performance: The observation that distributed setups in Ignition Gazebo
perform worse than standalone modes is counter-intuitive to the general expectation of dis-
tributed computing. This finding aligns with similar experiences shared within the Gazebo
community [3], where users have reported better performance in standalone simulations com-
pared to distributed setups, with a notable decrease in real-time factor performance in dis-
tributed modes. This suggests that the distributed architecture of Ignition Gazebo may not be
yet fully optimized for efficient task distribution and synchronization across multiple computing
units, leading to performance degradation despite available computing power.
Given the study findings, there is an opportunity for Intel to contribute to the open-source
project of Ignition Gazebo, especially in optimizing hardware utilization. Despite the advanced
hardware capabilities available for the study, it is clear that the simulation does not fully
leverage them, particularly noticeable in scenarios with multiple robots and high-resolution
camera sensors. Intel’s expertise in hardware and software optimization could play a pivotal
role in enhancing Ignition Gazebo's efficiency. This could involve developing more efficient data
processing algorithms, optimizing the interaction between the software and the underlying
hardware, or even contributing to the development of new features that better utilize the
available computational resources. Such contributions would not only benefit the Ignition Gazebo community but could also align with Intel's interests in advancing robotics technologies.
These insights highlight critical areas of potential improvement in Ignition Gazebo, particularly
in optimizing software-level efficiencies to better utilize available hardware capabilities and in
refining the distributed simulation architecture to achieve the expected performance gains from
distributed computing setups.
V. Conclusions
Hardware Resource Utilization: The study found that Ignition Gazebo simulations do not fully
utilize hardware resources, with none of the metrics (CPU, RAM, GPU, and GPU memory)
reaching their full capacity when real-time factor is low. This suggests inefficiencies in the
simulation management within Ignition Gazebo, indicating that performance bottlenecks might
be more related to the software architecture or simulation task processing rather than hardware
limitations.
Impact of Camera Sensors: Camera sensors significantly impact both hardware resource us-
age and the degradation of the real-time factor, more so than other sensors like lidar. This
underscores the high computational demands of processing high-resolution visual data within
simulations. The challenge is accentuated when the overall hardware resources are underuti-
lized, highlighting the intensity of visual data processing in these environments.
Distributed Setup Performance: Distributed setups in Ignition Gazebo were found to perform
worse than standalone modes, which is contrary to general expectations from distributed com-
puting. This points to possible inefficiencies in the distributed architecture of Ignition Gazebo,
particularly in task distribution and synchronization across multiple computing units, leading
to performance degradation.
References
[1] Open Robotics. Distributed Simulation. https://gazebosim.org/api/sim/8/distributedsimulation.html. Accessed: 2023-10-16. 2023.
[2] Open Robotics. ERB Template. https://gazebosim.org/api/sim/8/erbtemplate.html. Accessed: 2023-10-18. 2023.
[3] Gazebo Community. Distributed simulation performance. https://community.gazebosim.org/t/distributed-simulation-performance/483. Accessed: 2023-11-18. 2023.
A Appendix