Performance and Scalability Analysis of Robotic Simulations in Ignition Gazebo


Miguel Solís Segura


miguel.solis.segura@intel.com

Abstract

This report presents a comprehensive analysis of the performance and scalability of robotic simulations using Ignition Gazebo, with a particular focus on its utilization of hardware resources and the efficiency of its simulation processes. Through a series of methodical experiments, the study evaluates key performance metrics such as CPU, RAM, GPU, and GPU memory usage under varying simulation conditions. Special attention is given to the impact of camera sensors, which are found to significantly affect both hardware resource consumption and the real-time factor of the simulations. Contrary to expectations, distributed setups in Ignition Gazebo display reduced performance compared to standalone modes, indicating potential inefficiencies in the software's distributed architecture. These findings offer valuable insights into the current limitations of Ignition Gazebo, particularly its underutilization of hardware capabilities and the need for an optimized software architecture. The report highlights opportunities for improvement, suggesting avenues for future work, including the development of efficient data processing algorithms and enhancements in software-hardware integration.

Contents

I. Introduction
II. Methodology
    II.A Simulation Software
    II.B Hardware Specifications
    II.C Sensor Setup
        II.C.1 Lidar Parameters
        II.C.2 Camera Parameters
    II.D Experimental Design
III. Results
    III.A CPU Usage
    III.B RAM Usage
    III.C GPU Usage
    III.D GPU Memory Usage
    III.E Real-time Factor by Sensor Impact
    III.F Real-time Factor by Distributed Setup
        III.F.1 Latency in the Network
IV. Discussion
V. Conclusions
A Appendix
I. Introduction

In the rapidly evolving domain of robotics, the ability to test and validate robotic algorithms
and behaviors in simulated environments has emerged as an indispensable tool. Simulation
not only reduces costs but also accelerates the development cycle by offering a risk-free testing
ground. With the advent of advanced simulation platforms like Ignition Gazebo, researchers
and developers now have the capacity to delve deeper into intricate robotic scenarios that were
previously unattainable.

One of the foremost challenges in this space is the simulation of large swarms or groups of robots.
Whether it’s for exploring coordinated movements, collective decision-making, or large-scale
automated tasks, the demand for simulating vast numbers of robots in real-time, or near real-
time, is on the rise. However, as the number of robots increases, there is an inherent
strain on computational resources, leading to a discrepancy between simulated time and real
time; the ratio between the two is known as the real-time factor. This metric is defined as:

real-time factor = (elapsed time in simulation) / (elapsed time in real life)        (1)
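Equation 1 translates directly into code; a minimal sketch (the function and variable names are illustrative, not from the report):

```python
def real_time_factor(sim_elapsed_s: float, wall_elapsed_s: float) -> float:
    """Real-time factor per Equation 1: simulated seconds per wall-clock second.

    A value of 1.0 means the simulation keeps pace with real time;
    values below 1.0 mean the simulation runs slower than real time.
    """
    if wall_elapsed_s <= 0.0:
        raise ValueError("wall-clock elapsed time must be positive")
    return sim_elapsed_s / wall_elapsed_s


# Example: 6 simulated seconds elapsed while 10 real seconds passed.
rtf = real_time_factor(6.0, 10.0)  # 0.6 -> simulation runs at 60% of real time
```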

The primary aim of this report is to investigate the performance of robotic simulations using
Ignition Gazebo. Specifically, the study seeks to uncover how the performance, as captured
by the real-time factor, is influenced by three primary variables: the number of robots in the
simulation, the number of computers participating in the simulation setup, and the resolution
of sensors on each robot. By addressing these questions, this report aims to provide a comprehensive starting point for understanding performance expectations in future simulation projects, according to the number of robots involved and their sensor configuration.

II. Methodology

II.A Simulation Software

The software environment for the simulations is outlined in Table I. The simulations were run
on machines running Ubuntu, using ROS2 for robot sensor bridging and
Ignition Gazebo for simulation.
Table I: Software Summary

Software Version
Operating System Ubuntu 22.04.3 LTS
Ignition Gazebo Fortress 6.15.0
ROS2 Humble Hawksbill

The tests are carried out in an empty SDF world with a single ground plane. This means that
this report does not account for computational load from world plugins, collisions of
inanimate objects, particle systems, or any other world-building elements from the Ignition
library. The robot models used have a simple geometry and are equipped with a differential
drive plugin. Figure 1 shows an example of the robot model used.

(a) Standard View (b) Wireframe View

Figure 1: Robot model used in simulation.

II.B Hardware Specifications

This section provides details on the hardware specifications of the machines utilized for the
experiments. As disclosed in the official Ignition documentation [1], the computers must be
within the same Local Area Network (LAN) to run the simulations. For reference, the hardware
specifications of each computer used in the setup are outlined in Table II.

Table II: Hardware Summary

Machine CPU GPU RAM


Kadasd606 Intel Core i7-6950X 3.00-GHz Nvidia GeForce GTX 1080 8 GB 15 GB
Edge019 Intel Core i9-13900K 3.00-GHz Nvidia GeForce RTX 4080 16 GB 62 GB
Edge020 Intel Xeon E-2286M 2.40-GHz Nvidia GeForce GTX 1060 6 GB 31 GB
Edge021 Intel Core i9-9980HK 2.40-GHz Nvidia GeForce GTX 1060 6 GB 31 GB

II.C Sensor Setup

Each robot is equipped with a conventional camera and a lidar sensor. The specific resolution
details vary with each experiment. Figure 2 shows a representation of the data captured by
each robot.

The sensor resolution experiments are segmented into three blocks:

1. Each robot equipped only with a camera sensor.
2. Each robot equipped only with a lidar sensor.
3. Each robot equipped with both sensors.

The experiments were performed with three resolution setups: high, medium, and low. In
contrast, the distributed experiments were all performed with both sensors and a single,
fixed resolution configuration.

(a) Lidar Sensor (b) Camera Sensor

Figure 2: Sensor data captured.

II.C.1 Lidar Parameters

The following metrics remained constant for the lidar sensor across all experiments:

• Scan angle: 160 degrees
• Range minimum value: 8 cm
• Range resolution: 1 cm
• Update rate: 10 Hz

The metrics that changed across experiments are summarized in Table III.

Table III: Variable Lidar Setup

Experiment Samples per Degree Range Maximum (m)
Resolution High 10 100.0
Resolution Medium 4 30.0
Resolution Low 2 10.0
Distributed 4 10.0
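Note that SDF lidar descriptions take a total sample count and min/max angles in radians, while Table III specifies samples per degree over the fixed 160-degree scan. A small sketch of that conversion (the helper name and dictionary layout are mine, not from the report's scripts):

```python
import math

SCAN_ANGLE_DEG = 160.0  # constant across all experiments (Section II.C.1)


def sdf_lidar_scan(samples_per_degree: int,
                   scan_angle_deg: float = SCAN_ANGLE_DEG) -> dict:
    """Translate the report's lidar parameters into SDF-style scan values.

    Returns the total sample count plus min/max angles in radians,
    centered on the sensor's forward axis.
    """
    half_angle_rad = math.radians(scan_angle_deg / 2.0)
    return {
        "samples": int(samples_per_degree * scan_angle_deg),
        "min_angle": -half_angle_rad,
        "max_angle": half_angle_rad,
    }


# "Resolution High" row of Table III: 10 samples/degree over 160 degrees.
high = sdf_lidar_scan(10)  # 1600 samples, angles spanning ±80 degrees
```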

II.C.2 Camera Parameters

Similar to the lidar, the camera sensor parameters were modified depending on the experiment.
The following metrics remained constant for the camera sensor across all experiments:

• Horizontal field of view: 60 degrees
• Clip near: 10 cm
• Clip far: 100 m

The metrics that changed across experiments are summarized in Table IV.

Table IV: Variable Camera Setup

Experiment Width (px) Height (px) Update Rate (Hz)


Resolution High 1280 960 60
Resolution Medium 800 600 30
Resolution Low 320 240 15
Distributed 320 240 30
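A rough way to anticipate why camera resolution matters so much (see Section III.E) is to compare raw pixel throughput across the rows of Table IV; a back-of-envelope sketch (the helper name is mine, and the count ignores per-pixel byte size and rendering cost):

```python
def pixel_throughput(width: int, height: int, rate_hz: int) -> int:
    """Pixels generated per second by one camera sensor."""
    return width * height * rate_hz


# Rows of Table IV, per robot:
high = pixel_throughput(1280, 960, 60)  # 73,728,000 px/s
low = pixel_throughput(320, 240, 15)    #  1,152,000 px/s
ratio = high // low                     # high resolution produces 64x the data
```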

II.D Experimental Design

This section describes the design of the experiments conducted to assess the performance
metrics in Ignition Gazebo simulations. The two main objectives were:

• Sensor Resolution Impact: To understand how varying sensor resolutions influence the
computational load and real-time performance.

• Distributed Setup Comparison: To compare the real-time factor in standalone versus
distributed computational setups.

To evaluate the sensor resolution impact, all the experiments were performed on the Edge019
machine, which has the most capable hardware for heavy computational loads. The independent
variables evaluated in this set of experiments are:

• Sensor resolution: High, medium and low as described in Tables III and IV.

• Sensors used: Only camera, only lidar and both.

• Number of robots: 1, 5, 10, 15 and 20.

The observed responses were:

• CPU usage: Percentage of all cores utilization.

• RAM usage: Percentage of used RAM from total available.

• GPU usage: Percentage of GPU utilization.

• GPU memory usage: Percentage used from the total available.

• Real-time factor: As described in Equation 1.
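On Linux, the CPU-usage response above can be derived from two snapshots of the aggregate "cpu" line in /proc/stat; the report does not state which sampling tool was actually used, so the following is only a sketch of the underlying calculation:

```python
def parse_cpu_line(line: str) -> tuple:
    """Return (idle, total) jiffies from an aggregate /proc/stat 'cpu' line.

    Fields after the 'cpu' label: user nice system idle iowait irq softirq
    steal. Idle time is idle + iowait.
    """
    fields = [int(v) for v in line.split()[1:9]]
    idle = fields[3] + fields[4]
    return idle, sum(fields)


def cpu_usage_percent(before: str, after: str) -> float:
    """Overall CPU utilization between two /proc/stat snapshots."""
    idle0, total0 = parse_cpu_line(before)
    idle1, total1 = parse_cpu_line(after)
    delta_total = total1 - total0
    busy = delta_total - (idle1 - idle0)
    return 100.0 * busy / delta_total


# Synthetic snapshots: 400 jiffies elapsed, 100 of them idle -> 75% busy.
snap0 = "cpu 1000 0 500 2000 100 0 0 0"
snap1 = "cpu 1250 0 550 2090 110 0 0 0"
usage = cpu_usage_percent(snap0, snap1)  # 75.0
```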

On the other hand, the distributed setup comparison experiments were carried out using Edge020
as the primary and the remaining machines as secondaries. The independent variables evaluated
in this set of experiments are:

• Number of robots: 1, 5, 10, 15, 20, 25, 30, 35 and 40.

• Number of machines: 1, 3 and 4.

For these experiments, the main response measured was:

• Real-time factor: As described in Equation 1.

In all experiments, 100 measurements of each dependent variable were taken in order
to obtain an appropriate estimate. Also, the simulated worlds were generated using ERB
templates [2] to guarantee consistency. The process for collecting data was the following:

1. Run a script to generate the world based on the number of robots, machines quantity
and sensor configurations desired.
2. Start the simulation.
3. Bridge all the robot sensors to ROS2, to ensure that sensor data is being generated.
4. Start motion in a straight line, at constant speed for all robots.
5. Collect 100 measurements of the target metrics.
6. Finish the simulation.

Most of this process was automated with scripts. Refer to Appendix A for an overview of
the implementation details in the accompanying code repository.
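Step 5 amounts to sampling each metric repeatedly and then aggregating the 100 measurements; the report does not state which summary statistics were computed, so the mean and standard deviation below are an assumption:

```python
import statistics


def summarize(samples: list) -> dict:
    """Aggregate the measurements collected for one metric in step 5."""
    return {
        "n": len(samples),
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
    }


# e.g. real-time factor samples collected during one run
rtf_samples = [0.92, 0.95, 0.91, 0.94, 0.93]
stats = summarize(rtf_samples)  # mean of 0.93 across 5 samples
```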

III. Results

III.A CPU Usage

Figures 3a, 3b and 3c show the CPU usage by number of robots. A common pattern
observed in all plots is a sharp increase in CPU usage at 5 robots, followed by a
plateau in this metric.

(a) Only Lidar (b) Only Camera (c) Lidar and Camera

Figure 3: CPU usage by number of robots.

III.B RAM Usage

RAM usage was also a metric of interest. Figures 4a, 4b and 4c show the behaviour of this
metric across experiments. The lines show a linear increase correlated with the number of
robots, and the sensor resolution strongly influences the slope of each line.

(a) Only Lidar (b) Only Camera (c) Lidar and Camera

Figure 4: RAM usage by number of robots.

III.C GPU Usage

The GPU plays a crucial role in any simulation setup that involves rendering scenes.
In Ignition, the lidar sensor is specifically optimized to leverage the GPU for generating
data. Similarly, the camera could not render high-fidelity, realistic 3D environments
without the GPU; this includes textures, lighting, shadows, and other visual effects
that contribute to the realism of the simulation. Figures 5a, 5b and 5c show the GPU usage
in the experiments performed.

In contrast to CPU usage, the GPU plots reveal curves that are well differentiated by sensor
resolution. However, these plots also share a plateau behaviour that begins at around 15
robots.

(a) Only Lidar (b) Only Camera (c) Lidar and Camera

Figure 5: GPU usage by number of robots.

III.D GPU Memory Usage

The GPU memory consumption was also tracked. The corresponding plots are shown in Figures
6a, 6b and 6c. The response observed is an almost perfect linear relationship. The slope of the
curves seems to be strongly influenced by the sensor setup.

(a) Only Lidar (b) Only Camera (c) Lidar and Camera

Figure 6: GPU memory usage by number of robots.

III.E Real-time Factor by Sensor Impact

The real-time factor curves for the sensor setup experiments are shown in Figures 7a,
7b and 7c. The simulation degrades slowly as the number of robots increases when
only the lidar sensor is present. Once cameras are incorporated, however, the degradation
becomes significant much more quickly, and is accentuated by the resolution setup.

(a) Only Lidar (b) Only Camera (c) Lidar and Camera

Figure 7: Real-time factor by number of robots.

III.F Real-time Factor by Distributed Setup

The performance of the simulation was also analyzed by the number of participating computers.
As disclosed in Tables III and IV, the sensor setup for this set of experiments was a mixture
of the low and medium resolutions. Figure 8 shows the results obtained using 1, 3, and
4 machines.

Figure 8: Real-time factor by number of robots using multiple distributed setups.

It is quite evident that the standalone version of the world outperforms the distributed
setups. An additional experiment was performed on one of the data points (3 machines,
20 robots) to further investigate the performance results.

III.F.1 Latency in the Network

Since the hardware utilization by Ignition has been well established at this point, it was
necessary to explore whether network latency could also explain the under-performance of
the results. This metric was measured using the ping command: 20 packets were sent from
the primary, Edge020, to each secondary, and the results are summarized in Table V. The
metrics showcase a healthy network with very low latency.

Table V: Observed Latency in Distributed Network

Secondary Min (ms) Max (ms) Average (ms)


Edge019 0.467 0.908 0.696
Kadasd606 0.261 0.689 0.560
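The values in Table V can be pulled from ping's summary line; a sketch of that parsing, assuming the "rtt min/avg/max/mdev" format printed by iputils ping on Linux (the mdev value in the example is illustrative; Table V reports only min, max, and average):

```python
def parse_ping_rtt(summary_line: str) -> dict:
    """Parse ping's 'rtt min/avg/max/mdev = a/b/c/d ms' line into named values."""
    values = summary_line.split("=")[1].strip().split()[0].split("/")
    keys = ("min", "avg", "max", "mdev")
    return dict(zip(keys, (float(v) for v in values)))


# Summary line as produced by `ping -c 20 <secondary>`; the first three
# values match the Kadasd606 row of Table V.
line = "rtt min/avg/max/mdev = 0.261/0.560/0.689/0.101 ms"
rtt = parse_ping_rtt(line)  # {'min': 0.261, 'avg': 0.56, 'max': 0.689, ...}
```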

IV. Discussion

The findings from the report reveal intriguing aspects of simulation performance in the
context of both distributed and standalone setups.

Under-utilization of Hardware Resources: A notable observation is the under-utilization of
hardware resources, with none of the metrics (CPU, RAM, GPU, and GPU memory) reaching
even 50% of full capacity, even as the real-time factor degrades. This under-utilization
suggests inherent inefficiencies within Ignition Gazebo's simulation management. It implies
that the bottleneck for simulation performance might lie not in the hardware capabilities
but in the software architecture or the way simulation tasks are processed and managed.

Impact of Camera Sensors: The study distinctly reveals that camera sensors have a more
significant impact on hardware resource usage and the degradation of the real-time factor
compared to other sensors like lidar. This observation highlights the considerable
computational demands of processing high-resolution visual data within the simulation. The
camera sensors, responsible for rendering detailed and complex visual scenes, present a
substantial challenge in terms of computational load. This is particularly notable in
scenarios where the overall hardware resources are not being maximally utilized, suggesting
that visual data processing is an especially intense workload in the simulation environment.

Distributed Setup Performance: The observation that distributed setups in Ignition Gazebo
perform worse than standalone modes is counter-intuitive to the general expectation of
distributed computing. This finding aligns with similar experiences shared within the Gazebo
community [3], where users have reported better performance in standalone simulations
compared to distributed setups, with a notable decrease in real-time factor in distributed
modes. This suggests that the distributed architecture of Ignition Gazebo may not yet be
fully optimized for efficient task distribution and synchronization across multiple computing
units, leading to performance degradation despite available computing power.

Given the study's findings, there is an opportunity for Intel to contribute to the open-source
Ignition Gazebo project, especially in optimizing hardware utilization. Despite the advanced
hardware capabilities available for the study, it is clear that the simulation does not fully
leverage them, which is particularly noticeable in scenarios with multiple robots and
high-resolution camera sensors. Intel's expertise in hardware and software optimization could
play a pivotal role in enhancing Ignition Gazebo's efficiency. This could involve developing
more efficient data processing algorithms, optimizing the interaction between the software
and the underlying hardware, or contributing new features that better utilize the available
computational resources. Such contributions would not only benefit the Ignition Gazebo
community but would also align with Intel's interests in advancing robotics technologies.

These insights highlight critical areas of potential improvement in Ignition Gazebo, particularly
in optimizing software-level efficiencies to better utilize available hardware capabilities and in
refining the distributed simulation architecture to achieve the expected performance gains from
distributed computing setups.

V. Conclusions

Hardware Resource Utilization: The study found that Ignition Gazebo simulations do not fully
utilize hardware resources, with none of the metrics (CPU, RAM, GPU, and GPU memory)
reaching full capacity even when the real-time factor is low. This suggests inefficiencies in
the simulation management within Ignition Gazebo, indicating that performance bottlenecks are
more likely related to the software architecture or simulation task processing than to
hardware limitations.

Impact of Camera Sensors: Camera sensors significantly impact both hardware resource usage
and the degradation of the real-time factor, more so than other sensors like lidar. This
underscores the high computational demands of processing high-resolution visual data within
simulations. The challenge is accentuated when the overall hardware resources are
underutilized, highlighting the intensity of visual data processing in these environments.

Distributed Setup Performance: Distributed setups in Ignition Gazebo were found to perform
worse than standalone modes, which is contrary to general expectations from distributed
computing. This points to possible inefficiencies in the distributed architecture of Ignition
Gazebo, particularly in task distribution and synchronization across multiple computing
units, leading to performance degradation.

References
[1] Open Robotics. Distributed Simulation. https://gazebosim.org/api/sim/8/distributedsimulation.html. Accessed: 2023-10-16. 2023.
[2] Open Robotics. ERB Template. https://gazebosim.org/api/sim/8/erbtemplate.html. Accessed: 2023-10-18. 2023.
[3] Gazebo Community. Distributed simulation performance. https://community.gazebosim.org/t/distributed-simulation-performance/483. Accessed: 2023-11-18. 2023.

A Appendix

TODO Additional material such as code, datasets, or detailed procedures.
