1 Introduction
Single-board computers (SBCs), such as Raspberry Pis (RPis), have received a
lot of attention recently. So far, though, many projects involving these computers
have been rather small in scale. For example, students use them for school
projects, and hobbyists build hardware for everything from home automation
to wildlife monitoring [13]. While there have been attempts at increasing the
computational power of SBCs by combining them into clusters (see Sect. 2), and
many of the hardware issues faced by early SBC clusters, such as power supply
and casing, have largely been solved by now, the work on deploying middleware
or orchestration platforms on these clusters is only just starting. However, if a
cluster does not provide some form of middleware or orchestration, it will be hard
to attract potential users, as only a few of them will reconfigure, rewrite, or tailor
their software to match the specifications of a bare-metal SBC cluster.
Most of the existing projects utilize fairly low-level frameworks, such as the
Message Passing Interface (MPI) [7], which is essentially a communication
protocol. Additionally, the experiments in these projects are conducted from the
point of view of high-performance computing (HPC). Given that many of the
low-cost, power-saving SBCs available today show rather low raw computing
performance, focusing on this aspect seems curious to us, as Raspberry Pis will
not replace HPC hardware when it comes to raw computing power.
© Springer International Publishing AG 2017
A. Calì et al. (Eds.): BICOD 2017, LNCS 10365, pp. 153–159, 2017.
DOI: 10.1007/978-3-319-60795-5_16
154 L. Miori et al.
2 Related Work
One of the earliest RPi projects is a Beowulf cluster¹ built at Boise State
University, consisting of 33 RPis running Arch Linux ARM, with a master RPi
also having access to a network file system (NFS). The performance was tested
by computing the number π using a Monte Carlo algorithm. Iridis-Pi is a
cluster consisting of 64 RPis built at the University of Southampton [5], running
Raspbian as the operating system. The LINPACK and HPL (high-performance
LINPACK) benchmarks were used to test the raw computing performance of the
cluster; for a more practically oriented benchmark, the distributed file system
and map-reduce features of Hadoop were installed. While the system was usable,
it was very slow due to a lack of RAM [5]. The Pi-Crust cluster, consisting of ten
Raspberry Pi 2 Model B boards, went down a similar route, employing the Raspbian
operating system and also running the LINPACK benchmark [19]. Cloutier et al.
[4] go a step further by not only running LINPACK as a benchmark on their
32-node cluster, but also STREAM, which tests memory performance. Finally,
Schot [15] deploys Hadoop on an eight-node RPi 2 cluster. However, all of the
work described so far only looks at raw computing performance numbers.
The only work we found that comes close to our approach is the system
developed at the University of Glasgow, consisting of 56 RPis [18]. The authors
¹ http://coen.boisestate.edu/ece/raspberry-pi/.
A Platform for Edge Computing Based on Raspberry Pi Clusters 155
The Bolzano Cluster. Since its inception in 2013 [1], the RPi cluster in Bolzano
has undergone some changes. Here we describe the configuration used for deploying
OpenStack Swift: Bobo. One of the biggest changes we made was the casing, as the
original one, made from metal and plastic, turned out to be slightly impractical. The
new, portable, and modular container (shown in Fig. 1) is made from wood and can
house 32 to 40 nodes, a network switch, and the power supply. The RPis are screwed
onto several wooden trays that can slide out for assembly and maintenance (see
Fig. 1(b)). The cases themselves are stackable, and a smaller version (“Bobino”) is
used by the Dresden University of Technology [16].
Currently, we are refitting some of the cases with Raspberry Pi 2 models; the
results presented here were obtained with the RPi 1 Model B. As operating system
we use Raspbian, a port of the Linux distribution Debian optimized for the
ARMv6 instruction set, e.g., providing better performance for floating-point
arithmetic [12]. At 3 GByte, the original system image is quite heavy, so we
stripped out the entire GUI stack along with the X server to make it a completely
headless system. In this way we freed up more than 1 GByte of storage. Another
tweak relates to the BCM2835 SoC (system on chip). Its RAM is shared between
the CPU and GPU and, as we are running a headless system, we configured the
split so as to allocate the minimal amount possible to the GPU, which is
16 MByte.
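On Raspbian, the CPU/GPU memory split is controlled by a single entry in the boot configuration; as a minimal sketch (assuming the stock firmware reading /boot/config.txt, as on Raspbian of that era), the setting looks as follows:

```
# /boot/config.txt -- headless node: give the GPU the minimum
# possible share (16 MByte) of the BCM2835's shared RAM
gpu_mem=16
```

The change takes effect after a reboot, leaving the remaining RAM to the CPU.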
For the network configuration we assigned fixed IP addresses to every RPi
in the cluster. This not only made our own administration tasks easier, but
also simplified the communication between the service endpoints (proxy, load
balancer, and other Swift nodes). For remote (root) access we enabled SSH,
creating a set of public and private SSH keys.
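As a sketch, a fixed address for one node could be configured in the Debian-style network configuration file as shown below (the interface name, file location, and addresses are illustrative assumptions, not the cluster's actual settings):

```
# /etc/network/interfaces -- static address for one cluster node
# (addresses are examples only)
auto eth0
iface eth0 inet static
    address 192.168.0.101
    netmask 255.255.255.0
    gateway 192.168.0.1
```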
[Figure: two “Scale-out” panels plotting throughput (operations/sec), on a scale from 0 to 12, against the number of nodes per type (3 to 6).]
issues concerning the file system. We go into more detail in the following. The
RPi 1 Model B we used in our setup employs a 700 MHz ARM processor that
achieves roughly 0.04 GFlops (for double precision), which is not a lot compared
to the CPUs used in laptops or desktops (for instance, an Intel i5-760 quad-core
CPU running at 2.8 GHz reaches around 45 GFlops for double precision, while
an RPi 2 and 3 (both Model B) reach 0.30 GFlops and 0.46 GFlops, respectively,
for single precision). One approach for increasing the performance of an RPi is
to utilize the GPU, which can provide roughly 24 GFlops. While Broadcom
released the technical (hardware) specification in 2013⁴, there is still no stable
API for programming the GPU on RPis.
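Figures like these can be estimated with a simple matrix-multiplication micro-benchmark. The sketch below is our own rough illustration, not the benchmark used in the papers cited above; it assumes NumPy is installed on the node and effectively measures the underlying BLAS backend, counting the roughly 2n³ floating-point operations of a double-precision matrix product:

```python
import time
import numpy as np

n = 512
a = np.random.rand(n, n)  # float64 (double precision) by default
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flop = 2 * n ** 3  # a dense n x n matrix product needs ~2n^3 operations
gflops = flop / elapsed / 1e9
print(f"~{gflops:.2f} GFlops (double precision, n={n})")
```

Such a micro-benchmark only gives a ballpark figure; LINPACK/HPL, as used by the clusters in Sect. 2, is the standard way to measure this properly.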
Currently, RPis use a 100 MBit/s Ethernet connection, which is further
hampered by being combined with the USB connectors, i.e., all traffic going
through the Ethernet connection and the USB ports is processed by a combined
USB hub/Ethernet controller, sharing the USB 2.0 bandwidth of 480 MBit/s.
For that reason, the RPis in our cluster used SD cards for storage, as a
solid-state disk hooked up to a USB port would have lowered the network
bandwidth even further. With some tricks, the Ethernet connection can be made
faster, for example by using a USB 3.0 Gigabit adapter.⁵ However, the gains are
limited: with a USB 3.0 Gigabit adapter the speed of the Ethernet connection
went up to roughly 220 MBit/s. With the Raspberry Pi architecture we are stuck
with this bandwidth; an alternative would be to switch to a different platform.
For example, the Odroid XU4 offers USB 3.0 and a Gigabit Ethernet connector,
albeit at a higher energy consumption.
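To put these link speeds into perspective, a back-of-the-envelope calculation (our own illustration, not taken from the measurements above) gives the transfer time for one GByte at the stock and the adapter-boosted rates:

```python
SIZE_BITS = 8 * 10**9  # 1 GByte = 8 * 10^9 bits

# stock 100 MBit/s link vs. ~220 MBit/s via a USB 3.0 Gigabit adapter
for mbit_per_s in (100, 220):
    seconds = SIZE_BITS / (mbit_per_s * 10**6)
    print(f"{mbit_per_s} MBit/s: {seconds:.1f} s per GByte")
```

So even with the adapter, moving a single GByte between nodes takes on the order of half a minute, which matters for a distributed object store like Swift.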
By installing applications such as ownCloud on top of this stack, we can even
reach the level of Software as a Service (SaaS). Nevertheless, at the moment we
still see shortcomings of SBC clusters when it comes to performance-related
aspects. While we think that the performance can still be improved, for example
by tuning operating-system and other parameters and by switching to newer,
more powerful devices, the efficiency of these systems will never reach the level
of the server farms run by large cloud-computing providers. However, competing
in this area is not the goal of SBC clusters: we see their future at the edge of the
cloud, supporting the overall infrastructure by processing and analyzing data
locally.
In our future work we plan to investigate container architectures, such as
Docker [9], which can be used to distribute services and applications. While the
development of orchestration frameworks for Docker has started in the form of
Kubernetes and Mesos, there are still a lot of open questions when it comes to
deploying these frameworks on SBCs.
References
1. Abrahamsson, P., Helmer, S., Phaphoom, N., Nicolodi, L., Preda, N., Miori, L.,
Angriman, M., Rikkilä, J., Wang, X., Hamily, K., Bugoloni, S.: Affordable and
energy-efficient cloud computing clusters: the Bolzano Raspberry Pi cloud cluster
experiment. In: UNICO Workshop at CloudCom, Bristol (2013)
2. Basmadjian, R., De Meer, H., Lent, R., Giuliani, G.: Cloud computing and its
interest in saving energy: the use case of a private cloud. JoCCASA 1(1), 1–25
(2012)
3. Bunch, C.: OpenStack Swift, Raspberry Pi, 23 USB keys - aka GhettoSAN v2 (2013). http://openstack.prov12n.com/openstack-swift-raspberry-pi-23-usb-keys-aka-ghettosan-v2/. Accessed Feb 2014
4. Cloutier, M.F., Paradis, C., Weaver, V.M.: Design and analysis of a 32-bit embedded high-performance cluster optimized for energy and performance. In: Co-HPC 2014, pp. 1–8, New Orleans (2014)
5. Cox, S.J., Cox, J.T., Boardman, R.P., Johnston, S.J., Scott, M., O’Brien, N.S.: Iridis-Pi: a low-cost, compact demonstration cluster. Clust. Comput. 17(2), 349–358 (2014)
6. Dickinson, J.: OpenStack Swift on Raspberry Pi (2013).
http://programmerthoughts.com/openstack/swift-on-pi/. Accessed Feb 2014
7. Gropp, W., Lusk, E., Doss, N., Skjellum, A.: A high-performance, portable implementation of the MPI message passing interface standard. Parallel Comput. 22(6), 789–828 (1996)
8. Helmer, S., Pahl, C., Sanin, J., Miori, L., Brocanelli, S., Cardano, F., Gadler, D.,
Morandini, D., Piccoli, A., Salam, S., Sharear, A.M., Ventura, A., Abrahamsson,
P., Oyetoyan, T.D.: Bringing the cloud to rural and remote areas via cloudlets. In:
ACM DEV 2016, Nairobi (2016)
9. Merkel, D.: Docker: lightweight Linux containers for consistent development and
deployment. Linux J. 2014(239) (2014)
10. Pahl, C., Lee, B.: Containers and clusters for edge cloud architectures - a technology
review. In: FiCloud 2015, Rome, pp. 379–386, August 2015
11. Porter, J.H., Nagy, E., Kratz, T.K., Hanson, P., Collins, S.L., Arzberger, P.: New
eyes on the world: advanced sensors for ecology. BioScience 59(5), 385–397 (2009)