
eBook

OpenStack Made Easy



What you will learn

We're entering a phase change from traditional, monolithic scale-up software to multi-host, scale-out microservices. Welcome to the age of big software.

Big software demands that infrastructure and operations personnel approach the challenge of deployment, integration and operations from a different perspective. This eBook explains how we used to do things, why that is no longer an economically viable approach, and what can be done to achieve technical scalability without the large economic overhead of traditional approaches to modern software. By reading this eBook you will also gain a deeper understanding of why there is a perceived complexity to the installation and operations of OpenStack-based clouds. You will learn that this perceived complexity does not originate from the software itself, but rather from the outdated tools and methodologies used to deploy it.

This eBook will explain how Canonical and Ubuntu are uniquely positioned to facilitate the needs of modern, scalable, repeatable implementations of cloud infrastructure based on OpenStack. Many of the approaches that we discuss in this eBook are also useful when addressing other big software challenges in areas such as scale-out applications and workloads, big data and machine learning.


About the author

Bill Bauman, Strategy and Content, Canonical, began his technology career in processor development and has worked in systems engineering, sales, business development, and marketing roles. He holds patents on memory virtualisation technologies and is published in the field of processor performance. Bill has a passion for emerging technologies and explaining how things work. He loves helping others benefit from modern technology.

Bill Bauman
Strategy and Content, Canonical


Contents

What is OpenStack
OpenStack challenges
Who is Canonical
OpenStack Interoperability Lab (OIL)
Ubuntu OpenStack in production
More than one cloud
The perception of difficulty with big software like OpenStack
Operational intelligence at scale
MAAS - the smartest way to handle bare metal
Juju - model-driven operations for hybrid cloud services
Autopilot - the fastest way to build an OpenStack cloud
Containers in OpenStack
LXD - the pure container hypervisor
Fan networking - network addressing for containers
Conjure-up - multi-node OpenStack deployment on your laptop
ZFS and software defined storage
BootStack - your managed cloud
Conclusion
CIO's guide to SDN, NFV and VNF

What is OpenStack

General overview

OpenStack is a collection of open source software projects designed to work together to form the basis of a cloud. Primarily, it is used for private cloud implementations, but it can be just as applicable for cloud service providers to build public cloud resources. It's important to understand that OpenStack is not a single product, but rather a group of projects.

Modular

From its inception, OpenStack was designed to be modular and to be integrated with additional tools and plugins via APIs. You could choose to use any single project from OpenStack to accomplish a particular task, or several of them, to build out a more complete cloud. Canonical integrates the projects, along with additional components, into a fully fledged enterprise Cloud Platform known as Ubuntu OpenStack.

Core projects and more

The core projects of OpenStack consist of Nova (compute), Neutron (networking), Horizon (dashboard), Swift (object storage), Glance (image storage), and Keystone (identity). Beyond the core projects, there are additional solutions and tools in the industry to enhance the deployment, integration and daily operation of an OpenStack cloud.
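To make the project-to-service mapping a little more concrete, here is a minimal sketch (not from the original eBook) of how each core project surfaces through the standard python-openstackclient CLI; it assumes a working cloud and that admin credentials have already been loaded.

# Assumes python-openstackclient is installed and credentials are sourced
# (e.g. source your openrc file); resource names will vary by cloud
$ openstack server list      # Nova: compute instances
$ openstack network list     # Neutron: virtual networks
$ openstack image list       # Glance: bootable images
$ openstack container list   # Swift: object storage containers
$ openstack user list        # Keystone: identities
(Horizon is the web dashboard, so it has no CLI equivalent.)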


OpenStack challenges

Hardware configuration

Most organisations still manage some hardware. After racking and connecting it, initial configuration must be done. Some use vendor tools, some write proprietary scripts, others leverage ever-growing teams of people. Some use a combination of all of these approaches and more.

The issue with these approaches is economic scalability. If you change hardware configuration in any way, you need to pay to add/modify an ever-growing collection of scripts. If you change hardware vendor, you need to add, configure and maintain a new tool, while maintaining all previous hardware management tools. If you add more servers, you have to hire more people. None of this scales with cloud economics.

Hardware integration

Beyond the initial configuration, integration must happen. Network services must be set up and maintained, including DHCP or static IP address pools for the host NICs, DNS entries, VLANs, etc. Again, these integration tasks can be accomplished with scripts, vendor tools or personnel, but the same potential issues arise as with configuration.

OpenStack installation

Another major obstacle to OpenStack success is the initial installation. The aforementioned scripting approach is common, as are growing teams of expensive personnel.

There are also OpenStack projects to perform installation, but they are often vendor-driven, not neutral and lack feature completeness. Organisations that try to use them often find themselves doing significant, ongoing development work to make the project useful.


Additional challenges

On-going challenges all add to increasing cost and decreasing economic scalability. Additional considerations include:

Upgrades
Rebuilding
New clouds
Repeatable best practices
Scaling out
Reducing cost of consultants

A scalable, practical approach

A better, easier approach is to use vendor-, hardware- and platform-neutral tools. Tools that include APIs for automation of not just software, but your datacenter as well. Tools with graphical interfaces, designed with scalable cloud economics in mind. Putting the intelligence of installation and integration complexity directly into the tools themselves is how you make OpenStack easy and achieve economic scalability.

OpenStack installation and integration challenges are best solved by a thoughtful approach, using technologies designed for modern clouds. Legacy scripting technologies might work now, but likely won't scale as your cloud's needs change and grow. The same goes for personnel.

This eBook will go into detail about the approach and tools that make OpenStack easy.


Who is Canonical

The company

Canonical is the company behind Ubuntu, the underlying Linux server platform for 65% share of workloads on public clouds and 74% share of OpenStack deployments. We are the leading platform for OpenStack, with 55% of all production OpenStack clouds based on Ubuntu OpenStack*.

A founding member of the OpenStack Foundation, Canonical also has a long history of interoperability testing between OpenStack, Ubuntu and partner technologies. Its OpenStack Interoperability Lab (OIL) currently tests over 3,500 combinations per month.

Market focus

Canonical is focused on cloud scalability, economically and technologically. That means focusing on density with containers, operational efficiency with application modeling and financial scalability with cloud-optimized pricing.

We have proven success supporting large scale cloud customers in production, with some examples given on the Ubuntu OpenStack in production page of this eBook.

Choice

Solutions from Canonical are hardware agnostic, from platform to processor architecture and public cloud options. We recognize that modern organisations require flexibility and choice. The tools discussed in this eBook that enable ease of use and decreased operational costs are designed to work across all platforms and major clouds, not just select partners.

*Source: OpenStack User Survey 2016


OpenStack Interoperability Lab (OIL)

Proven integration testing

Canonical has a long history of interoperability testing between OpenStack and Ubuntu. The OpenStack Interoperability Lab (OIL) is the world's largest OpenStack interoperability and integration test lab. It is operated by Canonical, with over 35 major industry hardware and software partners participating. Each month we create and test over 3,500 cloud combinations in the OIL lab. We could not do this without the solutions described in this eBook.

Sophisticated testing and integration processes

Our process tests current and future developments of OpenStack against current and future developments of Ubuntu Server and Server LTS. As our ecosystem has grown, we've expanded it to include a wide array of guest operating systems, hypervisors, storage technologies, networking technologies and software-defined networking (SDN) stacks.

Why OIL makes OpenStack easier

OIL ensures the best possible user experience when standing up your Ubuntu OpenStack cloud and maintaining it. By testing up to 500,000 test cases per month, you can run your Ubuntu OpenStack cloud and technologies from our partner ecosystem with greater ease and confidence.

Find OIL partners


Ubuntu OpenStack in production

Built on Ubuntu

Almost all OpenStack projects are developed, built and tested on Ubuntu. So it's no surprise that Ubuntu OpenStack is in production at organisations of all sizes worldwide. Over half of all production OpenStack clouds are running on Ubuntu.

To give you an idea of what organisations are doing with Ubuntu OpenStack, we've highlighted a few here.

Deutsche Telekom

Deutsche Telekom, a German telecommunications company, uses Ubuntu OpenStack as the foundation of a next-generation NFV (Network Functions Virtualisation) infrastructure. Deutsche Telekom leverages Canonical's tool chain even further, using Juju as a generic Virtualised Network Functions (VNF) manager. In this case, Juju is used to model and deploy both OpenStack and the critical workloads running within the Ubuntu OpenStack environment.

"When I started working with OpenStack it took 3 months to install. Now it takes only 3 days with the help of Juju."
Robert Schwegler, Deutsche Telekom AG


Tele2

Tele2, a major European telecommunications operator with about 14 million customers in 9 countries, has also built an NFV infrastructure on Ubuntu OpenStack. They have opted for a BootStack cloud: a fully managed Ubuntu OpenStack offer from Canonical.

BootStack dramatically reduces the time it takes to bring OpenStack into production, and allows Tele2 to focus their skilled resources on telecoms solutions rather than having to keep their skills current with the fast-paced changes of OpenStack.

Walmart

Walmart, an American multinational retail corporation, uses Ubuntu OpenStack as the foundation of their private cloud. One of the key factors of scalability is economics. Here, the economic scalability of Ubuntu OpenStack cannot be overlooked. While the technology is certainly designed to scale, it's just as critical that the methodologies for deployment and billing are also designed to scale.

"[Ubuntu] OpenStack met all the performance and functional metrics we set ourselves... It is now the de facto standard and we can adapt it to our needs."
Amandeep Singh Juneja, Walmart


And many more...

NTT, Sky Group, AT&T, eBay, Samsung and many other organisations all represent customers that have elected to build clouds on Ubuntu OpenStack.

Scalable technology, scalable economics, ease-of-use and reduced time to solution are the primary reasons that so many organisations choose Ubuntu OpenStack.

"When we started our private cloud initiative we were looking for a sustainable cost base that makes it effective and viable at scale... we needed a platform that was robust and a platform that brings innovation. Ubuntu OpenStack helps us meet & realise those because of the broad experience Canonical brings."
Will Westwick, Sky Group

"We're reinventing how we scale by becoming simpler and modular, similar to how applications have evolved in cloud data centers. Open source and OpenStack innovations represent a unique opportunity to meet these requirements, and Canonical's cloud and open source expertise make them a good choice for AT&T."
Toby Ford, AT&T


More than one cloud

Value of repeatable operations

When building OpenStack clouds it's important to understand the need for repeatable operations.

One of the common misconceptions of building and operating a cloud is that you do it once and it's done. There is a tendency to put tremendous time and effort into designing both the physical and software infrastructure for what is to be a static production cloud. Often there is little thought put into rebuilding it, modifying it, or doing it many times over.

The reality of modern clouds is that there is no static production cloud that is never upgraded, expanded to more than one cloud, or rebuilt as part of a rolling upgrade.

Also, there is no one-size-fits-all cloud. Successful early cloud adopters have come to realize that remote locations may each have their own small cloud infrastructure. For scalability and redundancy, even within a single datacenter, they will end up building many, even dozens, of clouds.

Telcos, media and broadcast companies and enterprise organisations distribute operations globally, with potentially thousands of smaller, off-site operations centers. All need their own cloud to support localised and scalable infrastructure.

Even smaller organisations build development, test, staging and production clouds. Everyone needs to do these builds consistently, in a repeatable fashion, many times.


The perception of difficulty with big software like OpenStack

There's a perception that OpenStack is difficult to install and maintain without expert knowledge. This perception largely stems from a flawed approach. OpenStack is big software, which means it has so many distributed components that no single person can understand all of them with expert knowledge. Yet organisations are still looking for individuals, or teams of people, who do.

The larger the cloud, and the more solutions run on it, the more people they think they need. This approach is not scalable economically or technically.

A modern look at the OpenStack perception of difficulty reveals that the best practices for installation, integration and operations should be distilled into the software itself. The knowledge should be crowdsourced and saved in bundles that encapsulate all of the operational expertise of the leading industry experts, so that it can be easily and repeatably deployed. That is what Canonical has done, and it is what has made Ubuntu OpenStack so successful.

In the pages ahead we will show how this practice has been adopted for both hardware, with MAAS & Autopilot, and software, with Juju.

The challenge of big software

In his keynote at the OpenStack Summit Austin 2016, Mark Shuttleworth, Executive Chairman of Canonical and lead of the Ubuntu project, demonstrated how big software like OpenStack can be fast, reliable & economic.

Watch Video


Operational intelligence at scale

In order to scale, operational intelligence must no longer be a function of the number of skilled operators, but rather a function of the right tools designed to focus on the right issues. This is where Canonical's unique toolset makes Ubuntu OpenStack relatively easy compared to other offerings.

Tools built specifically for big software like OpenStack are the only way to achieve cloud economics in a private cloud. Adding personnel won't scale as your cloud grows, and using traditional scripting technologies requires too many, and too frequent, updates within a growing, dynamic cloud.

In the next section, we introduce MAAS, to manage bare metal hardware; Juju, to manage application design and deployment; and Autopilot, to completely automate the deployment and updates of an Ubuntu OpenStack cloud.

Additional tools and solutions are introduced as well. For a development/test environment, conjure-up is ideal. It can deploy single-node or multi-node OpenStack with a single command and a menu walk-through.

Since containers are vital to system density and return on investment, we will also discuss how LXD, the pure container hypervisor, and Fan networking play essential roles in solving for server and IP network density.


MAAS - the smartest way to handle bare metal

Why MAAS?

Hardware must still be installed in a datacentre. The key to economic efficiency is to touch it as few times as possible. Installing and operating a bare metal OS at scale won't work if done by hand or with custom scripts for every machine type.

MAAS stands for Metal as a Service. MAAS delivers the fastest OS installation times on bare metal in the industry, thanks to its optimised image-based installer.

Hardware configuration

With MAAS, you only touch the power button once. During the initial startup of a new server, MAAS indexes it, provisions it, and makes it cloud ready. A catalog is maintained of not only the servers, but also the inventory of devices available in them. This is a key aspect of future provisioning automation by Autopilot.

Ongoing infrastructure operations

Beyond initial configuration, MAAS also handles ongoing physical IP and DNS management. A lights-out datacentre, with a near-zero need for hands-on operations, is realized with MAAS.

Accessible

MAAS provides a REST API, a web-based interface and a command line interface. It is designed with automation and hardware-at-scale in mind. DevOps teams can even leverage it for bare metal workload management.

Integration

Since there's an API, as well as a CLI, automation tools like Juju, Chef, Puppet, SALT, Ansible and more are all easily integrated with MAAS. That means legacy, scripted automation, like Puppet and Chef, is easily integrated, whilst modern modeling tools, like Juju, can naturally rely on MAAS for hardware information.
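As a rough illustration of what driving MAAS from the command line looks like (this example is not from the original eBook; the profile name, region URL, API key and machine ID are placeholders for a MAAS 2.x installation):

$ maas login admin http://maas.example.com:5240/MAAS/api/2.0 $APIKEY    # register a CLI profile
$ maas admin machines read                                              # list the machines MAAS has enlisted
$ maas admin machine deploy $SYSTEM_ID distro_series=xenial             # image-based deploy of Ubuntu 16.04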

Learn more about MAAS at maas.io


Juju - model-driven operations for hybrid cloud services

Why Juju?

It's challenging to model, deploy, manage, monitor and scale out complex services in public or private clouds. As an application and service modelling tool, Juju enables you to quickly design, configure, deploy and manage both legacy and cloud-ready applications.

Juju has been designed with the needs of big software in mind. That is why it is not only leveraged by Autopilot for OpenStack installation and updates, but can also be used to deploy any scalable application. All of this is possible from a web interface or with a few commands.

Juju can be used to deploy hundreds of preconfigured services, OpenStack, or your own application to any public or private cloud.

Design / Configure / Deploy and manage


Web UI and Command Line Interface

Juju's user interface can be used with or without the command line interface. It provides a drag-and-drop ability to deploy individual software or complex bundles of software, like Hadoop or Ceph, performing all the integration between the associated components for you.

You have a graphical way to observe a deployment and modify it, save it, and export it. All of this can be done at the command line, as well.

Charms encapsulate best practices

Juju is the key to repeatable operations. It uses Charms that encapsulate operational intelligence into the software itself that is being deployed. The best practices of the best engineers are encapsulated in Charms.

With Juju, you don't need an expert in every OpenStack project, and an expert in every big software application, like Hadoop, in order to achieve operational excellence. All you need is an understanding of the application(s) once they have been deployed using the crowdsourced operational excellence in Juju's Charms and bundles.
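For a feel of the command line workflow, here is a minimal sketch (not taken from the eBook); it assumes a MAAS cloud has already been registered with Juju under the illustrative name mymaas, and uses the community openstack-base bundle published on jujucharms.com:

$ juju bootstrap mymaas            # create a controller on a MAAS-managed machine
$ juju deploy openstack-base       # deploy the OpenStack bundle of charms and their relations
$ juju status                      # watch the model converge as units come up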

Learn more about Juju at jujucharms.com


OpenStack Autopilot - the fastest way to build an on-premise cloud

Why OpenStack Autopilot?

Many organisations find building a production OpenStack environment challenging and are prepared to invest heavily in cloud experts to achieve operational excellence. Just like Juju and Charms, OpenStack Autopilot encapsulates this operational excellence.

As an integral feature of our Landscape systems management software, OpenStack Autopilot combines the best operational practices with the best architectural practices to arrive at a custom reference architecture for every cloud.

A decision engine for your cloud

While OpenStack Autopilot allows the user to manually determine hardware allocations, it's generally best left to the decision engine within it. The underlying infrastructure is modelled by MAAS and shared with Autopilot. Availability zones are automatically created for you. OpenStack Autopilot is built for hyperconverged architectures. It will use every disk, every CPU core, and dynamically spread load, including administrative overhead, equally across all of them. As your cloud is upgraded or nodes are added, OpenStack Autopilot can make intelligent decisions as to what to do with the new hardware resource and where to place new workloads.

Build Canonical's OpenStack Reference Architecture

The reference architecture that OpenStack Autopilot will automatically design for you will accomplish maximum utilisation of the resources given to the cloud. Since Autopilot is part of Landscape, integration with advanced systems monitoring like Nagios is readily accomplished.

Learn more about Autopilot at ubuntu.com/cloud


Containers in OpenStack

Why containers?

Containers have many benefits, but there are two things they do extremely effectively. One is to package applications for easier distribution; that is an application container, like Docker. The other is to run both traditional and cloud-native workloads at bare metal speed; that is a machine container, like LXD. Application containers can even run inside machine containers, to potentially take full advantage of both technologies.

Why now?

As more workloads move to clouds like OpenStack, the economies of scale are affected not only by the right tools and the right approach, but by the right workload density as well. We run more workloads on a given server than ever before. The fewer resources a given workload needs, the greater the return on investment for a cloud operator, public or private.

OpenStack containers made easy

While container technology is extremely compelling, there can be some difficulties in integration, operation and deployment. With the nova-lxd technology in Ubuntu 16.04, a pure container OpenStack deployment is easily achieved. Nova-lxd provides native integration for OpenStack with LXD machine containers. That means that no extra management software is needed to deploy both traditional virtual machines and modern machine containers from a native OpenStack API or Horizon dashboard.
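Because nova-lxd plugs into the standard Nova API, launching a machine container looks (under the assumptions below) just like launching a VM. The image name, flavor and instance name here are purely illustrative, and the image registered with Glance must be an LXD-compatible root filesystem image rather than a disk image:

# Register an LXD-compatible root filesystem image with Glance (name and file are illustrative)
$ openstack image create --disk-format raw --container-format bare \
    --file xenial-rootfs.tar.gz xenial-lxd
# Boot a machine container exactly as you would a virtual machine
$ openstack server create --image xenial-lxd --flavor m1.small my-lxd-instance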


LXD - the pure container hypervisor

LXD, the pure container hypervisor, is the key to delivering the world's fastest OpenStack, as demonstrated at the OpenStack Summit in Austin, TX. It achieves the lowest latency and bare metal performance.

LXD helps enable a hyperconverged Ubuntu OpenStack. It deploys in minutes. Instances that run on top of OpenStack perform at bare metal speed. Dozens of LXD instances can be launched within that OpenStack cloud in a matter of seconds. When using LXD, an entire OpenStack environment can be snapshotted in about 2 seconds.

Operational efficiency is furthered by the ability to live migrate services from one physical host to another, just like legacy hypervisors, but with a pure container hypervisor. Upgrading a host's LXD containers is as simple as upgrading the underlying OS (Ubuntu) and migrating services off and back.

You can even run LXD containers inside other LXD containers, all at bare metal speed, with no performance degradation. Traditional virtual machines must run on bare metal and cannot be run practically inside other VMs.

There are prebuilt LXD images for running CentOS, Debian, openSUSE, Fedora, and other Linux operating systems. Security is implicit, with mandatory access controls from AppArmor profiles. LXD pure containers are as secure as Linux itself.

LXD can run virtually any Linux distribution as a guest operating system. It doesn't require special virtualisation hardware. It even allows you to deploy all of OpenStack inside another cloud, like on Amazon, for example.

Learn more about LXD at ubuntu.com/lxd
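To make this concrete, here is a minimal sketch of day-to-day LXD usage on Ubuntu 16.04 (not from the eBook; the container and remote names are placeholders, and live migration additionally depends on CRIU being available on both hosts):

$ lxc launch ubuntu:16.04 web1        # start a machine container from the public Ubuntu image
$ lxc snapshot web1 before-upgrade    # near-instant snapshot of the container
$ lxc move web1 host2:web1            # migrate it to another LXD host added as the remote 'host2'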


Fan networking - network addressing for containers

Why Fan networking?

The density and performance of both machine containers (LXD) and application containers (like Docker) are extremely compelling for modern cloud economics. But their operation, specifically when it comes to network addressing, can be problematic.

In the case of application containers, each application, or binary, requires a unique IP address. With potentially hundreds of containers on any individual server, IP addresses are quickly depleted.

While there are network addressing workarounds available, like port-forwarding, they just shift an administrative burden from one technology to another. In this case, it is now port management.

Network address expansion with Fan

A much more elegant solution is Fan networking. The Fan is an address expansion technology that maps a smaller, physical address space into a larger address space on a given host. It uses technologies built into the Linux kernel to achieve near-zero loss of network performance while providing unique IPs to hundreds or even thousands of container guests.

Fan networking is another example of how Canonical is taking a thoughtful, meticulous approach to big software and OpenStack deployments. Instead of shifting the burden of a given issue from one administrative domain to another, the issue is addressed at its core, using best practices and partnership with the open source software community.

Learn more about Fan networking on the Ubuntu Insights blog.
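As a rough, illustrative sketch of the idea on Ubuntu 16.04 (not from the eBook): the ubuntu-fan package ships an interactive helper, fanatic, that sets up a fan bridge and can wire it to LXD or Docker. The address ranges in the comments are only an example of how the expansion works.

$ sudo apt install ubuntu-fan    # Fan tooling on Ubuntu 16.04 and later
$ sudo fanatic                   # interactive wizard: create a fan bridge, optionally attach LXD or Docker
# Example mapping: expanding an underlay such as 10.0.0.0/16 into the 250.0.0.0/8
# overlay gives a host at 10.0.3.4 its own 250.3.4.0/24, i.e. roughly 250 extra
# container addresses without consuming more addresses on the physical network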


Conjure-up - multi-node OpenStack deployment on your laptop

Why Conjure-up?

In the past, developing and testing software in an OpenStack environment has meant using OpenStack installers like DevStack. Whilst convenient, DevStack's monolithic architecture can't emulate a multi-node cloud environment.

Conjure-up is a command line tool exclusive to Ubuntu 16.04 that enables developers to easily deploy real-world OpenStack on a single laptop using LXD containers.

Multi-node OpenStack using LXD

Since LXD containers are like virtual machines, each OpenStack control node service is independent, even on a single physical machine. Multiple physical machines are also an option, to further mimic production environments in development and test, without the complications of an entire datacenter of hardware.

Get started with conjure-up

Using conjure-up is easy if you already have Ubuntu 16.04. It's as quick as:

$ sudo apt install conjure-up
$ conjure-up

Learn more about conjure-up at conjure-up.io
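If you already know which spell you want, conjure-up can also be launched with the spell name as an argument (shown here purely as an illustration; the menu-driven walk-through works the same way):

$ conjure-up openstack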


ZFS and software defined storage

ZFS makes better containers

ZFS accelerates LXD on Linux. Specifically, it provides:

Copy-on-write
Snapshot backups
Continuous integrity checking
Auto repairs
Efficient compression
Deduplication

All of these features improve the management and density of containers.

Container characteristics

Critical aspects of a successful container hypervisor are:

Density
Latency
Performance

Fast, secure, efficient

The features of ZFS make innovative and superior pure container technologies like LXD even better.

All clouds store data

Ubuntu Advantage Storage provides support for a number of software defined storage solutions, all priced at cloud scale. Ceph object storage is a popular technology that is readily available within Ubuntu OpenStack and provides massive scale-out storage for organisations of all sizes.

Another unique advantage of Ubuntu OpenStack and Ubuntu Advantage Storage is CephDash, which provides real-time data analytics of Ceph deployments.

Learn more about Ubuntu cloud storage at ubuntu.com/storage
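To see the ZFS and LXD combination in practice, here is a minimal sketch on Ubuntu 16.04 (not from the eBook; the container names are placeholders carried over from the earlier LXD example):

$ sudo apt install zfsutils-linux    # ZFS userland tools; kernel support ships with Ubuntu 16.04
$ sudo lxd init                      # choose ZFS as the storage backend when prompted
$ lxc copy web1 web2                 # copy-on-write makes cloning a container near-instant and space-efficient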


BootStack - your managed cloud

Why BootStack?

Even with the most advanced tools and the best teams, it can be a lot easier to get started with some help from the experts that build thousands of Ubuntu OpenStack clouds every month.

BootStack (which stands for Build, Operate and Optionally Transfer) is a managed service offering that gets you an OpenStack private cloud in a matter of weeks, instead of months.

Build / Operate / Optionally transfer


Build, Operate and Optionally Transfer

Canonical's cloud experts will design and build an Ubuntu OpenStack cloud to your specifications. The hardware can be hosted at your datacenter or at a 3rd-party provider.

When you feel comfortable managing your OpenStack environment, there is an optional transfer of administrative ownership over to your internal team.

Another option is BootStack Direct, which includes training as Canonical builds out your OpenStack cloud. Once the cloud is operational, administration of the cloud is directly transferred to your team.

With BootStack and BootStack Direct, it has never been easier to instantiate an Ubuntu OpenStack cloud. Regardless of the BootStack offer you choose, the cloud will still be built with the toolset described in this eBook, to best practices and reference architecture standards, as defined by those tools.

Learn more about BootStack at ubuntu.com/bootstack


Conclusion

OpenStack may not be easy, but it doesn't have to be difficult. The ease of OpenStack is in the approach. Big software can't be tackled with legacy tools and old-fashioned thinking.

With the right tools, OpenStack can be easy, and it can reap financial rewards for your organisation:

MAAS is the smartest way to handle bare metal
Juju enables easy model-driven operations for hybrid cloud services
Autopilot is the fastest way to build an OpenStack cloud
LXD, the pure container hypervisor, along with ZFS and Fan networking, lets you run traditional and cloud-native workloads at bare metal speed
Conjure-up is the simplest way for developers to build a multi-node OpenStack deployment on their laptop
BootStack is the easiest way to stand up your production cloud and have it managed by the world's leading OpenStack experts

To learn more about a managed solution for big data, download the datasheet BootStack Your Big Data Cloud.

If you want to start trying things out immediately, we highly encourage you to visit jujucharms.com

If you're excited to hear more and talk to us directly, you can reach us on our Contact Us page.

Enjoyed this eBook? You might also be interested in...

CIO's guide to SDN, NFV and VNF

Why is the transition happening and why is it important? Networking and communications standards and methodologies are undergoing the greatest transition since the migration from analogue to digital. The shift is from function-specific, proprietary devices to software-enabled commodity hardware.

Read this eBook to:

Familiarise yourself with the three most popular terminologies today: SDN, NFV and VNF
Learn why the transition is happening
Understand why it's important for anyone responsible for a network to understand and embrace this emerging opportunity
Learn about the potential benefits, and some deployment and management solutions for software-enabled networking

Download eBook
