Introduction
You’ve either already made your move to the cloud, or you’re thinking about it. But now what?
The benefits of moving to the cloud don’t just come about naturally. You’ve got to do
something: use technologies that let you accomplish tasks in a fraction of the time they took
before the move.
The benefits of cloud computing have spawned a new age of growth. Gartner forecasts that
worldwide end-user spending on public cloud services will grow 20.7% to a total of $591.8
billion in 2023, up from the 18.8% growth of 2022.
This ebook is designed to help you understand the various technologies that will help you
accelerate your digital transformation. Welcome to CloudCentrix, your guide to understanding
the various cloud technologies that will help you transform your IT department.
About CloudCentrix
CloudCentrix is a concept we developed to help our customers understand how various cloud
technologies can accelerate digital transformation. After realizing that many of our clients had
moved applications, data, and systems to the cloud without fully leveraging technologies that
could save their IT teams hours every workday, we created CloudCentrix to help them
understand the technologies, and the corresponding courses, that could benefit them.
Adopting new technologies and processes is the first step to working well in the cloud. The
second step is training. Repeatedly, we’ve seen clients that have adopted various cloud
technologies but weren’t using them to their full capabilities, costing loads of unnecessary time.
This ebook doesn’t cover everything the cloud offers; rather, it gives you a good working
knowledge of the various types of technologies and processes you can begin to adopt to elevate
your IT teams. There’s no required reading order: read straight through or skip around as you
see fit. The chapter divisions help you focus on the tools and technologies best suited to your
needs.
Read this ebook to unlock the full potential of the cloud.
The data center resources may be on-site, which you manage, or off-site and managed by a
third-party vendor. The computing resources are isolated and distributed through a secure
private network not shared with other clients.
Because not all cloud providers are created equal, organizations adopt a multicloud strategy to
deliver best-of-breed IT services to avoid being locked into one provider or to choose providers
based on which one offers the lowest prices.
Some companies already have on-demand IT resource delivery within their infrastructure and
do not require a public cloud. Others build private clouds because their workloads may carry
private data that companies don’t want in the public cloud due to security or compliance
concerns.
Public cloud providers and many third-party software developers help merge cloud and
on-premises resources to make management, backups, and security easier. For example,
VMware Cloud on AWS (and the VMware equivalents for Azure and Google Cloud) can help
overcome some of the public cloud's challenges, offering one of the fastest paths to
infrastructure that meets regulatory-compliance requirements. VMware Cloud migrates and
extends your on-premises environments to the public cloud. Numerous other vendors, such as
those listed below, can also assist with managing technology requirements across multiple
environments:
● Red Hat OpenShift builds on the portable nature of containers and Kubernetes,
providing a Platform-as-a-Service.
● OpenStack creates and manages cloud infrastructures.
● VMware extends your existing on-premises infrastructure and operations to any public
cloud, running enterprise programs with a consistent operating model.
● Veeam provides modern-data protection software for virtual and physical infrastructures
within a multicloud environment.
● Nutanix combines the ease and agility of public clouds with the performance and
security of private clouds. Whether on-premises or in a hybrid environment, centralized
management, one-click operations, and AI-driven automation will assure business
continuity.
● NetApp provides storage and backup solutions for hybrid clouds.
Summary
The most critical components in choosing a cloud strategy for most enterprises will be
affordability, accessibility, reliability, and scalability. Your sector, security, compliance
legislation, budget, and plans for the future will determine whether a private, public, hybrid, or
multicloud environment is the right solution for your needs.
Discover which cloud provider is right for you in this whitepaper, How to Choose the Right Cloud
Provider.
Chapter 2 VMware
VMware is software designed to create virtual machines (VMs), which are virtual copies of
computers, operating systems, and installed programs, created to maximize computing
resources. You’ll find an easy explanation of virtual machines here in “The Pros and Cons of
Virtual Machines and Containers.” VMs run independently of one another and make it possible
to accommodate multiple operating systems and workloads on a single server with high
performance and very low latency.
Traditionally, physical servers housed only one application on one server. But with VMs, rather
than running only one application on one server, you can create dozens of virtual machines, and
each one houses its own application, saving companies thousands of dollars on the costs of
buying numerous servers.
VMware offers a stack of products in addition to ESXi, which alone will allow you to build VMs
on a laptop, desktop, or server. You’ll need another computer to connect over a web browser to
your ESXi host. As long as your ESXi management network is visible, you input the IP address
of the ESXi host, log in to your account, and you can start building things from there.
While virtualization brings a lot of efficiencies to the data center, one of the challenges is
resource contention or over-working a physical host. There will be times when you want or need
to move a VM from one host to another. Without vMotion, you’d have to manually shut down the
VM, unregister it on the current host and re-register it on the new host, which all takes time and
requires a maintenance window for the application outage. vMotion lets you move a VM from
one host to another while the VM is running, so there’s no need to shut the VM down.
VMware DRS automates that entire process as it detects when a host has too much strain on it
and whether a VM would be better off being on another host. DRS will automatically move VMs
among hosts based on load. So, if a VM is consuming so many resources that its host is
struggling to keep up, DRS detects that automatically and moves VMs around the cluster to
resolve the resource constraint.
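To make the idea concrete, here is a toy Python sketch of DRS-style rebalancing. It is an illustration only, not VMware's actual algorithm: the host names, loads, and threshold are invented, and real DRS also weighs CPU, memory, and affinity rules.

```python
# Toy illustration of DRS-style load balancing (not VMware's actual
# algorithm): when a host exceeds a load threshold, move the largest
# VM that still fits onto the least-loaded host in the cluster.

def rebalance(hosts, threshold=0.8):
    """hosts: dict of host name -> list of (vm_name, load) tuples.
    Returns a list of (vm, src, dst) migrations performed."""
    migrations = []

    def load(h):
        return sum(l for _, l in hosts[h])

    for src in list(hosts):
        while load(src) > threshold:
            dst = min(hosts, key=load)                  # least-loaded host
            fits = [v for v in hosts[src] if load(dst) + v[1] <= threshold]
            if dst == src or not fits:
                break                                   # nowhere to move
            vm = max(fits, key=lambda v: v[1])          # biggest VM that fits
            hosts[src].remove(vm)
            hosts[dst].append(vm)
            migrations.append((vm[0], src, dst))
    return migrations

# Hypothetical two-host cluster: esxi-01 is over the threshold.
cluster = {
    "esxi-01": [("db", 0.5), ("web", 0.4), ("cache", 0.2)],
    "esxi-02": [("app", 0.3)],
}
moves = rebalance(cluster)
```

After rebalancing, no host exceeds the threshold; here the sketch would migrate the `db` VM from esxi-01 to esxi-02.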
Storage
Built into the VMware hypervisor is vSAN, which lets you aggregate storage devices within your
ESXi environment and create a single shared data store across your entire cluster of virtual
machines. vSAN delivers enterprise-class storage. You can create storage policies for each VM,
and if there are any policy deviations, you’ll get an alert that will show you what VMware is doing
to mitigate the difference in the policy. If you need to grow storage, you can add more hosts to
expand your vSAN datastore, scaling performance as you scale storage space.
Virtual Network
VMware NSX is the network virtualization solution. As organizations move to a software-defined
data center (SDDC) model, NSX delivers a software-centric approach to networking, including
switching, routing, firewalling, IDS/IPS, and load balancing in a distributed architecture. NSX
provides data center-wide visibility, simplified policy compliance analysis, and streamlined
security operations, connecting and protecting your workloads wherever they’re deployed. Like
VMs, these networks can be created, saved, deleted, and restored easily.
All of these functions and activities are coordinated through VMware’s vCenter Server, the
centralized management platform for your VMware vSphere environments. From a single pane
of glass, vCenter lets you manage virtual machines, multiple ESXi hosts, and all dependent
components, so you can connect and protect applications across your data center, including
private and public clouds, wherever they run: in a VM, in a container, or on bare metal.
Part of the VMware stack, Aria, provides customers with a graph — a total view — of all their
assets to effectively manage their cloud-native applications and multicloud assets across cloud
environments. This feature helps companies determine which apps should be deployed on
which cloud and how to optimize cost versus performance. Aria also helps companies detect
whether policies are being applied consistently across environments and provides federated
access to manage users and govern their access to multiple applications.
To learn about using VMware for your private cloud, start with VMware vSphere: Install,
Configure, Manage [V8].
Chapter 3 Veeam
Veeam is a software platform developed to back up, restore, and replicate data. The first
solution to focus on protecting virtual machines (VMs) and recognizing the difference between
VMs and physical endpoints, Veeam backs up data across cloud, virtual, physical, and network-
attached storage (NAS) devices.
As virtualization gains importance for companies in almost every industry, the need for
specialized backup and recovery systems also increases.
Chapter 4 Nutanix
Since companies no longer need to maintain data centers and other complex, large-scale
hardware and software installations, they don’t need the personnel to support them on-premises
at their physical locations. However, the cloud’s radically different IT delivery model introduces
new complexities and challenges. Here are just a few of them: Some enterprises, especially
businesses operating in highly regulated industries, need private clouds to ensure the security
and privacy of their sensitive data. Other enterprises may opt for the cost savings of public
clouds, or choose hybrid clouds, which involve some combination of public and private clouds
and on-premises installations. But performance, interoperability, and flexibility issues often arise
when an enterprise moves to the cloud or interacts — as is increasingly the case — with
partners, customers, and even competitors across a highly distributed industry ecosystem.
Perhaps most importantly, the shift to the cloud requires specialized IT and business process
skills — in areas like network security, virtualization, and disaster recovery.
Nutanix’s comprehensive range of products and services is designed to simplify the cloud’s
complexities, offering centralized management, one-click operations using a unified dashboard
that looks and works the same way for all users, and automation powered by artificial
intelligence (AI). Nutanix services, applications, and infrastructure offerings provide the ability to
manage virtually all of an enterprise’s end-to-end operations, including storage, computing,
virtualization, and networking, simply, efficiently, and at scale. This means that Nutanix can
move the enterprise’s entire existing tech stack — including its storage systems and network
servers, its services, virtualization resources, and more — to a hyperconverged cloud
infrastructure. Nutanix can handle the migration path, the day-to-day management of systems,
applications, and services, and the provisioning and deployment of new ones.
Automation
Automation is central to many of Nutanix’s products and services. Nutanix can automate IT
support using advanced techniques like predictive analytics to anticipate surges in calls to the
help desk and can ensure that adequate personnel are always available. Additionally, Nutanix’s
unified cloud offering means that a single IT team can manage all applications and data across
even the most complex multicloud environment. Nutanix’s products and services are
interoperable with essentially all the hardware an enterprise uses.
Nutanix has been named a Visionary in Gartner’s Magic Quadrant for Distributed File
Systems and Object Storage for two years in a row, with the research and advisory firm highlighting
Nutanix’s “ease of use and high-quality customer support experience” as one of its key
strengths. Those qualities lie at the heart of Nutanix’s complete value proposition, across its
extensive range of cloud services and applications. One important example is multicloud
security: Nutanix can create software-based firewalls that protect critical applications and data
against emerging threats across the most complex cloud environment and can do it without the
enterprise having to hire or upskill specialists in cloud security. Nutanix security products use
advanced AI techniques and ML algorithms to automate the analysis, identification, mitigation,
and reporting of security threats and ensure comprehensive regulatory compliance.
The bottom line: Nutanix offers a complete end-to-end set of products and services for
enterprises that are looking to move all or part of their systems and applications to the cloud, to
do it simply and efficiently, and to use the cloud as a platform for future innovation.
Introduce yourself to the products, capabilities, and technologies that serve as the foundation of
the Nutanix Hybrid Cloud by taking a course like Nutanix Hybrid Cloud Fundamentals (NHCF).
Chapter 5 Cloud-Native
What is Cloud-Native?
Cloud-native is a technical and business approach to using the cloud to create business
applications quickly and more frequently than ever before. The cloud-native approach is about
building applications for the cloud using microservices to increase speed to market, respond
quickly to business needs by scaling as needed, and integrate continuous integration and
continuous delivery (CI/CD) pipelines. Cloud-native apps can be developed in days or weeks as
compared to months, the time it often takes to create monolithic apps. Applications built with a
cloud-native architecture allow developers to make simple changes to them in minutes without
ever taking them offline.
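As a concrete illustration, a minimal CI/CD pipeline might look like the following sketch, shown here in GitHub Actions syntax as one common option; the Makefile target, image name, and deployment name are all hypothetical placeholders.

```yaml
# Hypothetical pipeline sketch: every push is tested, built into a
# container image, and rolled out, which is what lets cloud-native
# changes ship in minutes rather than months.
on: push
jobs:
  deliver:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test                                        # run the test suite
      - run: docker build -t example.com/app:${{ github.sha }} .
      - run: docker push example.com/app:${{ github.sha }}
      - run: kubectl set image deployment/app app=example.com/app:${{ github.sha }}
```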
Chapter 6 Microservices
Microservices have become the architecture of choice for application development. Whereas a
monolithic architecture was yesterday’s main style of developing applications, today’s modern
app development comes about through using microservices.
A component of cloud-native computing, microservices are mini-applications that compose a
business application. Each microservice is responsible for doing one discrete piece of
functionality and doing it well.
Take an eCommerce site as an example. In a microservice architecture, the company’s
shopping website may look like a typical business application, but it is a combination of dozens
or even hundreds of mini-applications consisting of a product catalog, user profiles, and order
processing. Application Programming Interfaces, referred to as APIs, are gateways between
each of these small, independent applications. Each mini-application conducts its own service
and communicates with other microservices, providing a seamless shopping experience. This is
in contrast to traditional applications, which are known as monolithic applications.
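To make the pattern concrete, here is a hypothetical Python sketch of such a shopping site: three tiny "services", each owning its own data, composed through narrow API-style calls. In production these would be separate processes communicating over HTTP; every name and price below is invented.

```python
# Hypothetical sketch of three microservices behind API boundaries.
# Each "service" exposes one function and owns its own data; the order
# service composes the others the way a real site would via HTTP calls.

# --- catalog service ---
CATALOG = {"sku-1": {"name": "Mug", "price": 12.00}}

def catalog_api(sku):
    return CATALOG.get(sku)

# --- user-profile service ---
PROFILES = {"u42": {"name": "Ada", "member": True}}

def profile_api(user_id):
    return PROFILES.get(user_id)

# --- order-processing service, calling the other services' APIs ---
def place_order(user_id, sku):
    user, item = profile_api(user_id), catalog_api(sku)
    if user is None or item is None:
        return {"status": "rejected"}
    price = item["price"] * (0.9 if user["member"] else 1.0)  # member discount
    return {"status": "accepted", "total": round(price, 2)}

order = place_order("u42", "sku-1")
```

Each service could be rewritten, scaled, or redeployed on its own; only the API contract between them has to stay stable.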
Monolithic Applications
A monolithic application is built as a single unified unit. When one part of it fails, there are often
ripple effects that cause other parts of it to fail. To make a change to this type of application, you
must take the entire application offline. Because these applications are built as one unit, they
typically aren’t deployed until the entire application has been built.
Disadvantages of Microservices
While there are a lot of benefits to microservices, they’re not perfect for every use case, and
they have some drawbacks.
Complexity
There are many moving pieces: each mini-application is its own entity and could reside
anywhere, making it difficult to see all of them at once. You have to handle requests travelling
between different modules, and remote calls to a service can experience latency. Carefully
planning out this architecture is critical.
You’ll need to spend time connecting the microservices by enabling authorized access to all the
microservices to get the application running. For small business applications with few services,
it may be best to use a monolithic architecture.
Limited Reuse of Code
Because each smaller application may be written in a programming language that’s different
from the other applications, there’s a limited ability to reuse code.
Learn how to build and connect microservices to help transform your business.
Servers
To understand VMs, let’s start with a basic understanding of servers. They’re called servers
because they serve up applications, information, or other services to other computers.
VMs are deployed on a host: either a physical computer or a physical server. You can run a VM
on a computer or laptop, but companies that run dozens or hundreds of VMs typically house
them on physical servers, which have far more resources than a personal computer. The more
resources a physical machine has, the more VMs it can host.
Before hosting business applications in the cloud, companies housed them on physical servers,
which were stored in racks in a company’s data center. Typically, there would be only one
application running on each server. Enterprises might need 1,000 physical servers to host 1,000
applications. That’s a lot of servers to buy and manage.
When an application on a traditional physical server lacks the resources that are needed to
handle increased traffic, the application slows down or, worse, crashes. But that issue rarely
occurs with VMs and containers because they take up far fewer resources. So instead of having
only one VM or container on a server, a server can house dozens of VMs or containers.
Virtual Machines
A virtual machine emulates the functionality of a physical computer. To better understand what
a VM is, let’s start with something we know: a Word document. Although a VM is more complex
than a Word document, a VM is kind of like a Word document in that they are both virtual—
meaning you can’t touch them—and they both contain information. A Word document is a
vessel that hosts content, typically made of words and graphics. A VM is a
vessel that typically hosts business applications, which consist of code. To create a Word
document, you need special software like Microsoft Word or Google Docs. So, too, to create a
VM you need software like VMware, VirtualBox, or QEMU. Virtual machines are great for
hosting applications that were created using a monolithic architecture.
Monolithic Architecture
Traditional applications were written in a monolithic architecture, meaning all code instructions
were built in one large block of code. What does that look like? Imagine one large Word
document that contains instructions for operating an electric car. There’s a section in the
document that operates the steering wheel, one that operates the blinkers, one that operates
the engine, one that operates the cruise control. If one of those instructions—say the blinkers—
fails or gets corrupted, that failure could trickle down to other sections of that operating
document. The corruption might start with the blinkers, but because the blinker instructions are
closely tied to the engine instructions, those could also become corrupted. Then before you
know it, the driver of the car not only has blinkers that don’t work but also a non-working engine.
Companies today typically build monolithic apps only for small applications. For example, if a
company wanted to create an application for employees to update their contact information and
add two people to contact in case of an emergency, that would be a small application, so it
would make sense to create it using a monolithic architecture.
When companies move a monolithic application to the cloud, they move an application off a
physical server and deploy it in a VM. A company will typically create a few different instances
of that VM, so if it ever has any type of failure, another duplicate VM takes over, giving the user
a seamless experience with no downtime. When an application resides directly on a physical
server rather than in a VM or container and that application fails, it must be taken offline until it
can be fixed.
Microservices
Microservices are independent services that don’t rely on one another to function. Using our
earlier car example, if a blinker were to go out, it wouldn’t affect the brakes, cruise control,
radio, engine, or anything else. Rather than housing all these services in one block of code as is
done in a monolithic application, each of these services is created independently as a mini-
application. Each mini-application is typically housed in its own container and communicates via
APIs to form a business application.
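The car example can be sketched in a few lines of Python. This is a hypothetical illustration of failure isolation, not a real microservice deployment: each "service" is just a function, and one of them deliberately fails.

```python
# Hypothetical sketch of failure isolation: each "service" runs
# independently, so a failing service degrades only its own feature.

def blinkers():
    raise RuntimeError("blinker service is down")   # simulated outage

def engine():
    return "running"

def cruise_control():
    return "engaged"

def car_status(services):
    """Call each service, containing any failure to that service alone."""
    status = {}
    for name, svc in services.items():
        try:
            status[name] = svc()
        except Exception:
            status[name] = "unavailable"   # contained; others keep going
    return status

status = car_status({"blinkers": blinkers, "engine": engine,
                     "cruise": cruise_control})
```

In a monolith, the equivalent exception could have corrupted shared state and taken the whole application down; here the engine and cruise control keep reporting normally.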
Containers
Each container hosts a mini-application that is part of a business application. For example, an
online retailer might offer a loyalty rewards program. The underlying coding for that program
would be its own service held in its own container. If that service were to fail, it would not affect
any of the other mini-applications associated with the overall business application. The
shopping cart and database would remain running, so even though shoppers would not see the
typical loyalty rewards offer, they could still buy products.
Systems are put into place so that all these containers communicate when needed. Users would
not be able to detect whether an application was a monolithic application or a microservices
application.
Summary
VMs are a great way to move legacy and traditional applications to the cloud. Containers work
great for creating large applications and adopting a cloud-native architecture.
Take a course to learn the core technical skills needed for VMs and containers.
Chapter 8 Docker
How Docker Simplifies and Speeds Up Application Development
Docker containers provide functionality like that of virtual machines (VMs) but are far more
lightweight, as they occupy far less space on the host machine. Since its inception in 2013, the
open source Docker engine has been one of the most impactful innovations in IT. Worldwide,
over 13 million developers currently rely on the Docker platform
to facilitate speedy, scalable, container-based development.
Prepackaged Dependencies
Traditional development often leads to a complex, convoluted matrix where different
applications require different versions of the same libraries and dependencies. Docker alleviates
that by packaging all the dependencies into a container alongside an application.
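As a sketch of what that packaging looks like, here is a minimal, hypothetical Dockerfile for a Python web app; the file names are placeholders. The base image, the pinned libraries in requirements.txt, and the application code all travel together inside the container, so every environment runs identical dependencies.

```dockerfile
# Hypothetical Dockerfile: everything the app needs is baked into one image.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # pinned dependencies
COPY . .
CMD ["python", "app.py"]
```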
Chapter 9 Kubernetes
What is Kubernetes?
Also known as K8s (for the eight letters in between the first and last letter of its name),
Kubernetes is an open source container orchestration platform that automates the deployment,
scaling, and management of containerized applications across multiple hosts. A Kubernetes
cluster consists of a set of worker machines, called nodes, that run containerized applications.
Able to run on-premises, in the cloud, and in hybrid environments, Kubernetes treats the entire
cluster as a single resource, making it much easier to deploy, scale, and manage containerized
applications.
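As a small illustration of treating the cluster as a single resource, here is a minimal, hypothetical Deployment manifest; the names and image are placeholders. You declare that three replicas of a container should exist, and Kubernetes schedules them across whatever nodes have capacity, replacing any that fail.

```yaml
# Hypothetical Deployment manifest: Kubernetes keeps three replicas of
# a containerized web app running somewhere in the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```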
· Python
Python boasts a massive collection of third-party modules and support libraries.
Compared to many other programming languages, it’s also relatively easy to learn, with
a highly engaged online community for troubleshooting and resolving issues.
Python might be beginner-friendly, but it's not just for beginners. Large organizations
ranging from Wikipedia to NASA all use Python, as do many social media companies
and some of the most prominent cloud platforms in the world, including AWS and
Microsoft Azure. Leveraging its immense development capabilities, Python is one of the
best coding languages for rapidly growing fields like artificial intelligence (AI), machine
learning (ML), and data analytics.
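As a small taste of that ease of use, the following sketch summarizes per-service response times in a few lines using only the standard library; the service names and timings are made up for illustration.

```python
# Summarize hypothetical response-time samples per service: a few
# readable lines, no third-party libraries required.
from statistics import mean

samples = [("auth", 120), ("auth", 80), ("search", 300), ("search", 260)]

by_service = {}
for service, ms in samples:
    by_service.setdefault(service, []).append(ms)

averages = {service: mean(times) for service, times in by_service.items()}
```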
· Java
First released in 1995, this high-level, general-purpose language continually ranks
among the most widely used programming languages. It’s not hard to see why. Java is
versatile, modular, and platform-independent. This means that Java applications can run
on both Windows and Linux as well as other popular operating systems. Every major
cloud platform provides a software development kit (SDK) for Java.
Java’s syntax is similar to the C and C++ languages that influenced its development but
has fewer low-level facilities than either of them. Java is an object-oriented language,
with all code written inside classes and no operator overloading support. TIOBE, which
publishes a monthly index of programming-language popularity, ranked Java in October 2022 as
the third most widely used programming language in the world.1
Like Python listed above, Java is considered relatively easy to use when compared to
other programming languages. Developers frequently take advantage of Java’s security
and portability to code scalable enterprise cloud applications.
When it comes to cloud computing, Java’s multi-platform ability to run the same program
across multiple systems makes the end-to-end development process much smoother.
Java is one of the best cloud development languages for beginners.
· .NET
Though developed by Microsoft, .NET is compatible with all major cloud platforms. With
that said, many Azure products, features, and capabilities were designed to run .NET
natively.
Released alongside .NET back in 2002, ASP.NET is an important tool that builds
upon .NET’s existing web application development capabilities with additional editing
features, libraries, and templates. It’s an open source, language-interoperable web app
framework that can be used for speedy, scalable cloud development.
· Angular
Developed by Google, Angular is an open source web platform that is increasingly
popular with cloud developers. Angular is a complete rewrite of the discontinued
AngularJS framework and has introduced core features that function independently to
reduce the risk of minor errors derailing the code.
Dynamic loading allows Angular to start up independently, discover new libraries, and
access new features, significantly speeding up start times and simplifying cross-platform
cloud development.
Angular is written in Microsoft’s TypeScript. The developers recommend using this free,
open source programming language when developing web applications in the platform.
A superset of JavaScript, TypeScript was originally designed to simplify some of the
complexities in JavaScript’s code. As such, TypeScript has all the core capabilities of
JavaScript and can be used to develop JavaScript applications. Angular can also be
used with CSS and HTML.
· React
When it comes to building user interfaces, one of the most beneficial tools in a cloud app
developer’s arsenal is the React JavaScript framework. Also known as ReactJS, it is a
front-end library for creating eye-catching user interfaces.
According to a Stack Overflow survey from 2021, React was the preferred web
framework among developers.2 It offers rapid development speed, improved efficiency,
and extensions that enable easy custom component creation. React’s speedy rendering
can drastically reduce app and website load times, helping improve SEO to get noticed
on Google and other search engines.
One of the biggest selling points, however, is the emphasis on User Interface (UI). The
ReactJS framework’s declarative components, reusable components, and virtual
Document Object Models (DOMs) enable the streamlined creation of engaging user
interfaces.
Entry-level cloud developers will also be interested in the framework’s large online
community and relative ease of use. React only deals with the view layer of a website or
app, giving JavaScript developers a much easier entry point than other options like
Angular.
· Spring
Spring is a lightweight, open source framework created for Java development, with core
features that are compatible with all Java applications.
With Spring, developers can define remote procedures without remote APIs, run
database transactions without transaction APIs, and solve complex technical problems
in real time. The Spring Framework contains a collection of sub-frameworks, including
Spring Web Flow, Spring ORM, Spring Cloud, and Spring MVC. The Spring Framework
also serves as a base for Spring Boot and Spring GraphQL.
· Go
Some developers call it Go, others call it Golang: Either way, this is one of the best
programming languages for cloud development. This robust, modern language was
ranked among the top 10 most widely used programming languages as recently as
March 2020,3 and was ranked No. 11 as of October 2022.4
At the time of its initial development, the primary goal of Go was to improve upon the
perceived deficiencies of languages like Python, JavaScript, and C while incorporating
the positive aspects of each.5 As a result, Go combines the runtime performance of C with the
ease of use and accessibility of Python and JavaScript.
Go enables the reliable, speedy development of secure, scalable apps via microservices
and boasts impressive degrees of package management and concurrency support. It
can be used across most cloud platforms but is most effective when developing cloud-
native apps for Google Cloud.
· Rust
Rust is a general-purpose, multi-paradigm programming language that emphasizes
memory safety and performance. Its built-in ability to provide safe access to hardware
and memory without a runtime or garbage collector enables developers to catch and
address unsafe code before it reaches the user.6 More importantly, Rust provides this
additional level of security without sacrificing speed or increasing memory consumption.
This combination makes it a great fit for cloud developers looking to reduce the
frequency and prevalence of bugs without slowing down or otherwise impacting
performance.
The primary drawback of Rust is its inherent difficulty. It has a steep learning curve that
can be challenging for entry-level developers. In the short term, organizations might
suffer a decrease in immediate productivity, which could scare them away from this
otherwise beneficial programming language. This difficult learning curve has led to Rust
being less widely used than it should be based on its capabilities and features. With that
said, many of these once-hesitant organizations are now realizing that Rust
development leads to secure, stable cloud-native apps.
3 TIOBE, “The Go Programming Language,” October 2022. Source.
4 TIOBE, “TIOBE Index,” October 2022. Source.
5 Stanford University, “Stanford EE Computer Systems Colloquium,” April 2010. Source.
6 TechTarget, “Rust Rises in Popularity for Cloud Native Apps,” August 2021. Source.
· Kafka
Written in Java and Scala, Kafka is an open source event streaming platform that
processes data feeds in real-time across multiple systems. Kafka enables speedy,
scalable development by capturing and recording streaming data in an immutable
commit log that can then be accessed and added to.
According to Apache, over 80% of all Fortune 500 companies use Kafka in some
capacity, and the platform has been downloaded over 5 million times.7 It’s a trusted,
secure, and highly available source of permanent storage, with a robust library of open
source tools, built-in stream processing and the ability to connect with almost everything,
including the stream processing services of the major cloud platforms. Kafka also gives
developers the capability to access and process event streams in many different
programming languages.
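The commit-log idea can be sketched in a few lines of Python. This toy model is not the Kafka API; it only illustrates the pattern: producers append immutable records, and each consumer reads from its own offset, so readers never interfere with writers or with one another.

```python
# Toy model of Kafka's core idea (not the Kafka client API): an
# append-only commit log with independent per-consumer read offsets.

class CommitLog:
    def __init__(self):
        self._records = []   # append-only; records are never mutated
        self._offsets = {}   # consumer name -> next offset to read

    def append(self, record):
        self._records.append(record)
        return len(self._records) - 1      # offset of the new record

    def poll(self, consumer):
        """Return all records this consumer hasn't seen yet."""
        start = self._offsets.get(consumer, 0)
        batch = self._records[start:]
        self._offsets[consumer] = len(self._records)
        return batch

log = CommitLog()
log.append({"event": "page_view", "user": "u1"})
log.append({"event": "purchase", "user": "u1"})

billing = log.poll("billing")        # this consumer sees both events
analytics = log.poll("analytics")    # independent offset, also sees both
log.append({"event": "page_view", "user": "u2"})
billing_next = log.poll("billing")   # only the record added since last poll
```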
· Confluent
Built by the creators of Kafka, the Confluent platform provides many of the same data
streaming capabilities without the demanding monitoring and management
requirements. Confluent simplifies the connection between Kafka infrastructure and data
sources, serving as a central source of truth for all historical and real-time data. In
addition to enabling databases and file systems to access Kafka via the Kafka Connect
API, Confluent also provides a Kubernetes operator.9
Confluent is the best option for cloud-native development on Kafka. The platform
provides direct access to a serverless, cost-effective, and highly available cloud
development ecosystem that’s currently used by most Fortune 500 companies.10
Essentially, Confluent turns Kafka from a demanding tool with significant overhead and
management requirements into an open source, enterprise-ready cloud infrastructure
solution for scalable growth.
· Selenium
Local browser testing infrastructure can be inflexible, unscalable, and expensive, and
typically lacks the capabilities required to run adequate tests. The Selenium framework
is different. Run on a cloud grid, it carries no infrastructure overhead and you pay only
for what you use, while testing with Selenium in the cloud leverages the power of parallel
testing to unlock more complete coverage.
Selenium WebDriver allows you to automate graphical user interface (GUI) tests on
Chrome, Firefox, and other leading browsers and has a sizable online community for
troubleshooting and support.
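The payoff of parallel testing is wall-clock time: running tests side by side takes roughly as long as the slowest test, not the sum of all of them. The sketch below demonstrates that effect with Python's standard library; the "browser tests" are simple stand-in functions, not real Selenium WebDriver calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def browser_test(name, duration=0.2):
    """Stand-in for a browser test that takes `duration` seconds to run."""
    time.sleep(duration)
    return (name, "passed")

tests = ["chrome_login", "firefox_login", "chrome_checkout", "firefox_checkout"]

# Serial execution: total time is roughly the sum of all test durations.
start = time.perf_counter()
serial_results = [browser_test(t) for t in tests]
serial_time = time.perf_counter() - start

# Parallel execution: total time is roughly the longest single test.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    parallel_results = list(pool.map(browser_test, tests))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

On a cloud grid the same principle holds, except each "worker" is a real browser session you rent only for the minutes it runs.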
Provisioning, configuring, managing, and reconfiguring infrastructure has long been a time-
consuming, difficult process for system administrators. Repeatedly executing any process
manually risks introducing inconsistency. And even when every step is executed correctly,
someone still has to reconfigure each server by hand when it goes offline due to an error
or accident. With automation, no one has to configure or reconfigure servers individually;
once a server is connected to a configuration tool, it happens automatically.
Founded on DevOps practices, IaC automates these processes for system administrators and
DevOps teams alike. IaC allows you to build, change, and manage your infrastructure in a safe,
consistent, and repeatable way by defining resource configurations you can version, reuse, and
share. Without IaC, teams must maintain the settings of each deployed environment
individually, which is costly in time. Over time, each environment drifts into a unique
configuration that cannot be reproduced consistently, a problem known as configuration drift,
causing inconsistencies among environments and problems with deployment and security.
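Drift is easiest to see as a diff between the declared configuration and what is actually running. The minimal sketch below illustrates the idea; the server names and setting names are hypothetical, not from any real tool:

```python
# Declared state, as it would live in version control.
desired = {"nginx_version": "1.24", "max_connections": 1024, "tls": "enabled"}

# What three manually managed servers actually look like after months
# of one-off fixes: each has drifted in its own way.
servers = {
    "web-1": {"nginx_version": "1.24", "max_connections": 1024, "tls": "enabled"},
    "web-2": {"nginx_version": "1.22", "max_connections": 1024, "tls": "enabled"},
    "web-3": {"nginx_version": "1.24", "max_connections": 512,  "tls": "disabled"},
}

def drift(actual, desired):
    """Return the settings where a server disagrees with the declared state."""
    return {k: v for k, v in actual.items() if desired.get(k) != v}

for name, config in servers.items():
    delta = drift(config, desired)
    if delta:
        print(f"{name} has drifted: {delta}")
        config.update(desired)  # reconcile back to the declared state

# After reconciliation, every server matches the single source of truth.
```

Real IaC tools do exactly this loop at scale: compare declared state to actual state, report the difference, and converge every environment back to the same definition.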
The impact of infrastructure automation via IaC
Below is a list of the strategic business benefits that infrastructure automation presents:
· Helps to stop inaccuracies and inconsistencies in the creation and documentation of
your architecture
· Provides an accurate picture of your infrastructure at any given moment
· Enables and enhances version control
· Reduces reliance on undocumented or poorly shared tribal knowledge that may be held
by just a few people in your organization
· Minimizes the risk of human error
· Reduces the time needed to manage technology infrastructure, freeing up staff to focus
on higher-value tasks
While it seems like every company should put IaC in place, many don’t because it
requires an initial time investment. But once IaC becomes part of your system, it generates
repeatable, identical environments for any one device, or group of devices, providing you
with the same environment every time it deploys and preventing configuration errors.
Discover how to automate your infrastructure with Ansible or Terraform and how to create an
automated CI/CD pipeline.
Ansible and Terraform are both used to automate repetitive tasks that typically take system
administrators hundreds of hours a month to manually perform. Yet many companies are still
not taking advantage of these Infrastructure as Code (IaC) tools, which can release system
administrators from laboring over manual procedures.
Without IaC tools, system administrators must manually configure servers, release new versions
of applications on dozens of servers, install security updates to servers and applications,
conduct backups and system reboots, create users, assign permissions for individuals and
groups, and document the latest server configurations and steps for installing applications.
Many companies still spend countless hours each week doing these tasks, but they all can be
automated with software like Terraform and Ansible.
In this article, we’ll help you understand the benefits of Terraform and Ansible, as well as the
main ways they differ from one another. But first, let’s define what each of these software
solutions was designed to do.
What is Ansible?
Ansible is a collection of open source software tools that automate configuration management,
software provisioning, intra-service orchestration, server updating, and many other routine IT
tasks. Written in Python, Ansible is easy to deploy, making it a popular option for organizations
looking to streamline IT automation. It does not require extensive programming knowledge to
understand, which is advantageous to end-users as well as DevOps teams. Ansible can
configure systems, deploy software, and orchestrate advanced workflows to support application
deployment, system updates, and more. It also supports hybrid cloud automation, network
automation, and security automation. Automation streamlines essential routine activities in
addition to testing and deploying network changes, helping you run your network more
efficiently.
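A small playbook gives a feel for Ansible's declarative YAML. The fragment below is an illustrative sketch, not a production playbook; the host group, package, and file paths are hypothetical:

```yaml
---
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Deploy site configuration
      ansible.builtin.copy:
        src: files/site.conf
        dest: /etc/nginx/conf.d/site.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

Because each task describes a desired state rather than a sequence of steps, re-running the playbook on an already-configured server changes nothing, which is what makes it safe to automate.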
What is Terraform?
Terraform is an open source solution for securely building and maintaining IaC processes. It can
manage proprietary infrastructure solutions as well as other solutions provided by third-party
vendors. Terraform-managed infrastructure can be hosted on leading public clouds like Amazon
Web Services (AWS), Google Cloud, and Microsoft Azure. Alternatively, it can be hosted on-
premises using private clouds. Terraform is commonly leveraged by IT departments and
DevOps teams to ensure a single, secure workflow across multiple cloud environments.
Terraform is also commonly used for managing Kubernetes clusters and multicloud
deployments, as well as for automating the infrastructure deployment of existing workflows.
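Terraform expresses the same declarative idea in its own configuration language, HCL. The fragment below is a minimal sketch of a single AWS resource; the AMI ID is hypothetical and a real configuration would pin provider versions and manage state remotely:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# A single EC2 instance declared as code. Running `terraform plan` shows
# the diff against real infrastructure before `terraform apply` changes it.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

The plan-then-apply workflow is Terraform's safety net: every change is previewed as a diff before anything in the cloud is actually touched.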
Whether or not you intentionally store data in the cloud, some of your data is already there,
through applications you sanction, like Salesforce, or through ones your employees use without
your permission. That’s why cloud security needs to be at the forefront of every IT decision you
make.
In the early days of public cloud service providers, many companies were leery of housing data
in the cloud because of the strict requirements they had to comply with. But nowadays,
according to the Sysdig 2022 Cloud-Native Security and Usage Report, most security incidents
in the cloud occur due to misconfigurations. Granting excessive privileges, unintentionally
exposing assets to the public, and neglecting to change weak default configurations are
examples of common misconfiguration errors that are the fault of their customers rather than the
cloud service providers (CSPs) themselves. Generally, cloud customers are not taking the
necessary precautions to secure their data.
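Excessive privileges are often just a few characters of configuration. The two AWS-style IAM policy statements below are shown side by side for comparison (a real policy would contain one or the other, and the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TooPermissive",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    },
    {
      "Sid": "ScopedInstead",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-app-bucket/*"
    }
  ]
}
```

The first statement grants every S3 action on every resource in the account; the second grants only the two actions the application needs, on one bucket. The difference in blast radius if credentials leak is enormous, yet the configuration effort is nearly identical.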
The main cloud service providers employ dozens of cybersecurity professionals and must
maintain the highest security standards for a variety of compliance requirements, like HIPAA,
PCI DSS, and the DPA. Cloud service providers are also regularly assessed by Qualified
Security Assessors (QSAs), so cloud customers should be more concerned with their own cloud
cybersecurity practices than with the security of their cloud providers.
CSPs aren’t perfect, but breaches that do occur there are mainly due to issues arising from their
customers’ subpar cloud architecture, regulatory compliance violations, poorly configured
services, vulnerable APIs, and inside attacks.
In other words, the strategies and architecture deployments of your internal team are often more
impactful to your overall cloud security than insecure data centers. Optimizing your cloud
architecture is the most powerful way to improve organizational cloud security.
Cloud security certifications are a great way to build this knowledge, but there are lots of
different options to choose from. Broadly speaking, these can be split into two categories:
vendor-specific security certifications and security certifications from professional IT
organizations.
In the security courses for each cloud provider, you’ll learn how to protect your environment in
their cloud. Courses like Security Engineering on AWS, Microsoft Azure Security Technologies,
and Security in Google Cloud provide a wide overview of security controls for their clouds. Each
cloud service provider offers different controls, so if your data is in a multicloud environment,
you’ll need security training for each CSP.
If you want to secure your public cloud, take the security courses that your cloud provider offers,
such as AWS Security Essentials or Microsoft Azure Security Technologies.
Chapter 14 DevOps
To stay competitive and better market to customers, companies consistently create new
applications and revise existing ones. But innovation is hampered when problems arise between
the development side and the operations side of application production. Development teams
write code to create new products, features, and updates; operations teams deploy and manage
software while diagnosing any errors caused by integrating new code that developers deliver for
release. Because of the barriers between these traditionally siloed teams, miscommunication
and product release delays are not uncommon. To help both sides work better together,
companies use a collaborative approach, DevOps, which unites two teams — developers (Dev)
and operations (Ops) — to improve productivity by automating more of the product release
process and monitoring software performance in time-efficient chunks.
Benefits of DevOps
Adopting the DevOps approach improves the quality of software development, the rate of
software releases, and the speed of innovation. Errors can be caught and fixed before code is
pushed out into the production environment. Automating the deployment process increases
efficiency and shortens the time to market from months or weeks to days or hours.
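The "catch errors before production" benefit comes from gating each stage of the release process on the previous one. The toy pipeline below illustrates that gating; the stage functions are stand-ins for real build, test, and deploy jobs:

```python
def run_pipeline(stages):
    """Run stages in order, stopping at the first failure so a broken
    build or a failing test never reaches the deploy stage."""
    completed = []
    for name, stage in stages:
        if not stage():
            print(f"{name} failed; halting pipeline")
            return completed
        completed.append(name)
    return completed

build = lambda: True        # the code compiles cleanly
unit_tests = lambda: False  # a test fails, simulating a caught bug
deploy = lambda: True       # never reached in this run

result = run_pipeline([("build", build), ("test", unit_tests), ("deploy", deploy)])
print(result)  # prints ['build']: the bug was caught before deployment
```

Real CI/CD systems add parallelism, artifacts, and approvals, but the core contract is the same: a failure in any stage stops the release automatically, with no human gatekeeping required.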
Git and GitHub are two different technologies used by developers. You don’t need GitHub to
use Git, but you need Git to use GitHub.
Git
Git is an open source (freely distributed) version control system (VCS), which automatically
tracks coding activity over time and allows developers to save each modified version of the code
in anticipation of situations that require reverting to an earlier version.
Git is a command-line application that developers can install and host on their personal
computers or their organization’s server, so multiple developers working on the same code base
don’t accidentally overwrite another developer’s changes. According to the 2022 Stack Overflow
Developer Survey, Git is the overwhelming choice for version control, used by almost 94% of
today’s developers.
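The "revert to an earlier version" workflow is easy to see in miniature. The sketch below drives the `git` command line from Python, assuming the git binary is installed; the repository location, file name, and commit messages are arbitrary:

```python
import os
import subprocess
import tempfile

def git(*args, repo):
    """Run a git command inside the given repository and return its stdout."""
    return subprocess.run(["git", *args], cwd=repo, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", repo=repo)
git("config", "user.email", "dev@example.com", repo=repo)
git("config", "user.name", "Dev", repo=repo)

# Commit a first version of a file.
path = os.path.join(repo, "app.txt")
with open(path, "w") as f:
    f.write("version 1\n")
git("add", "app.txt", repo=repo)
git("commit", "-m", "first version", repo=repo)
first = git("rev-parse", "HEAD", repo=repo).strip()

# Modify and commit a second version.
with open(path, "w") as f:
    f.write("version 2\n")
git("commit", "-am", "second version", repo=repo)

# Restore the file exactly as it was at the first commit.
git("checkout", first, "--", "app.txt", repo=repo)
print(open(path).read())  # prints "version 1"
```

Every saved version remains addressable by its commit hash, which is what makes it safe for a team to experiment: any change can be inspected, compared, or undone later.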
GitHub
GitHub is a web-based hosting service that provides developers with a globally accessible
platform for collaborating on Git projects. Git and GitHub are two separate entities. Git, now a
standard way of doing version control, was developed by Linus Torvalds as a free open source
program. GitHub, on the other hand, was created as a commercial (for-profit) product by
software developers unaffiliated with Torvalds. While GitHub offers a free
hosting service, a user-friendly VCS, and standard automation tools that fulfill the requirements
for many development projects, it also offers premium plans — GitHub Pro, GitHub Team, and
GitHub Enterprise — for more specialized development and deployment needs.
GitHub’s freely distributed VCS offers version control and activity tracking based on Git. GUI-
based and very intuitive, GitHub is easy to learn, even for non-programmers. GitHub’s standard
features include project management tools, such as user authentication and access controls,
permission settings, task management, and internal project team messaging.
Git users can upload their work to an online forum or virtual community to solicit feedback or
find collaborators with similar interests and expertise. Alternatively, users already working with
other programmers can upload their work to a server on a Local Area Network (LAN) or to an
online location where the work is “merged” with the main project code.
As the world’s most popular online destination for Git projects, GitHub stores uploaded data in a
filing system called a “repository.” Using GitHub’s GUI-based VCS and project management
tools, a development manager shepherds the team’s collective body of work to completion with
minimum hands-on oversight. Think of GitHub as a virtual design studio where technology
projects based on work done by a team of Git collaborators are managed and stored.
Speaking of which, Git users don’t need to use GitHub, but to collaborate on the GitHub
platform, all users (with certain exceptions) must be using Git.
For example, let’s say a Git user has an idea for a new feature or solution that will fix a bug. The
user creates a branch file where the new code is written. After uploading the file to the GitHub
repository, the modified code is subject to testing and approval before being merged with the
main code. The steps involved in this process are controlled by user-generated notifications,
known as “pull requests” and “merges,” which automatically create a communication thread for
audit review.
Sounds simple enough, but when dozens or hundreds of Git programmers are working on a
project over an extended period, GitHub’s GUI-based VCS and management tools help smooth
out the workflow and make the project manager’s task much easier.
Teams wanting to learn GitHub and developers seeking to improve their Git skills can learn
basic and advanced Git commands and best practices in just two days. Sign up now for a Git &
GitHub Boot Camp.
Daily Scrum
Every morning at the same time, everyone attends a meeting, commonly referred to as the
stand-up, where each contributor to the project reports on their progress and issues that have
arisen since the last stand-up. These daily meetings, which last about 15 minutes, hold
everyone accountable for their work and allow people to discuss improvement ideas.
The Process
After the stand-up, people get to work, and over time, the cards gradually make their way across
the board. For a sprint that lasts two weeks, by the middle of the second week, it’s expected that
the cards have moved at least one step. It’s always a race to get everything done by the end of
the sprint so that everything the team built can be released. Anything that was not completed
during a sprint stays on the board for the next sprint.
Retrospective
A retrospective meeting is held after each sprint to discuss what went well and what can be
improved upon in the next sprints. The team often discusses the best ways to improve the
processes, tools, and communication between teams and team members, as well as the action
items the team will commit to for the next sprint.
Scrum Team
A Scrum Master has no management authority over the team but is responsible for ensuring the
scrum values are followed and for removing obstacles obstructing the team’s progress. A
certified Scrum Master helps the team perform better and ensures the output meets the product
objectives.
Often instrumental in assisting with face-to-face communication with stakeholders, the Product
Owner is responsible for maximizing the value of the product, as well as for deciding when the
team needs to continue developing the product and when to release it. A product owner who
holds the Certified Scrum Product Owner title helps ensure that the team works well together
and the product meets the needs of the business.
Both Scrum and Kanban limit the work in progress, identify bottlenecks, and improve
productivity. They both break down large tasks so they can be completed efficiently and place a
high value on continual improvement.
Kanban
Kanban, a Japanese word that means signboard or billboard, encourages small, incremental,
evolutionary changes to a current process and is focused on getting things done. Kanban seeks
to improve already established processes in a non-disruptive way by continuously improving
them through constant collaboration and feedback.
Originating in the manufacturing industry, Kanban was later adopted by agile software
development teams and has since been adopted by other business lines throughout a variety of
industries.
Kanban uses a board to visualize and manage work. The columns could be the same as those
in Fig. 2 below or could include many other columns to suit the project and team. Work items
are placed on individual cards that present important information about a task: the names or job
roles of the people who will handle the work item, a brief description of the job being done,
screenshots, technical details that help explain the task, and the amount of time the piece of
work is estimated to take.
Kanban Cards
The cards for projects start on the far-left column of the Kanban board and travel from left to
right across the board, allowing team members to see the status of any work item at any point in
time. If a card is held up too long in any column, everyone will be able to see that on the
visualization board, allowing team members to identify any tasks that are taking longer than
expected.
Limits
A central Kanban principle is to limit work in progress. Collectively, the team decides how many
cards can be in any one column at any one time. For example, the team may decide that no
more than five cards can be in the Developing column and four in the Testing column.
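A WIP limit is simple to enforce mechanically. The sketch below models a board that refuses new cards once a column is full; the column names and limits echo the example above, and the class is purely illustrative, not any real Kanban tool's API:

```python
class KanbanBoard:
    """Toy Kanban board that enforces per-column work-in-progress limits."""

    def __init__(self, limits):
        self.limits = limits  # e.g. {"Developing": 5, "Testing": 4}
        self.columns = {name: [] for name in limits}

    def add(self, column, card):
        """Place a card in a column, refusing it if the WIP limit is reached."""
        if len(self.columns[column]) >= self.limits[column]:
            return False  # the team must finish work before starting more
        self.columns[column].append(card)
        return True

board = KanbanBoard({"Developing": 5, "Testing": 4})
for i in range(6):
    accepted = board.add("Developing", f"Card {i + 1}")

print(accepted)                           # prints False: the sixth card was refused
print(len(board.columns["Developing"]))   # prints 5
```

The refusal is the point: instead of silently piling up work, the limit forces a conversation about finishing what is already in progress.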
Collective Leadership
Kanban is all about visualizing and optimizing a process. All team members should have a good
understanding of Kanban skills and optimizing the system. They can become proficient in
adopting the practices by taking a Kanban workshop.
Across industries and business departments, teams rely on data to drive decision-making
and optimize day-to-day operations. But when data is siloed and inaccessible to the people in
your organization who need it, they can’t analyze all the data the company has collected to
identify valuable business opportunities. This inaccessibility is one of the areas where on-
premises analytical technologies fall far behind cloud analytics.
Lack of Scalability
The first problem companies face when performing analytics on-premises is the lack of
scalability of their traditional analytics software. Even if your data is already housed in the
cloud — and it’s likely that at least some of it is stored there — many companies still perform
their analytics on-premises. This wasn’t a huge problem twenty years ago, when most data was
manually entered by humans at a keyboard. But these days, there is so much
machine-generated data that almost every organization is a “Big Data” organization.
Traditional on-premises data warehouse devices and Hadoop clusters have tightly coupled
compute and storage, so when either the data or the amount of processing required increases,
more expensive hardware needs to be purchased. Hadoop clusters are a collection of
computers, known as nodes, that are networked together to perform parallel computations on
big data sets. When you use cloud analytics, you don’t have to buy a collection of computers to
perform parallel computations. You pay the cloud providers only for the time that you use their
machines to perform computations.
High Costs
Another problem that arises when performing traditional analytics on-premises is the high cost
of both the analytics software and the servers it runs on. These software programs can take
days or weeks to get fully up and running and must continually be maintained and paid for,
regardless of whether analytics are run for just an hour a day or 12 hours a day. Running
analytics on huge data sets requires huge computing power, and that could take one machine
on-premises dozens of hours. Over time, as your data grows, that means more servers are
needed for storage and computing.
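The pay-for-use difference is easy to quantify. The arithmetic below uses hypothetical numbers, not real cloud pricing, purely to show how the two cost models diverge:

```python
# Hypothetical monthly costs for running analytics workloads.
ON_PREM_MONTHLY = 3000.0  # server amortization + software license, fixed
CLOUD_HOURLY = 2.50       # billed only while a job is actually running
DAYS = 30

def cloud_cost(hours_per_day):
    """Monthly cloud cost scales linearly with actual usage."""
    return CLOUD_HOURLY * hours_per_day * DAYS

light = cloud_cost(1)   # one hour of analytics a day
heavy = cloud_cost(12)  # twelve hours a day

# On-premises, both workloads cost the same fixed amount every month.
print(light, heavy, ON_PREM_MONTHLY)  # prints 75.0 900.0 3000.0
```

The fixed on-premises bill is identical whether the servers run analytics for one hour a day or twelve, which is exactly the inefficiency usage-based cloud billing removes.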
Improved Security
The cloud also provides increased security and safety for both your business and your users.
Data stored in the cloud is generally backed up in multiple locations in the local region, and, for
a fee, you can also choose to store your data around the globe, eliminating the danger of a
single point of failure. Sensitive information doesn’t have to be transferred through emails or
on flash drives. The cloud analytics service providers’ tools can help you to determine whether
there’s anything that violates governance or compliance requirements like GDPR or PCI DSS.
What is Databricks?
Companies have been collecting data about their business operations since the concept of a
business originated. With the advent of computers as tools for amassing large quantities of
data, they have increasingly faced problems associated with organizing, applying, and
understanding that data. To be of value to the organization, that data must be accessible, well-
organized, and accurate.
This is where Databricks comes into play.
Microsoft Power BI and Tableau (whose parent company is Salesforce) are business
intelligence (BI) tools that gather, integrate, analyze, and present business information to help
you interpret corporate data and make wise business decisions. The 2022 Gartner Magic
Quadrant report positions both tools as leaders in the BI space.
The functionality and features of the two platforms appear incredibly similar at first inspection.
But Power BI is easier to use, allowing anyone in your organization to start creating reports and
visualizing data. Tableau is similarly powerful, but its interface isn’t as user-friendly: it will come
easily to data analysts, engineers, and data scientists, yet will be more difficult for people who
work outside of IT. Those with prior data analytics and statistics experience will have an easier
time cleaning and translating data into visuals. People just starting out with analytics software
will undoubtedly feel overwhelmed by the uphill fight of learning basic data science before
creating any dashboards or reports.
Power BI is faster and performs better when data volume is limited, whereas Tableau can
handle large volumes of data quickly. Power BI also has limited access to some databases and
servers, while Tableau has access to the types of databases that are used by data analysts like
Hadoop.
Power BI
Power BI is a powerful application for data analytics, data visualization, and ad-hoc report
creation, providing a multi-perspective view of the information. After you clean and aggregate
data to create a single data model for analysis processes, you can view, visualize and analyze
the information to generate key business insights.
Power BI imports data from .pbix files, reducing storage requirements, data transmission time,
and the need for more bandwidth. It offers several software services, over 100 connections, and
a drag-and-drop capability to make it easy to use.
Tableau
Tableau is a well-known data visualization and business intelligence program used to analyze
and create reports on massive amounts of data. The software enables users to build various
charts, graphs, maps, dashboards, and stories to view and analyze data to make business
decisions. It incorporates natural language capabilities, used in artificial intelligence, into its
software, helping to find solutions to complex problems by understanding the data better.
The learning curve for Tableau dashboard development is slightly steeper than Power BI’s, but
given that it is geared more toward data analysts than casual users, it still doesn’t require you to
be highly technical.
Ease of Use
Power BI provides end users with real-time dashboards and data analysis. To improve the user
experience, the tool also has superb drag-and-drop functionality. Users do not require
substantial technical knowledge to use its robust data analytics and discovery features.
Tableau also has powerful dashboard and reporting options, albeit some of them are less
straightforward. Tableau's live query features are advantageous to analysts. Tableau also
includes query-based visualization and drag-and-drop functionality.
To learn how to operate Tableau, take Tableau Desktop 1 Fundamentals. Business Analysts,
Data Analysts, and Data Scientists should start their Power BI learning path with Quickstart to
Power BI For Analysts And Users.
Conclusion
Once you decide on the technologies you’re most interested in working with, consider the job
roles and skills that are required to use them to their full capabilities. Even if your organization
has been using specific technologies for a while, it could be beneficial for various roles to
receive vendor-authorized training for them. Instructors often report back to us that students
who have been using a tool for more than a year say the course they just took gave them new
knowledge that would have saved them hours on projects they have previously worked on.
It's time to understand the skills and training needed to best use your various cloud technologies
to advance your digital transformation. ExitCertified has been providing IT training to individuals
and organizations for over 20 years. As well as providing standard courses, we also customize
training to suit your IT projects and goals, so your IT staff learns all they need to know to
perform their job duties. To speak with a subject matter expert who can help you discover which
courses would best suit you or your organization, contact us.