Full Stack Unit-V


FULL STACK DEVELOPER VS. CLOUD COMPUTING: THE KEY DIFFERENCE

A full stack developer is an IT professional, usually with a computer science
degree. They typically have hands-on experience dealing with all essential
hardware, software, operating systems, business logic, programming languages,
applications, and, more importantly, servers, databases, and networks.
Cloud computing isn’t a specific job or role. It’s the entire setup of IT solutions
necessary to facilitate an array of services. These services include infrastructure,
platform, software, business processes, application, management and security,
system, and desktop.
Full Stack Developers vs. Cloud Computing Jobs
One of the most popular cloud computing jobs is that of a software engineer.
A software engineer develops programs. The program could be an operating
system, an application, or even a video game.
The responsibilities of a software engineer or developer are limited to research,
design, and writing the code for the program or application.
Neither software engineers nor architects work on the entire front-end and back-
end infrastructure spectrum, be it hardware, software, or other solutions. Many
software engineers work only on a limited portion of the code required to run an
operating system, game, or application.
Software architects have a broader role in upholding coding standards and
ensuring compatibility with platforms and tools.
All other cloud computing professionals, such as data engineers, data scientists,
system engineers, system administrators, or those who work with programming
languages and different platforms, have specific roles in the larger scheme of
things. Only a full stack developer straddles the entire ecosystem.
Two cloud computing jobs are somewhat similar to that of a full stack developer.
These are front-end and back-end developers.
 A front-end developer works on all the solutions that are accessed by the end-
users.
 A back-end developer works on the entire infrastructure necessary to facilitate
the front-end services.
 A full stack developer works on both.
An Overview of Full Stack Development
Almost all IT solutions and indeed cloud computing services have two significant
ends. The first is the server-side, also known as the back-end.
The second is client-side, also referred to as the user-end. However, this is a
simplified explanation. The back-end has multiple levels or stacks, and so does
the front-end.
Every IT system, process, or solution runs on stacks. In this context, a stack is a
layer of technology within a more extensive setup. Any program based on PHP,
Java, Ruby, or Python is a significant stack. HTTP-based communication is a
different type of stack. Databases are also a kind of stack.
Multiple stacks work in synergy to run a process.
There are various stacks at the back-end, such as databases, servers, operating
systems, business logic, architectures, and more. Some stacks connect the back-
end and the front-end, such as an Application Programming Interface (API).
There are multiple front-end stacks, too, some of which are of the IT or cloud
service provider, and the others are in the native application or interface being
used by the client or customer. Full stack development covers everything in this
enormous ecosystem.
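To make this concrete, here is a minimal sketch in Python of one such connecting
stack (using Flask; the route, port, and sample data are hypothetical
illustrations, not a prescribed design): a tiny API through which a front-end
client reads from a back-end data store.

    # A minimal API-stack sketch (assuming Flask is installed: pip install flask).
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical in-memory stand-in for the database stack.
    PRODUCTS = [{"id": 1, "name": "widget", "price": 9.99}]

    @app.route("/api/products")
    def list_products():
        # A front-end stack (e.g. a React app) would fetch this JSON over HTTP.
        return jsonify(PRODUCTS)

    if __name__ == "__main__":
        app.run(port=5000)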
Cloud computing and many other IT solutions can function uninterruptedly and
as expected only when all the stacks are developed, managed, and operated
flawlessly. While every department plays its part, the overall functioning and
management rest with the full stack development team.
Key Responsibilities of a Full Stack Developer
These developers work with databases, web servers, architectures and
frameworks, various types of hardware and software, network infrastructure,
programming libraries, desktop & mobile applications, API, and UI/UX design.
Beyond these, the comprehensive role of a full stack developer requires an
eligible professional to be familiar with a combination of the following:
 Front end – JavaScript, TypeScript, Elm, XML, React, GraphQL, Bootstrap,
Angular, Gulp, Grunt
 Back end (aka server side) – JavaScript, Ruby, Python, Scala, Go, Node.js,
Express.js, SQL, Java, PHP, C#
These developers play a fundamental role in setting up, testing, improving,
managing, operating, and troubleshooting everything in the back-end, the
connecting interfaces, and the front-end of a cloud computing ecosystem.
Full stack developers usually test hardware and software, troubleshoot and
upgrade them, work on data protection and security, coordinate with other teams
to improve all back-end & front-end systems, and write the technical documents
for all the systems.
Career Prospects for a Full Stack Developer
Full stack developers are once again in demand these days, not only in cloud
computing environments. All IT-dependent sectors need full stack developers and
engineers, from banking to e-commerce, social media to manufacturing.
Eligible professionals can explore jobs in Verizon, Samsung, Deloitte, Spotify,
and Starbucks, among countless other organizations.
Salary.com reports that full stack developers earn between $82,430 and $109,156
per year, with compensation rising sharply at the lead or principal level. Indeed
reports that full stack developers or engineers with three to five years of
experience make around $128,585, which is on par with the salaries of software
architects and senior software engineers.
Final Thoughts
A full stack developer or engineer serves as an active facilitator to ensure all
departments of cloud computing or an IT environment work in tandem. It is an
all-inclusive process, but the developers or engineers aren’t solely responsible for
everything. Most companies have teams of similar developers.

Virtual private cloud


A virtual private cloud (VPC) is an on-demand configurable pool of shared
resources allocated within a public cloud environment, providing a certain level of
isolation between the different organizations (denoted as users hereafter) using the
resources. The isolation between one VPC user and all other users of the same cloud
(other VPC users as well as other public cloud users) is achieved normally through
allocation of a private IP subnet and a virtual communication construct (such as
a VLAN or a set of encrypted communication channels) per user. In a VPC, this
isolation mechanism within the cloud is accompanied by a virtual private
network (VPN) function (again, allocated per VPC user) that secures, by means of
authentication and encryption, the organization's remote access to its VPC
resources. With these isolation levels in place, an organization using this
service is in effect working on a 'virtually private' cloud (that is, as if the
cloud infrastructure is not shared with other users), and hence the name VPC.
VPC is most commonly used in the context of cloud infrastructure as a service. In this
context, the infrastructure provider, providing the underlying public cloud
infrastructure, and the provider realizing the VPC service over this infrastructure, may
be different vendors.

Implementations
Amazon Web Services launched Amazon Virtual Private Cloud on 26 August 2009,
which allows the Amazon Elastic Compute Cloud service to be connected to legacy
infrastructure over an IPsec VPN.[1][2] In AWS, the VPC itself is free to use;
however, users will be charged for any VPN they use.[3] EC2 and RDS instances
running in a VPC can also be purchased as Reserved Instances, though with
limitations on guaranteed resources.
IBM Cloud launched IBM Cloud VPC[4] on 4 June 2019, which provides the ability to
manage virtual machine-based compute, storage, and networking resources.[5]
Pricing for IBM Cloud Virtual Private Cloud is applied separately for internet
data transfer, virtual server instances, and block storage used within IBM Cloud VPC.[6]
Google Cloud Platform resources can be provisioned, connected, and isolated in a
virtual private cloud (VPC) across all GCP regions.[7] With GCP, VPCs are global
resources and subnets within that VPC are regional resources. This allows users to
connect zones and regions without the use of additional networking complexity as all
data travels, encrypted in transit and at rest, on Google's own global, private
network. Identity management policies and security rules allow for private access to
Google's storage, big data, and analytics managed services. VPCs on Google Cloud
Platform leverage the security of Google's data centers.[8]
Microsoft Azure[9] offers the possibility of setting up a VPC using Virtual Networks.

What is a virtual private cloud (VPC)?
A virtual private cloud (VPC) is a secure, isolated private cloud hosted within
a public cloud. VPC customers can run code, store data, host websites, and
do anything else they could do in an ordinary private cloud, but the private
cloud is hosted remotely by a public cloud provider. (Not all private clouds
are hosted in this fashion.) VPCs combine the scalability and convenience of
public cloud computing with the data isolation of private cloud computing.

Imagine a public cloud as a crowded restaurant, and a virtual private cloud
as a reserved table in that crowded restaurant. Even though the restaurant is
full of people, a table with a "Reserved" sign on it can only be accessed by
the party who made the reservation. Similarly, a public cloud is crowded with
various cloud customers accessing computing resources – but a VPC reserves
some of those resources for use by only one customer.
What is a public cloud? What is a
private cloud?
A public cloud is shared cloud infrastructure. Multiple customers of the cloud
vendor access that same infrastructure, although their data is not shared –
just like every person in a restaurant orders from the same kitchen, but they
get different dishes. Public cloud service providers include AWS, Google
Cloud Platform, and Microsoft Azure, among others.

The technical term for multiple separate customers accessing the same cloud
infrastructure is "multitenancy" (see What Is Multitenancy? to learn more).

A private cloud, however, is single-tenant. A private cloud is a cloud service
that is exclusively offered to one organization. A virtual private cloud (VPC) is
a private cloud within a public cloud; no one else shares the VPC with the
VPC customer.

How is a VPC isolated within a public cloud?
A VPC isolates computing resources from the other computing resources
available in the public cloud. The key technologies for isolating a VPC from
the rest of the public cloud are:

Subnets: A subnet is a range of IP addresses within a network that are
reserved so that they're not available to everyone within the network,
essentially dividing part of the network for private use. In a VPC these are
private IP addresses that are not accessible via the public Internet, unlike
typical IP addresses, which are publicly visible.
VLAN: A LAN is a local area network, or a group of computing devices that
are all connected to each other without the use of the Internet. A VLAN is a
virtual LAN. Like a subnet, a VLAN is a way of partitioning a network, but the
partitioning takes place at a different layer within the OSI model (layer 2
instead of layer 3).

VPN: A virtual private network (VPN) uses encryption to create a private
network over the top of a public network. VPN traffic passes through publicly
shared Internet infrastructure – routers, switches, etc. – but the traffic is
scrambled and not visible to anyone.

A VPC will have a dedicated subnet and VLAN that are only accessible by the
VPC customer. This prevents anyone else within the public cloud from
accessing computing resources within the VPC – effectively placing the
"Reserved" sign on the table. The VPC customer connects via VPN to their
VPC, so that data passing into and out of the VPC is not visible to other
public cloud users.
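As a small illustration of the subnet idea above, Python's standard ipaddress
module can model the private address space a VPC reserves (the 10.0.0.0/16
block is an illustrative choice, not a value any provider mandates):

    # Modeling a VPC's private subnet with the standard library.
    import ipaddress

    vpc_block = ipaddress.ip_network("10.0.0.0/16")  # the VPC's whole address space
    subnet = ipaddress.ip_network("10.0.1.0/24")     # one subnet carved out of it

    print(vpc_block.supernet_of(subnet))               # True: subnet lies inside the VPC block
    print(subnet.is_private)                           # True: RFC 1918 space, not publicly routable
    print(ipaddress.ip_address("10.0.1.5") in subnet)  # True: this host belongs to the subnet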

Some VPC providers offer additional customization with:

 Network Address Translation (NAT): This feature matches private
IP addresses to a public IP address for connections with the public
Internet. With NAT, a public-facing website or application could run
in a VPC.

 BGP route configuration: Some providers allow customers to
customize BGP routing tables for connecting their VPC with their
other infrastructure. (Learn how BGP works.)
What are the advantages of using a
VPC instead of a private cloud?
Scalability: Because a VPC is hosted by a public cloud provider, customers
can add more computing resources on demand.

Easy hybrid cloud deployment: It's relatively simple to connect a VPC to a
public cloud or to on-premises infrastructure via the VPN. (Learn about
hybrid clouds and their advantages.)

Better performance: Cloud-hosted websites and applications typically
perform better than those hosted on local on-premises servers.

Better security: The public cloud providers that offer VPCs often have more
resources for updating and maintaining the infrastructure, especially for
small and mid-market businesses. For large enterprises or any companies
that face extremely tight data security regulations, this is less of an
advantage.

How does Cloudflare support virtual private clouds?
Cloudflare makes it easy to use any cloud service by providing a single plane
of control for performance, security, and reliability services, including bot
management, DNS, SSL, and DDoS protection (even for layer 3 traffic). The
full Cloudflare stack sits in front of any cloud deployment and accelerates
good traffic while blocking bad traffic.
HORIZONTAL VS VERTICAL SCALING
To scale up or scale out? That is the question. When your business
is growing and your applications need to expand accessibility,
power, and performance, you have two options to meet the
challenge — horizontal scaling and vertical scaling. This
Touchstone blog will help you answer that question: “Should my
business scale up or scale out?”

What is Scalability?
If you work in the data center industry or any other industry, you
will probably hear two terms often referred to as horizontal scaling
and vertical scaling, and these are the two most common
buzzwords when working with data centers and data center
management systems (DMS).

First, let’s explain what scalability is. Scalability is simply
measured by the number of requests an application can handle
successfully. Once the application can no longer handle any more
simultaneous requests, it has reached its scalability limit. For
example, your application may be able to successfully handle X
simultaneous requests, but as soon as it hits X + 1 simultaneous
requests, your critical hardware resources run out and your
application has reached its maximum capacity. There are multiple
scaling approaches available to push that limit higher.
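As a toy sketch of that limit in Python (MAX_CONCURRENT stands in for
the "X" above; the handler is hypothetical), the (X + 1)-th
simultaneous request simply cannot be served:

    # A toy model of an application's scalability limit.
    import threading

    MAX_CONCURRENT = 100  # "X": the most simultaneous requests we can serve
    slots = threading.BoundedSemaphore(MAX_CONCURRENT)

    def handle_request() -> bool:
        # Try to claim one of the X slots without waiting.
        if not slots.acquire(blocking=False):
            return False  # request X + 1: critical resources are exhausted
        try:
            return True   # stand-in for doing the real work
        finally:
            slots.release()

    # Under real concurrent load, the 101st simultaneous caller would get False.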

Which approach is right for your business?


For your business to grow, to prevent downtime, and to reduce
latency, you must scale your resources accordingly. You can scale
these resources through a combination of adjustments to network
bandwidth, CPU and physical memory requirements, and hard
disk capacity. Horizontal scaling and vertical scaling both
involve adding resources to your computing infrastructure, and your
business stakeholders must decide which is right for your
organization.

The main difference between scaling up and scaling out is this:
horizontal scaling adds more machines to your existing
infrastructure, while vertical scaling adds power to the machines
you already have by increasing their CPU or RAM.

What is vertical scaling?


Vertical scaling keeps your existing infrastructure but adds
computing power. Your existing pool of code does not need to
change; you simply run the same code on machines with
better specs. By scaling up, you increase the capacity of a single
machine and increase its throughput. Vertical scaling allows data
to live on a single node, with the load spread across that machine's
CPU and RAM resources.

Vertical scaling means adding more resources to a single node, such
as additional CPU, RAM, and disk, to cope with an increasing
workload. Basically, vertical scaling gives you the ability to
increase your current hardware or software capacity, but it's
important to keep in mind that you can only increase it up to the
limits of your server: a single node can only take so much
additional CPU, memory, and disk before it can no longer handle
an increased workload on its own.
Vertical scaling can be achieved by changing the instance size or by
buying new, more powerful devices and discarding old ones.
Horizontal scaling, by contrast, means adding simple servers under a
distributed computing model instead of buying one powerful machine.
If you run your application in your own data center (not in the
cloud) and use Kubernetes to scale it, you get many of the benefits
of an on-demand server.
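As a rough sketch of what vertical scaling looks like in practice,
resizing a cloud server with Python's boto3 AWS SDK might look like
this (the instance ID and target type are hypothetical placeholders,
and the instance must be stopped before resizing):

    # Vertical scaling on AWS: same node, bigger specs (a sketch, not production code).
    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # Change the instance type to a larger size: more CPU and RAM, same single node.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "m5.2xlarge"},
    )

    ec2.start_instances(InstanceIds=[instance_id])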

Drawbacks of vertical scaling


Small- and mid-sized companies most often use vertical scaling for
their applications because it allows businesses to scale relatively
quickly compared to using horizontal scaling. One drawback of
vertical scaling is that it poses a higher risk for downtime and
outages than horizontal scaling. Correctly provisioning your
resources is the best way to ensure that upgrading was worth it
and that your business will not experience the negative effects of
vertical scaling.

What is horizontal scaling?


Horizontal scaling simply adds more instances of machines
without first implementing improvements to existing
specifications. By scaling out, you share the processing power and
load balancing across multiple machines.
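A minimal Python sketch of that idea: a round-robin load balancer
spreading requests across a scaled-out pool (the server names are
hypothetical):

    # Horizontal scaling in miniature: more machines, not bigger machines.
    import itertools

    servers = ["app-1:8000", "app-2:8000", "app-3:8000"]  # the scaled-out pool
    rotation = itertools.cycle(servers)

    def route() -> str:
        # Each incoming request goes to the next server in the rotation.
        return next(rotation)

    for i in range(6):
        print(f"request {i} -> {route()}")

    # Adding capacity means appending another server to the list,
    # not upgrading any single machine.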

Horizontal scaling means adding more machines to the resource
pool, rather than simply adding power to a single machine by
scaling vertically. Vertical scaling "zooms in," adding more power,
CPU, and RAM to the existing infrastructure; horizontal scaling
"zooms out," adding more servers to the pool of resources. Because
horizontal scaling is not bound by the capacity of a single server,
it lets you add far more capacity than vertical scaling can.

How to achieve horizontal scaling?


The first step towards horizontal scaling is to remove your
applications' reliance on server-side state tracking, so that any
instance can serve any request. Managed AWS Cloud and DevOps
Automation solutions are a great way to ensure your organization can
scale effectively and efficiently. Horizontal scaling is favored by
DevOps experts because it can be done dynamically and automatically,
scaling based on the load for optimal performance.

Factors to consider when deciding between horizontal scaling and vertical scaling
Both vertical and horizontal scaling can be performed
automatically, also known as auto-scaling, as the actual process of
scaling is not particularly difficult. When handling containers, both
horizontal and vertical scaling can be switched on automatically, so
you do not have to worry about manual intervention. If your cloud
service provider or data center has difficulty supporting vertical
scaling, it may be easier to design the application to scale
horizontally from the start.

Scaling up, or vertical scaling, is the process of adding more resources
to a single server to accommodate the growth of your application.
In the cloud, vertical scaling means resizing an existing server or
replacing it with a more powerful one. Vertical scaling can have a
few drawbacks, namely cost and hardware failure.

What is your horizontal scaling or vertical scaling strategy?
A scaling strategy can dynamically select changes in application
capacity and scale, including vertical or horizontal scaling. Scaling
strategies may include the addition or removal of a single server or
a number of servers in the same data center and may include a
combination of vertical, horizontal, and/or hybrid scaling
strategies for different data centers.

If you want superior performance, you can use either vertical or
horizontal scaling, and you can combine the two within the same
data center where necessary. You do not need both vertical scaling
and horizontal scaling to achieve better performance; you can scale
your existing machine infrastructure, pool of resources, or data
center either vertically or horizontally and avoid redundancy.

Depending on the company and application, you can weigh the
advantages and disadvantages of horizontal and vertical scaling
and determine which is best for you.
VIRTUAL MACHINE
A Virtual Machine (VM) is a compute resource that uses software instead of a
physical computer to run programs and deploy apps. One or
more virtual “guest” machines run on a physical “host” machine. Each virtual
machine runs its own operating system and functions separately from the other
VMs, even when they are all running on the same host. This means that, for
example, a macOS virtual machine can run on a physical PC.

Virtual machine technology is used for many use cases across on-premises and
cloud environments. More recently, public cloud services are using virtual machines
to provide virtual application resources to multiple users at once, for even more
cost-efficient and flexible compute.

What are virtual machines used for?


Virtual machines (VMs) allow a business to run an operating system that behaves
like a completely separate computer in an app window on a desktop. VMs may
be deployed to accommodate different levels of processing power needs, to
run software that requires a different operating system, or to test applications in a
safe, sandboxed environment.

Virtual machines have historically been used for server virtualization,
which enables IT teams to consolidate their computing resources and improve
efficiency. Additionally, virtual machines can perform specific
tasks considered too risky to carry out in a host environment, such as accessing
virus-infected data or testing operating systems. Since the virtual machine
is separated from the rest of the system, the software inside the virtual machine
cannot tamper with the host computer.

How do virtual machines work?


The virtual machine runs as a process in an application window, similar to any other
application, on the operating system of the physical machine. Key files that make up
a virtual machine include a log file, NVRAM setting file, virtual disk file and
configuration file.

Advantages of virtual machines


Virtual machines are easy to manage and maintain, and they offer several
advantages over physical machines:

 VMs can run multiple operating system environments on a single
physical computer, saving physical space, time and management costs.
 Virtual machines support legacy applications, reducing the cost
of migrating to a new operating system. For example, a Linux virtual
machine running a distribution of Linux as the guest operating system
can exist on a host server that is running a non-Linux operating system,
such as Windows.
 VMs can also provide integrated disaster recovery and application
provisioning options.
WHAT IS AN ETHERNET SWITCH?
Ethernet switching connects wired devices such as computers, laptops, routers, servers,
and printers to a local area network (LAN). Multiple Ethernet switch ports allow for faster
connectivity and smoother access across many devices at once.

An Ethernet switch creates networks and uses multiple ports to communicate between
devices in the LAN. Ethernet switches differ from routers, which connect networks and use
only a single LAN and WAN port. A full wired and wireless corporate infrastructure
provides wired connectivity and Wi-Fi for wireless connectivity.

Hubs are similar to Ethernet switches in that connected devices on the LAN will be wired
to them, using multiple ports. The big difference is that hubs share bandwidth equally
among ports, while Ethernet switches can devote more bandwidth to certain ports without
degrading network performance. When many devices are active on a network, Ethernet
switching provides more robust performance.

Routers connect networks to other networks, most commonly connecting LANs to wide
area networks (WANs). Routers are usually placed at the gateway between networks and
route data packets along the network.

Most corporate networks use combinations of switches, routers, and hubs, and wired and
wireless technology.
What Ethernet Switches Can Do For Your Network

Ethernet switches provide many advantages when correctly installed, integrated, and
managed. These include:
1. Reduction of network downtime
2. Improved network performance and increased available bandwidth on the network
3. Relieving strain on individual computing devices
4. Protecting the overall corporate network with more robust security
5. Lower IT capex and opex costs thanks to remote management and consolidated wiring
6. Right-sizing IT infrastructure and planning for future expansion using modular switches

Most corporate networks support a combination of wired and wireless technologies,
including Ethernet switching as part of the wired infrastructure. Dozens of devices can
connect to a network using an Ethernet switch, and administrators can monitor traffic,
control communications among machines, securely manage user access, and rapidly
troubleshoot.

The switches come in a wide variety of options, meaning organizations can almost always
find a solution right-sized for their network. These range from basic unmanaged network
switches offering plug-and-play connectivity, to feature-rich Gigabit Ethernet switches that
perform at higher speeds than wireless options.
How Ethernet Switches Work: Terms and Functionality

Frames are sequences of information that travel over Ethernet networks to move data
between computers. An Ethernet frame includes a destination address, which is where the
data is traveling to, and a source address, which is the location of the device sending the
frame. In the standard seven-layer Open Systems Interconnection (OSI) model for computer
networking, frames are part of Layer 2, also known as the data-link layer, which is why
Ethernet switches are sometimes known as "link layer devices" or "Layer 2 switches."

Transparent Bridging is the most popular and common form of bridging, crucial to
Ethernet switch functionality. Using transparent bridging, a switch automatically begins
working without requiring any configuration on a switch or changes to the computers in the
network (i.e. the operation of the switch is transparent).

Address Learning -- Ethernet switches control how frames are transmitted between switch
ports, making decisions on how traffic is forwarded based on 48-bit media access control
(MAC) addresses that are used in LAN standards. An Ethernet switch can learn which
devices are on which segments of the network using the source addresses of the frames it
receives.

As frames are received on the switch's ports, the software in the switch looks at each
frame's source address and adds it to a table of addresses it constantly updates and
maintains. (This is how a switch "discovers" what devices are reachable on which ports.)
This table is also known as a forwarding database, which is used by the switch to make
decisions on how to filter traffic to reach certain destinations. That the Ethernet switch
can "learn" in this manner makes it possible for network administrators to add new
connected endpoints to the network without having to manually configure the switch or
the endpoints.

Traffic Filtering -- Once a switch has built a database of addresses, it can smoothly select
how it filters and forwards traffic. As it learns addresses, a switch checks frames and makes
decisions based on the destination address in the frame. Switches can also isolate traffic to
only those segments needed to receive frames from senders, ensuring that traffic does not
unnecessarily flow to other ports.

Frame Flooding -- Entries in a switch’s forwarding database may drop from the list if the
switch doesn’t see any frames from a certain source over a period of time. (This keeps the
forwarding database from becoming overloaded with “stale” source information.) If an
entry is dropped—meaning it once again is unknown to the switch—but traffic resumes
from that entry at a later time, the switch will forward the frame to all switch ports (also
known as frame flooding) to search for its correct destination. When it connects to that
destination, the switch once again learns the correct port, and frame flooding stops.
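Putting address learning, traffic filtering, and frame flooding together, here is a toy
Python sketch of a forwarding database (the frame fields, MAC addresses, and port numbers
are simplified illustrations, not real switch firmware):

    # Toy model of a switch's forwarding database.
    forwarding_db = {}  # source MAC address -> port it was last seen on

    def handle_frame(src_mac, dst_mac, in_port, num_ports):
        forwarding_db[src_mac] = in_port  # address learning
        if dst_mac in forwarding_db:
            # Traffic filtering: forward only to the port of the known destination.
            return [forwarding_db[dst_mac]]
        # Frame flooding: destination unknown, send out every other port.
        return [p for p in range(num_ports) if p != in_port]

    print(handle_frame("aa:aa", "bb:bb", in_port=1, num_ports=4))  # floods to [0, 2, 3]
    print(handle_frame("bb:bb", "aa:aa", in_port=2, num_ports=4))  # learned: [1]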

Multicast Traffic -- LANs are not only able to transmit frames to single addresses, but
also capable of sending frames to multicast addresses, which are received by groups of
endpoint destinations. Broadcast addresses are a specific form of multicast address; they
group all of the endpoint destinations in the LAN. Multicasts and broadcasts are commonly
used for functions such as dynamic address assignment, or sending data in multimedia
applications to multiple users on a network at once, such as in online gaming. (Streaming
applications such as video, which send high rates of multicast data and generate a lot of
traffic, can hog network bandwidth.)
Managed vs. Unmanaged Ethernet Switches

Unmanaged Ethernet switching refers to switches that have no user configuration; these
can just be plugged in and turned on.

Managed Ethernet switching refers to switches that can be managed and programmed to
deliver certain outcomes and perform certain tasks, from adjusting speeds and combining
users into subgroups, to monitoring network traffic.
Secure Ethernet Switching with FortiSwitch

Fortinet switches offer advanced features in a simple, easy-to-manage solution, including
the ability to enable full security features without slowing down performance.

Docker makes development efficient and predictable


Docker takes away repetitive, mundane configuration tasks and is used
throughout the development lifecycle for fast, easy, and portable application
development, on the desktop and in the cloud. Docker's comprehensive end-to-end
platform includes UIs, CLIs, APIs, and security features that are engineered to
work together across the entire application delivery lifecycle.
Build
 Get a head start on your coding by leveraging Docker images to
efficiently develop your own unique applications on Windows and
Mac. Create your multi-container application using Docker
Compose.
 Integrate with your favorite tools throughout your development
pipeline – Docker works with all development tools you use
including VS Code, CircleCI and GitHub.
 Package applications as portable container images to run in any
environment consistently from on-premises Kubernetes to AWS
ECS, Azure ACI, Google GKE and more.
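As a rough illustration of the Build step, here is a minimal sketch using the
Docker SDK for Python (assuming pip install docker, a running Docker Engine,
and a Dockerfile in the current directory; the image tag is a hypothetical
example):

    # Building and running a container image programmatically (a sketch).
    import docker

    client = docker.from_env()  # connect to the local Docker Engine

    # Build an image from the Dockerfile in the current directory.
    image, build_logs = client.images.build(path=".", tag="myapp:latest")

    # Run the packaged application; output is whatever the image's
    # default command prints.
    output = client.containers.run("myapp:latest", remove=True)
    print(output)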

Share
 Leverage Docker Trusted Content, including Docker Official Images and images
from Docker Verified Publishers from the Docker Hub repository.
 Innovate by collaborating with team members and other developers and by easily
publishing images to Docker Hub.
 Personalize developer access to images with role-based access control and get
insights into activity history with Docker Hub Audit Logs.
Run
 Deliver multiple applications hassle free and have them run the same way on all
your environments including design, testing, staging and production – desktop or
cloud-native.
 Deploy your applications in separate containers independently and in different
languages. Reduce the risk of conflict between languages, libraries or
frameworks.
 Speed development with the simplicity of Docker Compose CLI and with one
command, launch your applications locally and on the cloud with AWS ECS and
Azure ACI.

Kubernetes, also known as K8s, is an open-source system for automating
deployment, scaling, and management of containerized applications.
 It groups containers that make up an application into logical units for easy management and
discovery. Kubernetes builds upon 15 years of experience of running production workloads
at Google, combined with best-of-breed ideas and practices from the community.

 Planet Scale
 Designed on the same principles that allow Google to run billions of containers a week,
Kubernetes can scale without increasing your operations team.
 Never Outgrow
 Whether testing locally or running a global enterprise, Kubernetes flexibility grows with
you to deliver your applications consistently and easily no matter how complex your need
is.

 Run K8s Anywhere

 Kubernetes is open source, giving you the freedom to take advantage of on-premises,
hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it
matters to you.


Kubernetes Features

 Automated rollouts and rollbacks

 Kubernetes progressively rolls out changes to your application or its
configuration, while monitoring application health to ensure it doesn't kill
all your instances at the same time. If something goes wrong, Kubernetes
will roll back the change for you. Take advantage of a growing ecosystem
of deployment solutions.
 Service discovery and load balancing

 No need to modify your application to use an unfamiliar service discovery
mechanism. Kubernetes gives Pods their own IP addresses and a single
DNS name for a set of Pods, and can load-balance across them.

 Storage orchestration

 Automatically mount the storage system of your choice, whether from
local storage, a public cloud provider such as GCP or AWS, or a network
storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.

 Secret and configuration management

 Deploy and update secrets and application configuration without
rebuilding your image and without exposing secrets in your stack
configuration.

 Automatic bin packing

 Automatically places containers based on their resource requirements and
other constraints, while not sacrificing availability. Mix critical and
best-effort workloads in order to drive up utilization and save even more
resources.

 Batch execution

 In addition to services, Kubernetes can manage your batch and CI
workloads, replacing containers that fail, if desired.

 IPv4/IPv6 dual-stack

 Allocation of IPv4 and IPv6 addresses to Pods and Services


 Horizontal scaling

 Scale your application up and down with a simple command, with a UI, or
automatically based on CPU usage (a sketch of doing this programmatically
follows this list).

 Self-healing

 Restarts containers that fail, replaces and reschedules containers when
nodes die, kills containers that don't respond to your user-defined health
check, and doesn't advertise them to clients until they are ready to serve.

 Designed for extensibility

 Add features to your Kubernetes cluster without changing upstream source
code.
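As referenced in the horizontal scaling item above, here is a minimal sketch
using the Kubernetes Python client (assuming pip install kubernetes, a working
kubeconfig, and a hypothetical deployment named "web" in the default namespace):

    # Horizontal scaling via the Kubernetes API (a sketch).
    from kubernetes import client, config

    config.load_kube_config()  # use the local kubeconfig, as kubectl would
    apps = client.AppsV1Api()

    # Equivalent of: kubectl scale deployment web --replicas=5
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )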


A container is a standard unit of software that packages up code and all its
dependencies so the application runs quickly and reliably from one computing
environment to another. A Docker container image is a lightweight, standalone,
executable package of software that includes everything needed to run an
application: code, runtime, system tools, system libraries and settings.
Container images become containers at runtime and in the case of Docker
containers – images become containers when they run on Docker Engine.
Available for both Linux and Windows-based applications, containerized
software will always run the same, regardless of the infrastructure. Containers
isolate software from its environment and ensure that it works uniformly despite
differences for instance between development and staging.
Docker containers that run on Docker Engine:
 Standard: Docker created the industry standard for containers, so they could be
portable anywhere
 Lightweight: Containers share the machine’s operating system kernel and
therefore do not require an OS per application, driving higher server
efficiencies and reducing server and licensing costs
 Secure: Applications are safer in containers and Docker provides the strongest
default isolation capabilities in the industry
