
Section 1: Practicing Modern Software Development

Introduction
As in medicine, law, education, and other fields, common practices have emerged as software development matured and evolved over the years. Knowing how to use a version control tool or a data parsing library is not only basic knowledge for every software engineering professional today; it will also save you time writing and maintaining code.

Rise of APIs in Software Design 

Software development is a broad field, and there are many different types of software that you encounter daily. Compare
the firmware running on the car you drive with the mobile banking applications you use on your mobile phone—each
serves a specific purpose. Similarly, the development process for each type of software has its own specifics. Mobile
application development is nothing like embedded programming. Yet, you can still identify some similarities across many
areas.

In the software development industry, you can observe these trends:

 Web applications: Replacing traditional desktop (fat) clients, such as Microsoft Office 365

 Proliferation of mobile applications: Often providing an alternative, seamless access to web applications

 Integration with social media: To apply existing functionalities of social platforms

 Cloud services: For data sharing and processing

 Free software and libraries: To save cost in applications and services rather than implementing everything from
scratch

APIs separate functionality into building blocks. Web applications, for example, rely on web servers to store data but use a
web browser for presenting it. Single-page applications (SPAs) are modern web applications, which dynamically load data
on demand into a single web page. This approach requires communication with the web server. Likewise, use of cloud
services, software libraries, and social media require some form of communication, either to the server hosting the service
or to the library code that you want to use. The details of communication are specified by the application programming
interface (API).

With software trends, the use of APIs is also spreading. One of the reasons more and more developers are relying on APIs
is the fact that they allow faster prototyping and development of software. APIs allow you to simply use an existing service
or library to create a working prototype or the final program, without having to implement the functionality yourself.

APIs enable communication between computer systems or programs by specifying the details of how the information is
exchanged, thereby facilitating code and functionality reuse.

This benefit is the reason so many developer-focused cloud services exist today. Not only is the functionality already implemented, but the providers also take care of maintaining the service for you. An increasing number of applications are really integrations of existing cloud services in interesting new ways. Such cloud services were designed to be used as part of
some other application. Applications can also reuse functionality of existing standalone systems if they provide an API. This
capability is most useful for implementing custom applications and automating various tasks, such as notifying users via
the Cisco Webex Teams messaging application or publishing information on a web dashboard. APIs are often used to
streamline the development process as well. Examples include building with automated processes, bug tracking, testing,
and more.

Using APIs

The increasing number of systems that expose functionality through APIs has enabled more complex solutions and has driven automation and integration unlike anything seen in the past. Therefore, consuming and building APIs has become a foundational skill for most professional software developers.

To use APIs effectively, keep in mind these considerations:

 Modular software design

 Prototyping and testing API integration

 Challenges in consuming networked APIs

 Distributed computing patterns

Prototyping and testing API actions allows you to quickly verify the feasibility of a chosen design, determine if the design
meets stakeholder needs, and ensure that integration keeps working over time, especially across version upgrades. Many
APIs rely on the network for communication, so it is also important that you understand the kinds of challenges an
unreliable network presents and how to address them.

APIs also make more complex software architectures possible. However, to tackle the additional complexity, you must
become familiar with modular software design, which will make the software you write maintainable and testable. This
design familiarity holds true for either a single program code base or across distributed systems. Distributed systems in
particular present many trade-offs, and adhering to best practices for their implementation will save you many headaches
in the end.

One example of a distributed system that is of interest to developers and DevOps engineers is infrastructure automation
workflow. It reduces management overhead by provisioning servers and networks in an automated way, either for a
specific application or for shared infrastructure. It is commonly used as part of the application build process, such as in
continuous integration and continuous deployment (CI/CD) pipelines.

The main purpose of APIs is to expose functionality, so documentation is at least as important as implementation.

Developers rely on API documentation to provide information such as:

 Which functions or endpoints to call


 Which data to provide as parameters or expect as output

 How to encode protocol messages and data objects

The first two are inevitably bound to each individual API, and you will have to consult the API documentation for your
particular use case. However, many networked APIs have settled on a common, standardized set of encodings and data
formats.

API Data Formats 

There are many different data formats that your applications can use to communicate with the wide range of APIs available on the Internet. Each format defines a syntax for encoding data so that it can be read by another machine, but in a way that is easy for humans to understand, too.

For instance, if you want to use an API to configure a Cisco router, you have to check which data formats are supported by that API. Then, you can write a request for that API that affects your router configuration. The API server interprets your request and translates it into instructions that are suitable for your router to process and act on.

You will most likely encounter these common data formats:

 YAML Ain't Markup Language (YAML)


 JavaScript Object Notation (JSON)

 eXtensible Markup Language (XML)

Common Use Cases

Take a look at some of the most common use cases of XML, JSON, and YAML. The first thing you will notice is that their
files have the same extension as their name (.xml for XML, .json for JSON and .yaml for YAML).

XML is often considered hefty and not as human readable as the other two formats. XML is verbose, redundant, and complex to use. It is mostly used to exchange highly structured data between applications (machine-to-machine communication). XML is also widely used when programming in Java.

JSON serves as an alternative to XML because it is often smaller and easier to read. It is mostly used for transmitting data between a server and a web page. JSON syntax is the same as in the JavaScript programming language; therefore, you can very easily convert JSON data into JavaScript objects. JSON syntax is also useful for YAML, because JSON is essentially a subset of YAML; parsing JSON files with a YAML parser is therefore very intuitive.

If you are building your first API and you are a nondeveloper, YAML is a good data format to start with. YAML is designed for people who are starting to write code from scratch, whereas XML and JSON are geared more toward machine readability, which is why YAML is often exported into one of those two formats. YAML is used in many configuration files today, and because of its similar indentation style, YAML files resonate with people who know Python.

Common Characteristics

Data formats are the foundation of APIs. They define the syntax and semantics, including the constraints, of working with the API.

A syntax is a way to represent a specific data format in textual form. You notice that some formats use curly braces or
square brackets, and others have tags marking the beginning and end of an element. In some formats, quotation marks or
commas are heavily used, while others do not require them at all. But no matter which syntax is used, each of these data
formats has a concept of an object. You can think of an object as a packet of information, an element that has
characteristics. An object can have one or more attributes attached to it.

Many characteristics will be broken down to the key-value concept, the key and value often being separated by a colon.
The key identifies a set of data and it is often positioned on the left side of the colon. The values are the actual data that
you are trying to represent. Usually, the data appears on the right side of the colon.

To extract the meaning of the syntax, it is crucial that you recognize how keys and values are notated when looking at the
data format. A key must be a string, while a value could be a string, a number, or a Boolean (for instance, true or false).
Other values could be more complicated, containing an array or an entirely new object that represents a lot of its own
data.

Another thing to notice when looking at a particular data format is how it handles whitespace and case sensitivity. In some formats, whitespace and case sensitivity are highly significant; in others, they carry no significance whatsoever, as you will see in the examples that follow.
One of the main points about data formats that you should bear in mind is that you can represent any kind of data in any
given format.

In the figure, there are the three previously mentioned common data formats—YAML, JSON, and XML. Each of these
examples provides details about a specific user, providing their name, role, and location.
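The figure itself is not reproduced here, but based on the sample user referenced throughout this section (a user named john located in Austin, TX, with the roles admin and user), the same information might look roughly like this in each format:

YAML:

user:
  name: john
  location:
    city: Austin
    state: TX
  roles:
    - admin
    - user

JSON:

{
    "user": {
        "name": "john",
        "location": {
            "city": "Austin",
            "state": "TX"
        },
        "roles": ["admin", "user"]
    }
}

XML:

<user>
    <name>john</name>
    <location>
        <city>Austin</city>
        <state>TX</state>
    </location>
    <roles>
        <role>admin</role>
        <role>user</role>
    </roles>
</user>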

You can quickly recognize that the same data is represented in all three formats, so it really comes down to two factors
when considering which format to choose:

 If the system you are working with prefers one format over the other, pick the format that the language uses.

 If the system can support any of them, pick the format that you are more comfortable working with.

In other words, if the API that you are addressing uses one specific format or a handful of them, you will have to choose
one of them. If the API supports any given format, it is up to you which one you prefer to use.

YAML

The first data format you will learn about is YAML which, as the name suggests, is not a markup language like XML. With its minimalistic format, it is weighted more heavily toward being human writable and readable, but it works the same way as other data formats. In general, YAML is the most human readable of all the formats and at the same time is just as easy for programs to use, which is why it is gaining popularity among engineers working with programmability.

Whitespace is significant in YAML because indentation defines the structure of a YAML file. All the data inside a particular object is at the same indentation level. In this example, the first object that is indented is a user named john. All data at the same indentation level are attributes of that same object. The next level of indentation starts at the location property, denoting an object that represents a location, with the properties city and state. Typically, YAML uses an indentation of two spaces for every newly defined object, but you can also use your preferred indentation width.
Note

Tab indentations are not allowed in YAML because they are treated differently by different tools.

In YAML, keys and values are separated only by a colon and a space, which makes it very intuitive for humans to read or write. YAML will also try to infer which data type is intended as the value, so no quotes are necessary. If a value is something such as john, YAML will assume that it is a string; you do not have to be explicit with quotes. The same concept applies to numbers and other types of values.

Note here that no commas end any of the values attached to the keys; YAML automatically knows where a value ends. Also, the "-" (dash) syntax in YAML intuitively denotes lists. It may feel as natural as writing a shopping list for your groceries: put a dash at the same indentation level in front of every element to add it to the same list.

JSON

Another heavily used data format is JSON. JSON was derived from the JavaScript programming language. Because of that
historical background, JavaScript code can easily convert data from a JSON file into native JavaScript objects.

JSON syntax uses curly braces, square brackets, and quotes for its data representation. Typically, the very first character in a JSON file is a curly brace defining a new object structure. Below that, other objects can be defined in the same way, starting with the name of the object in quotes, followed by a colon and a curly brace. Underneath, you will find all the information about that object.

Note:

There are also some exceptions regarding the very first character in a JSON file. You could come across small JSON files containing only a value—for instance, "Hello World!", 100, or true. All three are valid JSON documents.

With YAML, the whitespaces are important, but that is not the case with JSON. All whitespaces that you see are just for
humans consuming and reading the data; they have nothing to do with how the JSON file will be consumed by an
application or script. Here, you are free to choose which kind of formatting style you want to use with JSON, as long as the
other syntax rules remain satisfied.

Note:

This holds for all whitespace that is not part of a value itself. In this case, the values "john" and "j o h n" would not be considered the same because the whitespace is inside quotation marks; there, the whitespace does carry importance.

You will notice that all the data in a JSON file format is presented similarly as in YAML, using a key-value notation. Every
object starts and ends with a curly bracket, and inside that main object in this figure is user. That object defines all
information that you would like to configure for a user. You can see here that the john user is given a name and is assigned
a location and a list of roles. You will also notice that the values attributed to this user are separated by commas. A comma is required after every member except the last one; there is no comma after the final element of a JSON object or array.

XML

Another data serialization format that is broadly used for exchanging information between two machines over the Internet is XML. The beginnings of XML go back to the previous century, with the first version defined in 1998. It is very similar to HTML; they are both markup languages, meaning that they describe which parts of a document are which, not how the data is going to be displayed on your system.

For that purpose, XML code heavily uses <tags></tags> to surround elements in the form <key>value</key>. All the information about an object is defined between the opening <tag> and the closing </tag>, where the slash indicates the closing tag. When using tags, you have to be careful that the beginning and ending tags match up both in the name itself and in letter case. For instance, <tag></TAG> would not work properly, whereas <tag></tag> or <TAG></TAG> would work perfectly. A tag name that differs only in case is treated as a completely different element.

Usually in XML, tag names are written in all lowercase.

Whitespaces can be quite important in some formats, or they carry no significance in others.

XML is a combination of both. Significant whitespaces are a part of the document content and are always preserved. A
good example is the whitespaces inside the value or opening and closing tag (<t1>John Wayne</t1> is not the same
as <t1>JohnWayne</t1>). Whitespaces that are mostly meant to make XML documents more humanly readable are
insignificant whitespaces and are used between different tags (<t1><t2> is considered the same as <t1> <t2>).

An object usually contains multiple other objects inside it, as shown in the figure. The main object is <user> that ends
with </user> tag at the end of an output. It is composed of many other tags such as <name>, <location>, and <roles> to
provide all the information needed about that specific user. An object can either contain basic information (such as a
name) or more complicated data with tags being nested inside that object (such as location in this case).

XML Namespaces

With an increasing number of XML files exchanged over the Internet, you can quickly end up in a situation where two or more applications use the same tag name to represent completely different objects. The result is a conflict: a system tries to parse information from a tag that uses a different structure than the system expects. Solving that issue requires the use of namespaces and prefixes.
In this figure, you can see the same <table> element in two separate XML documents. Even though the starting tag name
is the same, each element represents different information, which could cause a name conflict. The upper XML code
carries HTML table information, whereas the lower one carries information about a table as a piece of furniture. You can
easily avoid that by defining a prefix and a namespace for each of those two elements, as shown in the right side of the
figure.

A prefix is an alphabetic character or a string put before the actual tag name followed by a colon
(<a:tag_name> or <b:tag_name>). That way, you are defining an exact tag name for your application to parse correctly.
When you are using prefixes in XML, you also have to define namespaces for those prefixes. The name of a namespace is a
Uniform Resource Identifier (URI), which provides uniquely named elements and attributes in XML documents.

Namespaces are defined with the xmlns attribute in the starting tag of an element, using the syntax xmlns:prefix="URI". A URI can be any arbitrary string as long as it is different from any other URI. It can also be a URL linking to a page with a definition of that namespace; however, there is no need for the URL to be reachable. The only thing that matters is that the URI uniquely represents a logical namespace name.
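As a sketch of how this might look (the prefixes, URIs, and element contents here are illustrative, not taken from the figure), two conflicting <table> elements could be disambiguated like this:

<root xmlns:a="https://www.example.com/html" xmlns:b="https://www.example.com/furniture">
    <a:table>
        <a:tr>
            <a:td>Apples</a:td>
            <a:td>Bananas</a:td>
        </a:tr>
    </a:table>
    <b:table>
        <b:name>Coffee table</b:name>
        <b:width>80</b:width>
        <b:length>120</b:length>
    </b:table>
</root>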

Similar to XML, both YAML and JSON can also use namespaces that define the syntax and semantics of a name element,
and in that way avoid element name conflicts. Take a look at the example codes from each format.

In this figure, you can find the same namespace encoded in each of the formats. In general, YAML and JSON do not use namespaces as often as XML does. However, JSON has one notable exception: the Representational State Transfer Configuration Protocol (RESTCONF), which requires a namespace. RESTCONF provides a subset of Network Configuration Protocol (NETCONF) functionality over HTTP; NETCONF is an IETF network management protocol designed specifically for transaction-based network management. It basically allows you to configure network devices such as routers, switches, and so on.

Note:

The namespace myapp is not a valid namespace name because it has to be in the correct URI format. This example is just
for the sake of a simplified illustration.
Serialization and Deserialization of Data 

Serialization and deserialization may sound like unfamiliar terms at first, but you are performing these two actions in your
daily life. Take for instance a telephone call between two entities. When you are talking to another person over the phone,
your words have to transform into a series of bits that are then sent over an electronic medium or a wireless signal. Your
speech has to literally transform to something understandable to the medium that it is traversing. This process is called
serialization. On the other side of that telephone call, a receiver has to do the opposite process to extract the meaning out
of those bits sent over and reconstruct your words. This process is called deserialization.

Serialization in computer science indicates converting a data structure or an object into a binary or textual format that can
be stored and reconstructed later. You want to save an object constructed in your code to a file, giving it permanent
storage. This way, you preserve the state of an object. Serialized files could be in YAML, JSON, XML, or any other textual
data format or a binary format. From here on, you'll focus on textual formats because of their convenience.

The serialized file could now be transferred to any other system over the network. The receiving system is able to open
that file and reconstruct it to the original state with all the objects defined inside. This process is the opposite of
serialization and it is called deserialization.

Like saving data to files, APIs also require reliable serialization and deserialization to exchange data. Most programming
languages, like Python, have existing tools for working with various data formats.

Here is a practical example that you could be facing as a network engineer working with network programmability. You
have written some code in Python for configuring switches in your organization. You want to send the data held in your
objects to the switch API in a format that it will understand unmistakably. You have to convert that Python data structure
into a valid YAML, JSON, or XML format. To do that, you will use the serialization process.

Other times, you might want to get some information about the specific interface on your switch through that same API.
You would receive a configuration of that switch from the API in YAML, JSON, or XML format. To interpret that file so your
Python code could understand it, you would use the deserialization process. Basically, you are extracting the details out of
a textual file and converting it into valid Python objects. This process is also called data or file parsing.
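As a minimal sketch of both directions, using the json module from the Python standard library (the interface data shown here is purely illustrative):

import json

# A Python dictionary describing a switch interface (illustrative values)
interface = {'name': 'GigabitEthernet0/1', 'enabled': True, 'mtu': 1500}

# Serialization: convert the Python object into JSON text
payload = json.dumps(interface)
print(payload)             # {"name": "GigabitEthernet0/1", "enabled": true, "mtu": 1500}

# Deserialization (parsing): convert the JSON text back into a Python object
parsed = json.loads(payload)
print(parsed['name'])      # GigabitEthernet0/1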

Introduction to Python

The Python programming language was created by Guido van Rossum and was first released in 1991. It has been gaining ground in recent years among developers, becoming a serious competitor in popularity to core programming languages such as Java or C. Many engineers adopted Python quickly because it is fast, powerful, platform independent, and open source. The syntax was designed to be clear and readable, so you can read through the actual code and understand what is going on without the complexity seen in other core programming languages. The Python syntax structure makes it easy to learn for everyone, including network engineers trying to get into network programmability as quickly as possible.

Tim Peters, one of the core Python developers working on the language from the very beginning, had a considerable influence on how Python was written. He summed up the language in 19 guiding principles that shaped how Python was actually designed. That set of principles is known as The Zen of Python and was written in the style of a short poem. If you are not familiar with The Zen of Python, you can access it by executing the import this command from the Python interactive shell.

Python is widely available, whether you are on a Linux workstation, Windows machine, or Macintosh laptop. You might
already have Python preinstalled if you are a Linux or Mac user, which makes it even easier to start using it. You will even
find Python available on multiple routers and switches across most of the Cisco platforms today.

Note:

To check if you already have Python installed on your workstation, try running the python --version or python -V (note the
capital V) command from your console window. If Python has been already installed, the command will return the current
version present on your computer. Otherwise, you will have to install Python manually.

Note:

The most recent major version of Python is 3.7. All the examples featured in this course are based on this version.

Python Libraries

One of the things you will find out working with Python is that you do not have to write every last bit of code for your
project yourself. You will discover many relevant code samples and training resources publicly available on the Internet to
do some common activities.

You have already seen all the different data formats that you might have to work with, and for the common data formats
Python provides the libraries to make it easier to work with them. A library is practically any code outside your script that
you want to use. By using libraries, you can write more efficient and consistent code and avoid a lot of unnecessary programming. In many ways, your code becomes less error prone, and you are able to reuse code that has
been tested over and over again by a fast-growing community of developers. Just by importing a specific library into your
Python code, you get a wide range of features that the library provides, without the need to develop them yourself.

There are many ways to get access to libraries. If you are only starting with Python and want to use the functionalities of
some of the existing libraries, you can check what is included with the Python standard library bundle. These
functionalities are included with Python itself and differ a bit from version to version. If you want to check the list of
libraries included with your chosen version of Python, check the website https://docs.python.org/3/library. Some common
standard libraries are datetime,  os,  sys, and json for working with JSON files. Importing packages that are part of Python
standard library is a pretty straightforward process with the import library command run inside your Python code (for
instance, import json). You will see these typically at the top of a script. An import statement is a powerful tool for getting
access to hundreds of existing Python libraries.

Sometimes, you might only need a specific resource inside a library. In this case, you would use the from library import
resource statement.
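For example, the two statements might look like this (json and datetime are both part of the standard library; the printed values are only illustrative):

import json                    # import an entire standard library module
from datetime import datetime  # import only a specific resource from a library

print(json.dumps({'status': 'ok'}))
print(datetime.now().year)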

In addition to the standard library, there is a growing collection of packages, libraries, and even entire applications
available on the Python Package Index (PyPI). The PyPI is one of the greatest resources available for Python packages. It is a repository of Python software that you can take advantage of simply by installing packages and importing them into your script.
The pip install library command gives you the ability to install and download the available libraries directly from PyPI to
your environment. After that, you can use all the underlying features that the library provides inside your code.

Every developer can create a library that does certain things, package it, and publish it on PyPI. Consequently, the PyPI
community grows rapidly each day.

Note:

The package manager for Python (pip) is already installed on your workstation if you are using Python version 3.4 or
newer. To be sure if pip is present, you can always verify by running the pip --version command from your console
window.

Also, pip is the recommended way of installing new libraries onto your system, because it will always download the latest
stable release.

If you are unsure about the package name or if a certain package even exists, you can always browse through all the
packages available directly on the https://pypi.org webpage. When you locate the package you want to use, you have an
option of downloading it manually or installing it simply through the pip install command.

If you are more skilled in programming, you may be familiar with the pip search command that is used to search for a
specific library from your console window instead of your browser. Both options work the same, so it is up to you to
choose the option that you are most comfortable using.

 PyPI repository

 Command to install a library:

pip install <libraryname>

Note

You should run the pip install command from your console window, not inside your Python script. On Windows, that
means the Command Prompt. For Linux or Mac, use the Terminal application.

Another option when searching for a specific library is accessing the GitHub repository. From there, you can quickly see the
popularity of a specific library and the last time it was updated. From that information, you can decide if a library is worth
looking into and using for your next project. From https://github.com, you have an option of downloading the library
manually and then importing it into your project.

There are numerous libraries available on the Internet that can help you deserialize or parse information from a wide
palette of data formats into a Python data structure. You will use the same libraries to serialize objects back into the
chosen data format if needed.

When you are searching for a specific library for parsing your chosen data format, you will notice that usually, there are
multiple libraries available. It is up to you to choose which one to work with given your requirements. A good practice is to
check the related documentation for a specific library to get you started working with it.

Some commonly used libraries to work with data formats are shown in this table:

For parsing YAML, there are two commonly used libraries from which you can choose. The newest, ruamel.yaml, is a
derivative from the original pyYAML library. For some time, the original was not maintained, which is why the new library
was created by a community of Python developers. Later, the pyYAML library received an update, and now both libraries
are actively maintained. The biggest difference when choosing one over the other is that they have different support for
YAML specifications. PyYAML only supports the YAML 1.1 specification, whereas ruamel.yaml  comes in handy if YAML 1.2
is required.

Today, pretty much every programming language has libraries for parsing and generating JSON data. With Python, the
JSON library is part of the Python standard library and is therefore included by default. You do not have to run any pip
install commands from your console; you can leverage JSON library features with only the import statement inside your
Python code.

Working with XML in Python, there are several native libraries available. They all differ in some way, thus providing you the
ability to choose the specific one for your needs. If you need powerful manipulation of XML, the libraries shown in the
table are worth looking at.

YAML File Manipulation with Python

When you parse data from data formats, you need to know how the actual conversion is going to happen inside your
Python code. In this case, how are the strings from a YAML file going to be translated into Python objects? YAML probably
is one of the closest data formats to Python itself because it natively maps the data into a Python dictionary, on which you
can do all sorts of powerful manipulation later on.
In the figure, there is a snippet from a YAML configuration file. The file hosts information about an application user. This
user has a username, office location information, and assigned application roles.

How the translation from a YAML structure into a Python object is going to happen is shown in the table in this figure. By
default, a YAML object is converted to a Python dictionary, an array is converted to a list, and so on.

Once you have your preferred library installed—in this case, PyYAML—you have to first import it into your script. You do
that with a simple import yaml  statement at the top of your code. Note that the PyYAML library does not come with
Python natively, so you have to make sure that the library is installed on your system prior to importing it into your script
and taking advantage of all the features the library provides. If that library has not already been installed on your system,
simply run the pip install PyYAML command from your console window to install it.

The next action is to open the YAML file and load it into a Python object so that you can work with the data more easily.
You do that by defining a variable in Python to parse all the information from that file. You are using
the yaml.safe_load() method for that purpose.
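A minimal sketch of these two steps, assuming the YAML data is stored in a file named user.yaml, might look like this:

import yaml

# Open the YAML file (the filename is illustrative) and parse it into a Python object
with open('user.yaml') as yaml_file:
    data = yaml.safe_load(yaml_file)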

To print out the Python object into which you parsed all the data from the YAML file, use this code: print(data). Your output should look something like this:
{'user': {'name': 'john', 'location': {'city': 'Austin', 'state': 'TX'}, 'roles': ['admin', 'user']}}

You can see that the output is not the cleanest; it is all kind of bunched together. It looks like a Python dictionary. To check
the type of that data variable, use the following code:
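print(type(data))

This returns the following:

<class 'dict'>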

In this case, you can confirm that this is a valid Python dictionary. You can access elements by key names—for
example, data['user'].

From here on, all the manipulation you are doing is on the Python dictionary inside your script and does not affect the
YAML file itself.

Next, you would like to traverse through all the roles in your dictionary and print them out. You do that by creating a
Python loop to go through all the objects with a key roles.
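Such a loop might look like this:

for role in data['user']['roles']:
    print(role)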

That loop in the example would give you the following output:
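admin
user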

You have now successfully opened a YAML file, parsed the information to a variable, traversed through the desired key,
and printed out the desired data.

Suppose that you are faced with a task to change the location for a bunch of users as a result of moving the office of your
organization to Dallas. Manually changing all of them would be very time consuming. For that purpose, you want to use
your existing script to perform changes on those users.

The data in question is city, which is currently set to the value of "Austin". Changing it requires you to locate that key inside
your Python dictionary and change its value to the new city name.
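In terms of the dictionary parsed earlier, that change might look like this:

data['user']['location']['city'] = 'Dallas'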

Until now, all you were doing is changing the content of a Python dictionary inside your script. To save all the changes
permanently, you have to serialize it back to the YAML file. In this example, you are creating a new YAML file for that
purpose. That way, you can compare it later to the original file and see the effects made by your script.

Similarly, as with opening the file, you have a method inside the PyYAML library to save information back to a file. Using
the method yaml.dump() inside your script makes it very convenient to do so.
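A sketch of this step, assuming you write the result to a new file named user_new.yaml, might look like this:

# Serialize the modified dictionary back to YAML in a new file
with open('user_new.yaml', 'w') as yaml_file:
    yaml.dump(data, yaml_file, default_flow_style=False)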

JSON File Manipulation with Python

Here, you have the same information about this user saved in a JSON format. The syntax almost looks like a Python
dictionary. This JSON file in the figure has a key called user, and a value of it is an object, representing the user
named john.
As you can see in the conversion table in the figure, the same translations into Python elements are being made as with a
YAML data format. A JSON object gets natively translated into a Python dictionary, which makes working with JSON
largely similar to YAML.

From the functionality perspective, JSON and YAML are very similar when writing your Python code.

First, you have to open the JSON file and parse it into a Python dictionary before you can do any manipulation of that data.
For that purpose, you are using the method json.load(). After you are done working on that dictionary inside your script,
you want to save the changes back to the file. For that, you are leveraging the method json.dump(), which serializes the data back to a JSON file.
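A minimal sketch, assuming the data shown in the figure is stored in a file named user.json, might look like this:

import json

# Parse the JSON file (the filename is illustrative) into a Python dictionary
with open('user.json') as json_file:
    data = json.load(json_file)

# Traverse the roles, exactly as in the YAML example
for role in data['user']['roles']:
    print(role)

# Change the city and serialize the dictionary back to a new JSON file
data['user']['location']['city'] = 'Dallas'
with open('user_new.json', 'w') as json_file:
    json.dump(data, json_file, indent=4)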

If you look closely at the code in the figure, you may notice that the core code is the same as in the YAML example. The
loop created for traversing through all the roles and printing them out is the same code. In both cases, the data is parsed
into a dictionary so that you are working on the same Python element type.

XML File Manipulation with Python

In the figure, you can see the same data as in the previous two examples, represented also in an XML data format. Because
of the XML syntax structure, parsing XML documents differs a bit from the other two shown.
Python offers numerous libraries for manipulating XML data. The main differentiation between them is the way that they
parse an XML file into Python elements. You have the option of converting the XML data into a Python object represented
as nodes and attributes using the untangle library, or if you are more familiar with the Document Object Model (DOM),
you could use the minidom library, which in turn converts an XML data into DOM objects. For an experience similar to
working with YAML or JSON files, you could use xmltodict, which converts an XML document into a Python dictionary.
Another heavily used option is the ElementTree library, which represents XML data in a hierarchical format represented in
a tree structure. It is often described as a hybrid between a list and a dictionary inside Python.

Because of its convenience, you decide to go with the ElementTree Python library for parsing the XML data. As always, you
have to first include the library inside your script to use it. ElementTree comes with Python natively, so
one import statement is sufficient to start using all the capabilities from it. Because the library has a longer name, you can
import the entire library under an alternate name, ET, as shown in the code. That way, you can use the simplified name for
further reference to it.

The first action you need to take, as with any other data format, is to open a file. Here, you are declaring two variables—
first, to parse the read data into a tree structure by creating an ElementTree object, and second, to get the root element of
that tree. Once you have access to the root element, you can traverse through the entire tree. You can imagine a tree
structure as objects forming a connected graph.
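A minimal sketch of this step, assuming the XML data is stored in a file named user.xml, might look like this:

import xml.etree.ElementTree as ET

# Parse the file into an ElementTree object and get the root element (<user>)
tree = ET.parse('user.xml')
root = tree.getroot()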

If you want to find all elements by a specific tag name, you can use a findall() method on your ElementTree. When you are
searching for a specific tag name, you will use a find() method to retrieve it. And lastly, if you want to access the tag value,
you can make use of a text attribute. The latter also is used in situations where you want to change the value of a
particular tag. In the code in the figure, that is changing the city of the user location.

To save all the changes you made to your ElementTree structure to a permanent location, use the write() method.
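Continuing the sketch above, and assuming each role is stored in a <role> element nested under <roles>, the searching, changing, and saving steps might look like this:

# Find all role elements and print their values
for role in root.findall('roles/role'):
    print(role.text)

# Locate the city element and change its value
root.find('location/city').text = 'Dallas'

# Write the modified tree to a new file (the filename is illustrative)
tree.write('user_new.xml')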

Parse API Data Formats with Python


Different types of data formats require different approaches to obtain a specific value when being parsed. In Python, there are many modules for parsing a wide variety of data formats. You will focus on parsing YAML, JSON, and XML files with the most commonly used modules. But first, you will prepare the development environment for developing in Python and get to know Visual Studio Code.

Note:

Click the Initialize button to start the lab. A progress bar will show you when the lab is initialized and active. Please be
patient with the initialization process; several components are loading and getting ready.

After the initialization is complete, start the lab within 10 minutes, or it will be automatically terminated. After you have
entered the active lab, do not navigate away from the lab page until you have completed that lab exercise, or it will be
terminated.

Note:

In case you encounter issues with writing code during the lab, please find the final code snippets in the
/home/student/lab_solutions directory.

Prepare Development Environment

In this procedure, you will set up the environment for developing code in Python. First you will familiarize yourself with
Visual Studio Code, which is an integrated development environment (IDE) where you will edit and manage the code with
the included tools. You will review the files and folders in the working directory and try to run the Python code using
pipenv, a tool for managing Python packages and virtual environments.

Virtual environments allow you to develop and run different Python projects independently, without having to worry
about package version clashes (for example, an older project relying on an obsolete version of a package, while a different
project requires the latest version). With a pipenv tool, you can select the version of Python your project will use, so some
projects can still use the older version 2, while others use the newer version 3. You can also add or remove packages using
pip inside the virtual environment; simply run pipenv install or pipenv uninstall instead of pip install or pip uninstall. The
configuration of the virtual environment is kept inside the Pipfile to allow easy reinstallation if required.
By default, the activity bar on the left side consists of Explorer, Search, Source Control, Debug, Extensions and GitLens.
Use Explorer to navigate the working directory.

Explorer consists of two sections. On the top is Open Editors, which lists the files that are currently open. The second part
has the name of your working directory and shows its structure.

In the bottom-right corner, an alert for the Linter pylint not being installed might show up. Click the Install button.
student@student-workstation:~/working_directory$ pipenv run python lab01.py

DevNet

<... output omitted ...>

Note:

The pipenv tool activates the virtual environment for you and runs the command inside. Because you have specified
Python version 3 in the Pipfile, the python command will run that version, even if the system default version differs.
{
    "python.pythonPath": "/home/student/.local/share/virtualenvs/student-32m93jd"
}
(working directory) student@student-workstation:~/working_directory$ pipenv install ruamel.yaml
Note

Another popular module for the manipulation of YAML files is PyYAML. The ruamel.yaml module is a derivative of PyYAML, is compatible with both Python versions 2 and 3, and supports YAML 1.2, released in 2009.
<... output omitted ...>

User object:

{'score': 18.3, 'id': 3242, 'last_name': 'Smith', 'first_name': 'Ray', 'birth_date': datetime.date(1979, 8, 15), 'address':
[{'postal_code': 44663, 'street': '94873 Ledner Rue', 'primary': 1, 'state': 'OH', 'city': 'Royal Oak'}, {'postal_code': 17319,
'street': '832 William Ave', 'primary': 0, 'state': 'PA', 'city': 'Elnaville'}]}

<... output omitted ...>


<... output omitted ...>

Print user_json:
{"birth_date": "1979-08-15", "first_name": "Ray", "id": 3242, "score": 18.3, "last_name": "Smith", "address": [{"primary":
1, "postal_code": 44663, "state": "OH", "street": "94873 Ledner Rue", "city": "Royal Oak"}, {"primary": 0, "postal_code":
17319, "state": "PA", "street": "832 William Ave", "city": "Elnaville"}]}

<... output omitted ...>

Answer:

# Create JSON structure with indents and sorted keys

print('JSON with indents and sorted keys')

user_json = json.dumps(user, default=serializeUser, indent=4, sort_keys=True)

print(user_json)

Answer:
# Define namespaces

namespaces = {'a':'https://www.example.com/network', 'b':'https://www.example.com/furniture'}


Collaborative Software Development 

Version control systems are tools that help manage changes of files over time. Most importantly, version control systems
enable efficient collaboration for those people contributing to a project. Imagine you and your team are working on a
development project. Without version control systems, you are probably working with shared folders containing the whole
project and possibly replicating it a few times in another location as a backup. In situations like this, multiple people on
your team may work on the same file at the same time, potentially causing many issues. These problems are mitigated by
using a version control system.

Generally, software development is a collaborative process:

 Using and contributing to software libraries from other developers

 Tracking bugs and feature requests from users

 Assisting operations and testing teams

 Multiple developers working as a team

You can use tools such as issue trackers and version control systems to help you organize development work and
collaborate effectively with others.

Version control software keeps track of every modification of the code. If a mistake is made, developers can look back and
compare earlier versions of code to help fix the mistake, minimizing disruption to all team members. Version control
protects source code from any kind of errors that may have serious consequences if not handled properly.

So, when should you use version control? In reality, version control should be used for any engineering-related project; it does not have to be specifically for code management. Regardless of the file types being used, version control proves valuable for team-based projects that require error tracking, code backup and recovery, code history, change logging, and an easier way to experiment with the code.

Version control systems can be useful not only for developers but for networking teams as well. For example, it is useful to
have a history of all the configurations that are active on networking devices. Using a system such as Git allows you to see
the configurations, how and when they change, and of course, who or which system made the change. Also, as more
automation tooling is used—for example, Ansible—you can use Git to version control playbooks, variables, and
configuration files. As you collect information from switches such as operational data, you can store that data in text files.
You can easily track and see the changes in operational data, such as changing neighbor adjacencies, routes, and so on.

Version Control with Git 

Git, created by Linus Torvalds in 2005, is an example of a version control system. Torvalds also is the creator of the Linux
operating system, so the history of Git is closely connected to Linux as well. In fact, when Linux was first being developed,
Torvalds and his team struggled with managing large codebases maintained by many engineers.
The system is distributed, so no central server is required—unlike many older systems. While a central repository can still
be used, every team member has their own copy of the project files. This copy is a complete copy of the full project, not
just the files being worked on. Changes made by one developer may be incompatible with changes made by some other
developer. Git helps track every individual change by each contributor to prevent work from conflicting.

Moreover, you have full control of your local repository, so you may decide to commit and push only some parts of your
work and leave some others on your local system. This may be the case for files containing confidential data,
credentials, and so on.

Git architecture is composed of several different components:

 Remote repository: A remote repository is where the files of the project reside, and it is also from where all other
local copies are pulled. It can be stored on an internal private server or hosted on a public repository such as
GitHub, GitLab, or BitBucket.

 Local repository: A local repository is where snapshots, or commits, are stored on the local machine of each
individual.

 Staging area: The staging area is where all the changes you actually want to perform are placed. You, as a project
member, decide which files Git should track.

 Working directory: A working directory is a directory that is controlled by Git. Git will track differences between
your working directory and local repository, and between your local repository and the remote repository.

While GitHub is the leading platform for remote Git repositories, you should understand that others do exist. GitLab is a
GitHub competitor that also offers an open source, on-premises version of their software. Another service is BitBucket.
BitBucket supports both Git and Mercurial version control systems. Atlassian, a popular cloud service that offers many
developer-friendly products, acquired BitBucket in 2010. In 2015, Atlassian changed the name of their enterprise Git
platform from Stash to BitBucket Server, which competes with GitHub Enterprise.

Git Commands

One of the most important elements of Git is a repository. A repository is a directory that is initialized with Git.

A repository can contain anything such as code, images, and any other types of files. This understanding is the foundation
of getting started with Git locally on your machine.

When you create a new project, the git init command must be used locally on your machine in order to initialize a project
to work with Git. You must initialize a working directory with the git init command for Git to start tracking files. Make sure
to initialize a working directory in the top directory that contains all the necessary files. Otherwise, Git will only track the
subset of the files in the directory where you ran the command.
Once you initialize a directory using the git init command, Git must be configured with a username and email address. This
information is used in the Git history and during commits, making it possible to see who has made changes. Configurations
for Git can be per project or for all projects on the system. When executing the git init command, Git also creates a
subdirectory called .git that contains all the local snapshots (commits) and other metadata about the project.

To configure a Git username and email address, use the git config command. To see a list of all configurable options for
a git config command, you can use the Linux man git config  command.
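For example (the name and email address shown are placeholders):

git config --global user.name "John Doe"
git config --global user.email "john@example.com"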

You should also often use the git status command. This command allows you to see the status of your project. It shows
you the files that need to be staged, the files that are staged, which branch you are on, and if a commit is required. Also, it
will show the files that are not being tracked by Git.

Once the project is created and initialized, you will use the git add command to add files into the Git project. The git
add command performs two primary functions:

 Starts tracking files

 Adds files to the staging area

For example, if you want to add two new switch configuration files to a project, add them as follows:
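git add switch1.cfg switch2.cfg

The filenames here are only examples; use the names of the files you want Git to start tracking.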

If you want to remove a file from the Git project, use the git rm command.

Once you add files to the staging area, you can verify the status with the git status command.
Use the git commit command to commit the staged changes. What the git commit command is really doing is creating a
point-in-time local snapshot of the project. All incremental changes are stored in the .git directory that was automatically
created when the git init command was executed.

When you use the git commit command, you are required to include a commit message (using the -m flag). When you
commit your changes, Git creates a commit object representing the complete state of the project, including all files in the
project. At this point, after the git commit command is used, you have a local snapshot.
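For example, committing the staged switch configurations with an illustrative message:

git commit -m "Add switch configuration files"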

After you commit changes, you can use the git status command again to verify the changes.

Once changes are stored in the local repository, you need to specify the centralized remote repository that will be used to
store your changes and changes of other participants of the project. Remote repositories in general are repositories that
are hosted in your private network or on the Internet. You can specify multiple remote repositories for a specific project.

You can add a remote repository using the git remote add command. To check which remote repositories are configured for
a specific project, you can use the git remote command.
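For example, assuming the remote repository is hosted on GitHub and given the conventional name origin:

git remote add origin https://github.com/<username>/<project>.git
git remote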

Once the remote repository is specified, you can use the git push command to send your snapshots (commits) to a remote
repository. Similarly, you can use the git pull command to get commits of other participants on the project from a remote
repository.

The git pull command is similar to the git fetch command; however, understanding the difference between these two
commands is essential. While the git fetch command fetches changes from the remote repository and stores them to the
local repository, the git pull command also merges changes to your current branch.
Suppose that you are in the middle of development, and you have not staged any changes yet because your code is not
ready to be committed. You are assigned to a more urgent request, so you have to focus on other files, but you do not
want to lose changes that you have done so far. You can use the git stash command to temporarily save your current
changes. Once you finish with other work, you can then use the git stash pop command to retrieve your previously saved
changes.

Now assume a similar scenario, but instead of using the git stash command to save your changes to temporary storage,
you execute the git commit command and save your changes to your local repository. If you want to revert this commit,
you can use the git reset HEAD~1 command. This command will remove the commit from your local repository and move the
changes done in that commit to your working directory.

It is also possible that you do not want to create a new project but rather copy a pre-existing project. For example, you
want to test an open source project or just some code you found on GitHub. You can run this test by using this command:
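git clone https://github.com/<username>/<project>.git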

The project has been cloned into a new directory. The project is a clone, so there is no need to use the git init command
because the project was already initialized for Git.

Git Workflow

A Git workflow usually consists of the following actions:

 Directory initialization for Git

 Adding a file
 Committing a file to a local repository

 Pushing a file to a remote repository

 Updating a local repository by pulling changes from a remote repository

One of the Git functionalities is also reverting changes that you already committed. With the git revert command, you can
revert any commit, not just the last one.

Assuming that you already created two commits, in the first commit, you configured an interface Loopback0 on switch1,
and in the second commit, you configured interface Loopback0 on switch2. After deploying configuration to the switches,
you decide to revert the configuration change on switch1.

Use the git log --oneline command to view commit hashes. By observing the command output, you realize that you want
to revert changes of the commit with commit hash e871a41.

Use the git revert e871a41 command to revert changes on switch1. By executing the git revert command, Git will create
a new commit and prepend the word "Revert" to the original commit message. Use the git log command again to verify the
commits.

Git GUI Workflow

Popular code editors have the ability to interact with Git using the editor GUI itself. Some editors already have built-in
support for Git; in other editors, a Git extension must be installed manually. Git extensions in code editors usually offer
limited functionality compared with the Git command-line tool.
Here are some code editors that support Git integration:

 Visual Studio Code

 PyCharm

 Atom

 Notepad++
Branching with Git 

Take a look at a real-world scenario. You decide to work on a new feature for your automation project by making some
changes to existing code and creating some new configuration files. After an hour of programming, you receive an email
that a critical issue exists in production and you need to fix it. You should provide a fix to the current production code as
soon as possible, but you have already made some changes to the code and do not want to lose them. What should you
do?

This is a typical real-world scenario, and the solution to it is branching and merging.

The "*" sign is an indication of the current branch.

Git allows you to create different instances of your repository called branches. By default, there is only one branch in your repository, called the master branch.

Considering the example here, you have a master  branch in your automation project repository that corresponds to the
production code. The master branch contains production code, so this branch should contain only the code that is stable.

The following workflow should be used in Git to be able to apply fixes to your code and develop new features:

 Never develop new features or apply bug fixes directly to the master branch.

 Always create a separate branch for a new feature or bug fix.

 Apply frequent merges of the master branch to your feature or bug fix branch to avoid merge conflicts.

 When the feature or bug fix is ready, merge it back to the master branch.

Typical real-world development workflow introduces an additional branch, usually called the develop branch. This branch
is used when developing new features. Developers usually do not create new feature branches from the master branch
but from the develop branch. Once features are developed, they are merged into the develop branch, and once a new
software release is being prepared, the develop branch is merged into the master branch. However, for urgent fixes,
developers usually create a bug fix branch directly from the master branch. Once the bug fix branch is merged back to the
master branch, it must also be merged into the develop branch.

Creating separate branches for new features and bug fixes provides an isolated environment for every change in your
codebase. Code changes in other branches do not affect the production code running in the master branch. Once new
features and bug fixes are tested, the corresponding branch can be merged into the master branch, and code can be
deployed to production.
The git branch command lets you manage branches. Running the command followed by a branch name (and no other
options) creates a new branch in your local repository; running it without any arguments lists the existing branches.

You can use the -D option to force the deletion of a branch from your local repository (the -d option deletes a branch only if its changes have already been merged).
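For example (the branch name and the abbreviated commit hash are illustrative, and the "*" sign marks the current branch):

$ git branch
* master
$ git branch new_feature
$ git branch
* master
  new_feature
$ git branch -D new_feature
Deleted branch new_feature (was 1a2b3c4).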

The git checkout command lets you switch between branches. By default, you have only the master branch available.
Once you create a new branch with the git branch command, you can use the git checkout command to switch to the
newly created branch.

If you need to fix a bug, whether in a configuration file or in code, you can create a dedicated branch. You can create this new
branch and switch to it automatically as follows:
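Using the branch name from this example, the command and its output look like this:

$ git checkout -b fix_aaa_bug
Switched to a new branch 'fix_aaa_bug'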

As its starting point, the branch has all files from the master branch. Once you update the files with the fix, commit them
and push them back using a command such as git push origin fix_aaa_bug. This command will push changes to the
fix_aaa_bug branch on the remote repository.

At this moment, though, you have not made any file changes, so there is no difference between contents of the master
and fix_aaa_bug branches.

Using the ls command, you can check which files and folders currently exist in this branch.

Use your favorite text editor to edit the aaa.cfg file to fix a bug. Once you fix the bug, use the git status command to verify
which files were changed.
Use the git add command to move your changes to the staging area and the git commit command to move changes to
your local repository.

Finally, use the git push origin fix_aaa_bug command to push changes from your local repository to the remote
repository.
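Put together, this part of the workflow might look like the following sketch (the commit message and the abbreviated git status output are illustrative):

$ git status
On branch fix_aaa_bug
Changes not staged for commit:
        modified:   aaa.cfg
$ git add aaa.cfg
$ git commit -m "Fix AAA configuration"
$ git push origin fix_aaa_bug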

Your changes are now available to other developers on the remote repository in the fix_aaa_bug branch. Once you verify
that your changes solved the issue, you can merge your changes back to the master branch.

Use the git merge command to merge changes from fix_aaa_bug to the master branch and the command git
push origin master to push the code to the remote repository.
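A minimal sketch of this merge, assuming you are currently on the fix_aaa_bug branch, could be:

$ git checkout master
Switched to branch 'master'
$ git merge fix_aaa_bug
$ git push origin master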

Git Branches

Git branch workflow:

This figure shows the commands that were previously reviewed, combined into a single workflow.

Handling Merge Conflicts

Merge conflicts are common when using Git. They can arise in these scenarios:

 Two developers changed the same line or lines in the same file.

 One developer deleted the file, while the other developer changed it.

 One developer moved the file to another directory, while the other developer changed it.
 Both developers moved the file to different directories.

Most conflicts arise from the first option. Git has a very good merge conflict resolution algorithm that can automatically
solve most of the merge conflicts, but in the aforementioned examples, Git cannot know which change is correct. In these
cases, you have to solve merge conflicts manually.

Assume that you and your coworker were both assigned to fix two separate authentication, authorization, and accounting
(AAA) bugs. Both of you created your own branch, made the changes to the aaa.cfg file, committed them, and pushed
them to the remote repository. By the time you decide to merge your changes to the master branch, your coworker has already
merged their changes. Because you both changed the same line in the same file, you will get a merge conflict.

Open the aaa.cfg file with your favorite text editor to examine the conflict.

By observing the conflicting file, you can see that Git has already marked the conflicting lines. The "HEAD" section contains the
content of the file as it already exists in the branch you want to merge into (the master branch). Similarly, the
"fix_aaa_bug" section contains the changes that you made in the fix_aaa_bug branch and want to merge into the
master branch. Use your favorite text editor to merge the changes manually. While merging, also remove the HEAD and
fix_aaa_bug section marks as well as the ======= mark. The merged file should contain only the relevant AAA configuration.

The following output shows an actual merge conflict that you need to solve. As you can observe from the output, your
coworker already configured the TACACS+ server using an IP address. You used a hostname to configure the TACACS+
server.
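The conflicted section of aaa.cfg might look like the following sketch (the TACACS+ server address and hostname are placeholders):

<<<<<<< HEAD
tacacs-server host 10.1.1.100
=======
tacacs-server host tacacs1.example.com
>>>>>>> fix_aaa_bug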

Decide which is the correct configuration, and modify the file accordingly.


Once you have made the changes to the file, use the git add command to move changes to the staging area, the git
commit command to move changes to your local repository, and the git push command to move changes to the remote
repository.


Note:

Some organizations may disallow changes directly on the master branch, as was done here. If that is the case, you must
first merge the master branch into your branch (resolving the conflicts), and then merge your branch back into the master branch.

git diff Command

You can use the git diff command to view the differences in the files in your working directory; that is, to compare them with the same files
in your staging area or in your local repository (the last committed snapshot). You can also use the git diff command to show
the differences between two commits, branches, or tags.

Assume that you started working on an AAA bug, and you already made some changes to some files, but you have not
staged or committed anything yet. You want to know which changes you have made to the files so far. By observing the
following git diff command output, you can see the changes between your working directory and the staging area. A minus
("-") sign at the beginning of a line shows the previous state of that line, and a plus ("+") sign at the beginning of a
line shows your current, not yet staged change.
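Such output might look like the following sketch (the file content and index hashes are illustrative):

$ git diff
diff --git a/aaa.cfg b/aaa.cfg
index 3b18e1d..5f8a2c1 100644
--- a/aaa.cfg
+++ b/aaa.cfg
@@ -1,2 +1,2 @@
 aaa new-model
-tacacs-server host 10.1.1.100
+tacacs-server host tacacs1.example.com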

By applying the --cached option to the git diff command, you can see the changes between your staging area and local
repository.

By applying the HEAD option to the git diff command, you can see the changes between your working directory and local
repository.
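In summary, the three forms compare different pairs of areas:

$ git diff            # working directory vs. staging area
$ git diff --cached   # staging area vs. local repository (HEAD)
$ git diff HEAD       # working directory vs. local repository (HEAD)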

By applying additional options to the git diff command, you can also check the changes between two commits.

You can obtain Git commit hashes by executing the git log command.


$ git diff 739e0c8eb0d33bc5c4331d1f35012e6a4e68e7dc..b310ba57019c7e271fd5006af72951d1125ddccd

Use Git for Version Control 

Here, you will learn how to get started with Git, one of the main tools used when developing code. You will learn how
to clone a repository, manage the local repository, and push the changes back to the remote. To understand how GUI-
based Git tools, such as the Git extension in Visual Studio Code, work under the hood, you will manage the repository using
command-line commands.

Note

Click the Initialize button to start the lab. A progress bar will show you when the lab is initialized and active. Please be
patient with the initialization process; several components are loading and getting ready.

After the initialization is complete, start the lab within 10 minutes, or it will be automatically terminated. After you have
entered the active lab, do not navigate away from the lab page until you have completed that lab exercise, or it will be
terminated.

Note

In case you encounter issues with writing code during the lab, please find the final code snippets in the
/home/student/lab_solutions directory.

Clone Git Repository

In this procedure, you will access the GitLab site available in the lab environment, log in to the site, and obtain the link for
cloning an existing repository to your local working environment.

student@student-workstation:~/working_directory$ git config --global user.name "student"

student@student-workstation:~/working_directory$ git config --global user.email student@dev.local

student@student-workstation:~/working_directory$

student@student-workstation:~/working_directory$ git clone http://dev.gitlab.local/student/devnet-lab02.git .

Cloning into '.'...

Username for 'http://dev.gitlab.local': student

Password for 'http://student@dev.gitlab.local': 1234QWer

remote: Enumerating objects: 3, done.

remote: Counting objects: 100% (3/3), done.

remote: Total 3 (delta 0), reused 0 (delta 0)

Unpacking objects: 100% (3/3), done.


Note:

The dot at the end of the git clone command defines the path to where the repository is cloned. If you omit the dot, a new
directory is created that contains the repository.
Note:

For simplicity, the master branch is not a protected branch in this repository. When developing code in a team,
the master branch might be a protected branch.
student@student-workstation:~/working_directory$ git checkout -b staging

Switched to a new branch 'staging'


student@student-workstation:~/working_directory$ git add lab02.py

student@student-workstation:~/working_directory$ git commit -m "Add print statement"

[staging 8a5f68a] Add print statement

1 file changed, 3 insertions(+), 1 deletion(-)


Section 1: Summary Challenge
Section 2: Describing Software Development Process
Introduction

Because software has become integrated into daily life, software engineers are developing methods that enable efficient
processes, a fully tested product, and a means for continual improvement. Throughout the years, practices such as code
review have shown so many benefits that they have become industry standards and are often a required part of the
development process. While the methods differ in approach and popularity, they all enable you to bring better quality of
software and services to users.

Software Development Methodologies 

In the past, developers embraced ideas such as Waterfall, but more recently, because of the need for faster and more
frequent application delivery, developers have been moving toward a new paradigm. This paradigm includes using the Agile
methodology and Lean processes.

Marc Andreessen, a famous venture capitalist, said, "Software is eating the world." The following figure illustrates this
observation.

Software is everywhere. Today, software touches most aspects of daily life in some way. Do you want to stay in contact
with your friends? You can use Facebook. Do you want to find your favorite recipe? You can Google it. Do you need a taxi?
Uber is an option. In fact, most of these types of companies did not even exist 10 years ago.

The modern world is dynamic and dominated by software and technology. Startup companies are founded every day with
the goal to compete and win against more traditional competitors. John T. Chambers, a former executive chairman and
CEO of Cisco, said that many of the traditional and dominant companies that have been around for years are now at risk. It
is likely that many of these same companies will not be around in 10 years.

Software is transforming industries of all types, including transportation, media, retail, and hospitality.
The company Uber is transforming transportation using software by building a connection between those who deliver the
service and their customers.

Facebook content is user content. Facebook users are the ones who post and share information that spans the world, and
yet the company is one of the most popular media providers in the world.

Alibaba and Amazon are transforming the retail industry, something that has always been naturally connected to the brick-
and-mortar retail establishments. Today, retail stores are going virtual. Virtual stores are quickly becoming the primary
means for customers to purchase goods from the comfort of their homes.

Airbnb is doing something like Uber but in the hospitality industry, building direct connections between the people who
deliver accommodation services and their customers who consume them.

These illustrations are only a few examples of how applications are transforming the lives of people. However, one can
surely find other examples of technology that have had a significant impact on everyone.

The major takeaway from these examples is that each of these companies is a technology and software company.

Most engineers can open a text editor, write some code, and eventually compile and run a script or program of some sort.
This kind of approach, if used by a team of developers, has no structure or process. Following this type of approach is
error-prone, and the software that is created would likely not meet user expectations. A more structured approach is
needed.

The Software Development Life Cycle (SDLC) process is used by software creators to design, develop, and test high-quality
software products and applications. In addition to creating high-quality software, the SDLC aims to meet and exceed
customer expectations on budget and on time.

There are several methodologies that exist to help implement SDLC. These include Waterfall, Agile, prototyping, rapid app
development, and extreme programming:

 Waterfall: A development methodology where a linear process is visualized as moving down through phases.

 Agile: One of the newer and most popular methodologies that are based on frequent and small incremental
releases.

 Prototyping: Designed with the idea to build an initial version of a product quickly in order to better understand
requirements and scope. By using this prototype, the client can get an actual feel of the system in a relatively short
time, even though this effort may complicate larger projects.

 Rapid app development: Puts less emphasis on planning and more emphasis on process. It is a type of incremental
model in which all components are developed in parallel.

 Extreme programming: An extension of Agile, with unit testing, code reviews, simplicity, as well as customer
communication and feedback taken to the "extreme."

Waterfall

The Waterfall methodology is based on a linear and sequential process.


The Waterfall process has been around since the 1950s and is rooted in the manufacturing and construction industries.
Because no formal software development methodologies existed at the time, this model was simply adapted for software
development.

The Waterfall model provides a very structured approach, which works for some industries but is proving to be a less
desirable choice for software development. The Waterfall model assumes that once you move to a certain phase of the life
cycle, you can go only "downhill" to the next phase. There is a risk of getting stuck if something unexpected arises.
The Waterfall method assumes that all requirements can be gathered up front during the requirements phase. Once this
stage is complete, the process runs "downhill."

During the design phase, all the information obtained in the requirements and analysis stage is used to build a new,
high-level design. When this phase is complete, the systems analyst begins transforming the design into specifications based
on specific hardware and software technologies. This effort leads to the implementation (or coding) phase.

Following the development of software, the testing (and verification) phase is used to ensure that the project is meeting
customer expectations.

During the maintenance phase, the customer is using the product, and changes are made to the system based on
customer feedback.

Waterfall is a good choice when all requirements are very well known before a project begins; therefore, it works well for
small, well-defined projects.

Waterfall is a linear process, so each phase must be completed before moving to the next phase. At the end of each phase,
a project review is done to determine if the work done meets all expectations. If not, the current project is discarded and
the process restarts all over, because remaining phases depend on the current phase. It is important to note that testing
starts only after the whole work is complete.

Here are some advantages and disadvantages of Waterfall.

Advantages:

 Design errors are highlighted before any code is written, saving time during the implementation phase.

 Good documentation is mandatory for this kind of methodology. This effort is useful for engineers in the later
stages of the process.

 Because the process is rigid and structured, it is easy to measure progress and set milestones.

Disadvantages:

 In the very early stages of a project, it can be difficult to gather all the possible requirements. Clients can only fully
appreciate what is needed when the application is delivered.

 It becomes very difficult (and expensive) to re-engineer the application, making this approach very inflexible.

 The product will only be shipped at the very end of the chain, without any middle phases to test. Therefore, it can
be a very lengthy process.
Lean

You must understand what the Lean management philosophy is before focusing on Agile methodologies.

Lean takes its origin from the automotive industry in Japan, and its goal is to represent a new way to build products. The
Lean philosophy seeks the most efficient way possible to build a product by eliminating everything that does not add value. If you do not need it,
get rid of it.

Keep in mind that suboptimal allocation of resources is a waste, and that waste reduction increases profitability.
Therefore, implementing a Lean process is meant to minimize constraints (resources) while at the same time producing
consistent flow and profitability.

According to Lean pioneers, an enterprise must focus on three major points to fully embrace the Lean philosophy:

 Purpose: Which customer problems will the project solve? Ask, why does it solve the problem, and keep going
with more Why questions to get more answers. Ask why continuously to fully understand the purpose. These
types of questions are often referred to as the Five Whys.

 Process: How will the organization assess each major value stream to make sure that each step is valuable,
capable, available, adequate, and flexible? How will these organizations ensure that all the steps are linked by
flow, pull, and leveling? There must be a process, and metrics in place to determine this process.

 People: The root of it all is people. The right people need to take responsibility and produce outcomes. The people
need to continuously evaluate the value streams in terms of business process and Lean process. How can everyone
who plays a part in the value stream be actively engaged in operating it correctly and always improving it?

Agile

As you have seen so far, Waterfall implements a linear process. However, for larger development projects, it becomes
increasingly difficult to use a linear development methodology. This difficulty stems from the fact that feedback is needed
each step along the way.

The Waterfall process limits the possibility to freely move between project phases, such as analysis, design, code, and test.
You are forced to complete a phase before going forward, with no possibility to go back, often within a very strict timeline.

When using the Waterfall methodology, what would happen if there was an issue in the middle of a phase? You have
completed 50 percent of the work, for example, but that effort is completely unusable because you cannot move on until
the phase is completed.

If anything changes during the design stage or any subsequent stage, you must go back to the analysis phase to begin
again. This work effort is not only inflexible but also expensive and time-consuming.

Due to the nature of these methodologies, another issue is that the test phase is often skipped because of time constraints. The
result is that a lower-quality product is delivered to the customer. Imagine the software that is produced if testing is not
done properly. Because of these concerns, the Agile methodology has proliferated.
Lean is merely a concept. Without a way to implement it, it is of no value to the systems that need optimizing. Agile
addresses the need for optimization.

Agile is a means of implementing the Lean philosophy in the software development industry. It is primarily based on the
concept of short sprints, seeking to do as much as possible in a relatively short time, and without losing a focus on value.
Agile software development includes customers in the software development life cycle by delivering software in very early
stages, to gain valuable feedback from actual consumers of the software. This procedure is a popular way to test software
and learn about issues that can be addressed in future releases.

In contrast to the time-consuming and inflexible approach of the Waterfall method, the Agile method provides continuous
and incremental value to the software development process.

In the fast-paced world of software development, user requirements change frequently. As the market environment
becomes more competitive, software development companies are applying Agile methodologies to meet the requirements
of their customers as quickly and effectively as possible.

Scrum is an Agile project management methodology, though it is helpful to consider it more of a framework for managing
processes. Scrum is designed with the idea that requirements are not always fully understood and may not be known in the
early stages of the process. Quick development of small features is what really matters. Therefore, the Scrum framework
places value on iterative and incremental software development.

Scrum is an Agile project management methodology:

The methodology lists all the requirements on a Product Backlog. These requirements are built around user stories, or
small features of the final application such as create an account, perform login, and so on. This large backlog is split into
smaller, manageable pieces and put into Sprint Backlogs that must be completed in a short time. It is common to have
sprints that span two weeks.
Scrum also recommends a daily Scrum in which developers address what they did the day before and what they are going
to do that day. They discuss the challenges that they are facing and evaluate potential challenges in upcoming work. This
discussion ensures that teams are on the same page, and it provides for a more collaborative work environment.

At the end of a sprint, the team provides a shippable product increment that can be delivered to the customer.

Scrum practitioners believe that this emphasis on development over requirements leads to less wasted time and greater
productivity.

The Waterfall methodology is split into several sequential and linear phases, namely Analysis, Design, Code, and Test.

Agile has the same phases, but they are managed in a different way. In fact, thanks to the concept of sprints, all phases of
the life cycle are completed within each single sprint. Effectively, a developer (or team) can accomplish an entire process in
a two-week sprint. This process is a very flexible way to work, providing a means to add, fix, or make changes very quickly.
Also, even in the middle of a software development roadmap, you will have completely usable code, adding tremendous
value to your work and your customers.

Agile represents a more advanced way to work compared with older methodologies like Waterfall, but it still has its own
pros and cons as well. Of course, it still matters which type of software is getting developed. But Agile is better suited for
fast-paced environments and when all the requirements are not known upfront.

Advantages:

 The rapid and continuous delivery of software releases helps customers better understand what the final product
will look like. This capability also provides fully working code that improves week after week, leading to customer
satisfaction.

 Thanks to Scrum, interaction between people is emphasized.

 Late project changes are more welcome.

Disadvantages:

 This kind of approach lacks emphasis on good documentation, because the main goal is to release new code
quickly and frequently.

 Without a complete requirement-gathering phase at the beginning of the process, customer expectations may be
unclear, and there is a good chance for scope creep.

 Rapid development increases the chance for major design changes in the upcoming sprints, because not all the
details are known.
Test-Driven Development 

In today's agile world, software is developed iteratively. Building software step by step is far more productive than
attempting to build software in large chunks. Smaller parts of code are easier to test, and testing smaller parts of code can
cover more edge cases to detect and prevent bugs. This process ensures that the code satisfies the requirements. But
when do you create all these tests? Test-driven development (TDD) is a software-development methodology where you
write the test code before the actual production code.

TDD ensures that use cases are tested, and source code has automated tests by using the test-first approach.

Development is done in iterations, where you do the following:

 Write tests.

 Run these tests; they must fail (code may not even compile).

 Write source code.

 Run all tests; they must pass.

 Refactor the code where necessary.


At the beginning of software development, the team defines tasks for what you want to achieve in the current iteration. By
working on a specific task, you should have very accurate information about what you need to achieve:

 What kind of functionality do you need to develop?

 What are the input parameters for the functionality?

 What is the expected result of the functionality?

Having this information, you should be able to create the following tests:

 Tests for expected input parameters and expected output

 Tests for unexpected input parameters

Writing these tests first has immediate benefits for development:

 Gives you a clear goal—make the tests pass

 Shows specification omissions and ambiguities before writing code, avoiding potentially costly rewrites

 Uncovers edge cases for you to address from the start

 Makes debugging easier and faster, because you can simply run the tests

 Passing each test is a small victory that drives you forward.

Tests that cover all possible input parameters, especially edge cases, are crucial in software development because most
bugs arise from scenarios that were not covered during the development phase. The tests that you create and run will fail
because no actual code is developed yet. If these newly written tests do not fail, revise them to verify that the tests that
you wrote do not have obvious bugs themselves, such as tests always passing.

The next step in the TDD approach is to develop the code that implements required functionality—the actual task. Here,
you will write code and run tests, perhaps many times, until all tests pass. You have already written tests, so it is simple to
run and debug the code that you are writing. Tests give you direct feedback about how much functionality you have
already implemented, telling you when you are done, and keeping you focused on the task at hand.

Do not write more code than is needed to achieve your objective: getting the tests to pass. Otherwise, parts of code may
be untested and will need to be maintained, even though they are not required.

When all tests pass, you can be confident that you have implemented the new functionality and did not break any other
parts of the system.

When code is developed, previously created tests should pass. But even with the passing tests, development is not
finished yet. At the beginning of task development, you usually do not know all the details, and you may have written code
that is not optimal. Now step back, look at the whole codebase, and clean it up by refactoring. Developers usually struggle
the most with this step, but it is very important.

Refactoring the code before moving to the next task has multiple advantages:

 Better code structure: You might find out that a certain part of the code will be used multiple times, so you move
it to a separate function.

 Better code readability: You might want to split or combine different parts of the code to be more readable for
other developers who are not familiar with the task.

 Better design: You might find that refactoring the code in some way will simplify the design or even improve the
performance of the software.

The TDD iteration is finished when code refactoring is done and all tests pass. Then, you can proceed to the next iteration.
This process can be applied on many levels, from fixing one-line bugs to writing functions, or even whole modules. You can
apply it right at the start of development, before design, to help you detect design flaws. This way, you avoid costly
rewrites because it is easier to make changes before the implementation gets too advanced.

TDD Example 

Assume that you work for a service provider, and must develop a script that extracts customer data from a router
configuration. The script should parse customer data, such as customer name, customer VLAN, and customer IP address,
and return the data in the JavaScript Object Notation (JSON) format.

To simplify the task, the router configuration should already be in the config.txt file, located in the same folder as the
script.

As a testing framework, you should use unittest.

By observing the configuration in the config.txt file, you verify that the required customer data is available in the
configuration.
Development Procedure

Before actual programming, you decide to build a script in multiple iterations. Each iteration will already fulfill part of the
requirements, but the last iteration will group everything.

Iterations:

 Iteration 1: Parse Customer Names

 Iteration 2: Parse Customer IP Addresses

 Iteration 3: Parse Customer VLANs

 Iteration 4: Group All Together

Steps used for a specific iteration:

1. Write tests

2. Run tests (should fail)

3. Write code

4. Run tests (should pass)

5. Refactor code

Parse Customer Names

Step 1: Write tests.

During the first iteration, you decide to parse customer names from the configuration.

To follow the rules of the TDD, first create tests for parsing customer names.

You decide that the class responsible for configuration parsing will be called ConfigurationParser. This class will contain all
methods required for parsing customer data. Therefore, you can already instantiate the class in the test. You can also
create this class, but without any functionality.
Next, you decide that parsed customer names should be returned in a Python list.

The method that parses customer names is called parseCustomerNames().

At the end of the test, assertions are being made to check the method output based on the input.
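A minimal sketch of such a test might look like the following. It assumes that the ConfigurationParser class will live in a configuration_parser.py module (a hypothetical file name) and uses placeholder customer names:

import unittest

# Hypothetical module name; the course does not specify where the class is stored.
from configuration_parser import ConfigurationParser


class TestConfigurationParser(unittest.TestCase):

    def test_parse_customer_names(self):
        # The class can already be instantiated, even without any functionality.
        parser = ConfigurationParser()

        # Placeholder for the customer names expected in config.txt.
        expected_names = ["CUSTOMER_A", "CUSTOMER_B"]

        # parseCustomerNames() should return the parsed names in a Python list.
        self.assertEqual(parser.parseCustomerNames(), expected_names)


if __name__ == "__main__":
    unittest.main()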

Step 2: Run tests (should fail).

No actual functionality exists, so running the test will fail.

Step 3: Write code.

Once tests are created, you should implement the actual functionality.

As specified in the requirements, the router configuration should already be present in the config.txt file.

You decide that the customer names will be parsed from the VRF configuration.

To parse a certain substring from a configuration line, use regular expressions.

The result of the re.findall method is a list, which is what you need to return.
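A sketch of this first implementation, assuming that customer names appear in configuration lines such as "vrf definition CUSTOMER_A", could be:

import re


class ConfigurationParser:

    def parseCustomerNames(self):
        # The router configuration is expected in config.txt, in the same
        # folder as the script.
        with open("config.txt") as config_file:
            deviceConfig = config_file.read()

        # re.findall() returns a list of all captured customer names, which is
        # exactly what the test expects.
        return re.findall(r"vrf definition (\S+)", deviceConfig)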

Step 4: Run tests (should pass).

If the functionality is implemented correctly, tests should pass now.


Step 5: Refactor code.

The last step of the iteration is to refactor the code if needed. Because you will implement additional methods to parse
customer VLANs and customer IP addresses, you decide to put the deviceConfig variable out of the method so that other
methods can use it, too.
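One way to refactor this is to read the configuration once in the constructor so that every parsing method can reuse it, as sketched below:

import re


class ConfigurationParser:

    def __init__(self):
        # Read the configuration once; all parsing methods share it.
        with open("config.txt") as config_file:
            self.deviceConfig = config_file.read()

    def parseCustomerNames(self):
        # Same parsing logic as before, now using the shared configuration.
        return re.findall(r"vrf definition (\S+)", self.deviceConfig)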

The TDD iteration is now complete, and you can go on to the next iteration.

Parse Customer VLANs

Repeat Steps 1 to 5.

In the second iteration, you decide to parse the customer VLAN. Similar to the previous iteration, first you create tests
where you define the following:

 The method that will be used to parse the customer VLAN: parseCustomerVLAN()

 The method parameter: customer_name

 The return type of the method (integer): expected_vlan = 100

At the end of the test, check that the actual parsed data are the same as the expected data.

Repeat Steps 1 to 5.

You decide that the customer VLAN will be parsed from the interface configuration.

To parse a certain substring from a configuration line, use regular expressions.


As expected in the test, you return the integer value of the VLAN.
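Continuing the ConfigurationParser class sketched in the previous iteration, the method might look like this; the assumption is that the customer subinterface contains a "description <customer name>" line followed, possibly after other lines, by an "encapsulation dot1Q <vlan>" line:

    def parseCustomerVLAN(self, customer_name):
        # Find the dot1Q VLAN configured on the subinterface that is described
        # with the given customer name (the configuration format is an assumption).
        match = re.search(
            r"description {}.*?encapsulation dot1Q (\d+)".format(
                re.escape(customer_name)
            ),
            self.deviceConfig,
            re.DOTALL,
        )
        # Return the VLAN as an integer, as the test expects.
        return int(match.group(1))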

Parse Customer IP Addresses

Repeat Steps 1 to 5.

In the third iteration, you decide to parse the customer IP addresses. Similar to the previous iterations, first create tests
where you define the following:

 The method that will be used to parse the customer IP address: parseCustomerIPAddress()

 The method parameter: vlan

 The return type of the method (string): expected_ip = "10.10.100.1"

At the end of the test, check that the actual parsed data are the same as the expected data.

Repeat Steps 1 to 5.

You decide that the customer IP addresses will be parsed from the interface summary configuration.

To parse a certain substring from a configuration line, you use regular expressions.

As expected in the test, you return the string value of the IP address.
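Again continuing the same class, a sketch of the method could be the following; it assumes that the customer subinterface number matches the VLAN ID and carries an "ip address <address> <mask>" line:

    def parseCustomerIPAddress(self, vlan):
        # Find the IP address configured on the subinterface whose number
        # matches the customer VLAN (the configuration format is an assumption).
        match = re.search(
            r"interface \S+\.{}\s.*?ip address (\S+)".format(vlan),
            self.deviceConfig,
            re.DOTALL,
        )
        # Return the address as a string, as the test expects.
        return match.group(1)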

Combine All Together

Repeat steps 1 to 5 (continued).


For the last iteration, you decide to combine all previously developed methods together. Similar to the previous iterations,
first create tests where you define the following:

 The method that will be used to parse customer data: parseCustomerData()

 The return type of the method (dictionary): expected_data = { "CUSTOMER_A": [100, "10.10.100.1"] }

At the end of the test, check that the actual parsed data are the same as the expected data.

Repeat steps 1 to 5 (continued).

The parseCustomerData() method combines all the previously developed methods and returns customer data inside a
Python dictionary, which can be very easily converted to JSON format later if needed.
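Continuing the same class one last time, a sketch of the combining method might look like this:

    def parseCustomerData(self):
        # Combine the previously developed methods into a single dictionary:
        # { customer_name: [vlan, ip_address] }
        customer_data = {}
        for customer_name in self.parseCustomerNames():
            vlan = self.parseCustomerVLAN(customer_name)
            ip_address = self.parseCustomerIPAddress(vlan)
            customer_data[customer_name] = [vlan, ip_address]
        return customer_data

The resulting dictionary can later be converted to JSON with json.dumps(), for example:

import json

parser = ConfigurationParser()
print(json.dumps(parser.parseCustomerData(), indent=2))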

Code Review

Code review is a phase in the software development process that helps identify bugs and poor code practices, as well as
improve design and overall readability of the code. Its main goal is to find bugs before code is deployed to production, or
even to a test server. Code review is a very important part of the development process, but it is often overlooked due to
the lack of time or resources.

In the code review phase of software development, at least one person checks the source code of a program.
Reasons for a code review:

 Identify bugs.

 Improve code quality.

 Get familiar with different parts of the project.

 Learn something new.

Note

There are many tools for formalizing a code review process, from mailing patches (as the Linux kernel repositories do) to
using web applications such as GitHub (pull requests), GitLab (merge requests), and Gerrit (changesets).

Identifying bugs is probably the most important reason to do code reviews. The last thing you want is your code to break in
production. By reviewing other people's code, you can improve it or learn something new. Either way, by reviewing the
code, you expand your knowledge of the project, becoming more competent to work on other parts of the project.

Who should perform the code review? Source code might be reviewed by other software developers, testers, or even
people who are experts in the area that the source code covers. In a typical environment, multiple people are assigned as
code reviewers. In addition to checking the syntax of the code, their tasks are to verify that the code works and sufficient
tests exist, and to improve the code if possible.

When should code reviews be performed? The code review phase should be a part of every software development project,
not just in large-scale projects. Reviewing code takes a little more time and effort before delivery, but the code will be
more stable and easier to maintain. For the code review to be as efficient as possible, smaller code changes are preferred.
With large code changes, bugs are harder to identify in a single code review. Large code changes will also take a great
amount of time to be reviewed. Each code reviewer should be able to properly review code in 60 to 90 minutes.

Multiple tools exist, including GitHub and GitLab, to help you during the code review phase. Developers can initiate the
code review by using the GitHub pull request feature or the GitLab merge request feature.

Here is a typical feature development or bug fixing workflow:

 Create a new branch.

 Commit code changes to the remote repository.

 Create a pull request.

 Perform automated tests.

 Review code.

 Accept code (merge).

 Perform additional automated tests.

The first task of any software developer is to create a new branch where code changes will be made. After the code
changes are made and enough tests exist, the code changes are pushed to the remote repository. Code changes can
consist of one or multiple commits. Commit messages should be descriptive so that other developers and reviewers easily
understand the code changes that were made in a particular commit during the code review.

Once a branch that contains a new feature or a bug fix is ready to be merged, a developer creates a new pull request
where the developer selects the source (feature or bug fix) branch and destination branch (among other parameters). In a
typical software development environment, pull request creation automatically triggers additional checks that are made
before the reviewers begin reviewing code:

 Front-end code tests

 Back-end code tests

 Database migration tests


 Code style test

 Building a Docker image test

 Other tests

Running code tests is the most common step after creating a new pull request. If some tests fail, the pull request cannot
be merged, so the developer needs to make sure that all tests pass. Multiple tools (such as Circle CI and Jenkins) can be
integrated with GitHub for automated test execution.

Another common check when creating a new pull request is database migrations. The pull request might contain database
schema changes that can also be verified by running them on multiple test databases.

After all the checks pass, the code can be reviewed. Code reviewers might request additional code changes or simply
approve them. After code reviewers approve the changes, they can be merged into the destination branch. Merging pull
requests can trigger additional checks or tasks on the destination branch.

Code Review Example

Assume that you need to create a script that parses customer information (customer names and their IP addresses) from a
router virtual routing and forwarding (VRF) and interface configuration. You already have a repository on GitHub, and your
coworker already put a router configuration to that repository. Your task is to parse relevant customer information from
that configuration.

You should create a Python script that parses the following configuration.

In accordance with software development best practices, you create a new branch called parse-ip-addresses in your Git
repository. After you finish script development, you push it to the newly created branch on the remote repository. At this
stage, the quality assurance team can execute different tests of the script. After testing is complete, the script is ready to
be deployed to production. But because code is deployed to production from the master  branch, you need to merge the
parse-ip-addresses branch to the master  branch.
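The first steps of that workflow might look like the following sketch (the script file name and the commit message are placeholders):

$ git checkout -b parse-ip-addresses
Switched to a new branch 'parse-ip-addresses'
$ git add parse_customers.py
$ git commit -m "Add script that parses customer IP addresses"
$ git push origin parse-ip-addresses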
There are different ways of merging the parse-ip-addresses branch to the master branch:

 Direct merge: Using the git merge command—not a recommended option

 Merge using pull request: Requires code review—the recommended option

When changes are pushed to a custom created branch on GitHub, GitHub automatically offers you the ability to create a
pull request.

When you navigate to your repository on GitHub, you will be offered an option to compare your changes with other
branches and create a pull request.

GitHub automatically tells you to create a pull request when a new branch is pushed to a remote repository with a
notification and a Compare & pull request button.

Click the Compare & pull request button, which will redirect you to the appropriate page for creating a pull request.

Open a pull request in GitHub.

On the "Open a pull request" page, you have a lot of options. Some are mandatory, and others are optional:

 Source and destination branch: In your case, you want to merge the parse-ip-addresses (source) branch to the
master (destination) branch.

 Comment: It is recommended to write a comment to notify reviewers about changes done in a pull request.

 Reviewers: One or more people should review the changes (software developers, the quality assurance team, domain
experts, and so on).
 Assignees: One or more people should review or make the changes and perform the merge into the destination
branch.

 Labels (optional): Any custom-made label to which this pull request belongs.

 Projects (optional): Any project to which this pull request belongs.

 Milestone (optional): Any milestone to which this pull request belongs.

On the same page, you will also find information about commits you made in the source branch; all the code changes will
be presented as well.

After you select all the desired fields, click the Create a pull request button. When a pull request is created, reviewers and
assignees are automatically notified via email. GitHub can also be integrated with other systems (for example, Slack), so
there are multiple ways to notify reviewers and assignees about new pull requests.

Once a pull request is created, reviewers and assignees must review the code and either approve the changes or request
additional changes.

Reviewers and assignees can start with the code review by clicking the Add your review button on a pull request. In a code
review, reviewers and assignees not only check the code visually, they can also check out the code locally and test it. Inline
comments are a very useful feature, allowing reviewers to attach a comment to a specific line of code.

When a certain reviewer or assignee finishes with the code review, there are multiple options to submit a review:

 Comment: Submit general feedback (without explicit approval or denial).

 Approve: Approve changes that are done; the pull request can be merged.

 Request changes: Deny the changes. Comments made by reviewers and assignees must be addressed before the pull
request can be approved and merged.

If any reviewers or assignees request changes on the pull request, you need to make the changes and then request the
code review again. After all comments are addressed, one of the assignees will approve the pull request and merge the
changes into the destination branch. After the changes are merged into the destination branch, you can delete the source
branch.

When the code review is finished, the pull request author is automatically notified about the changes that were done on
the pull request.

After a pull request is approved, code changes can be merged into the destination branch.
At this stage, you can also set additional checks, which must pass to be able to merge the pull request. Integration with a
continuous integration platform (for example, Circle CI) is often used as an extra check. Only when all the tests on that
server pass can a pull request be merged. The actual merge is then usually performed by the project maintainer or owner.

Merging a pull request can be done in multiple ways and can be enabled or disabled for each repository:

 Create a merge commit (default option): All commits from the source branch are added to the destination branch
in a merge commit.

 Squash and merge: All commits from the source branch are squashed in a single commit and then merged into the
destination branch.

 Rebase and merge: All commits from the source branch are added to the destination branch individually without a
merge commit.

The pull request is merged into the destination branch, and the source branch can then be deleted. Repository settings
also allow you to enable automatic deletion of the source branch once a pull request is approved and merged.

Section 2: Summary Challenge


Section 3: Designing Software

Introduction 

Most people think about software from the functionality perspective. What an application delivers to the end users is an
important question to ask, but you as a software developer will have an opportunity to think about the other side of the
spectrum—software design. Software design affects the behavior of a system and controls many factors in a life cycle of a
software system. Poor software design is a fundamental source of weak performance, creating difficulties in updating an
application and integrating it with existing products.

You will learn how to take advantage of techniques that programming languages such as Python offer to write software
with modularity and clarity in mind. You will also learn that many problems in software development have already been
identified and documented. The documented solutions to these problems are gathered in the form of patterns. These
patterns are divided into architectural patterns, which take care of the overall design of a system and define how
different components interact, and design patterns, which dive into separate components and ensure that optimal coding
techniques are used to solve a specific programming obstacle and to avoid tangled code.

Modular Software Design 

When you develop software that delivers various services to end users, it is important that the code is structured in a way
that it is readable, easy to maintain, and reliable. Well-maintained code is just as important as the product or service that
this code delivers. When you do not follow some good practices while growing the codebase for your services with new
features, you may come to a point where adding a new feature is a daunting task for any developer, including your future
self.

The figure highlights that you should strive to develop a software system that has a clear communication path between
different components. Such systems are easier to reason about and are more sustainable. There are various paths to
achieve more sustainable software that you, as a developer, can take when working on a project. The decision on how to
design software largely depends on the type of system that you are trying to develop and how this system will be
deployed.

Cloud-based applications that are meant to be flexible and scalable, supporting complexities that networks introduce, will
use a different approach to software design than, for example, an enterprise application that can be more predictable,
with a simpler deployment model. Different architecture and design patterns emerge from the needs to make efficient use
of the deployment options and services that applications are offering.

Cloud-based and other applications are often split into a suite of multiple smaller, independently running components or
services, all complementing each other to reach a common goal. Often, these applications communicate with lightweight
mechanisms such as a Representational State Transfer (REST) application programming interface (API).

This type of architecture is widely known as microservices. With microservices, each of the components can be managed
separately. This means that change cycles are not tightly coupled together, which enables developers to introduce changes
and deliver each component individually without the need to rebuild and redeploy the entire system. Microservices also
enable independent scaling of parts of the system that require more resources, instead of scaling the entire application.
The following figure shows an example of microservice architecture with multiple isolated services.

While the benefits of microservices are obvious, it should be clear that applications are not always developed this way.
There are cases where using microservices is not the preferred architecture, because you are not building an application
that would benefit from all those things mentioned previously. Sometimes, an application that runs as a single logical
executable unit is simpler to develop and will suffice for a particular use case. Such application development architectures
are referred to as monoliths.

Does this mean that a monolithic application is harder to maintain, with less readable codebase, and not so efficient
output? The answer is, it depends. With monolithic architecture, it becomes much easier for your code to become a
complicated sequence of lines that is hard to navigate through. Also, because of dependencies, it can be even harder to
change parts of it. But that would mean that developers, before microservices came along, had a great amount of trouble
with maintaining code, developing new features, and fixing the bugs that this unstructured code brought to surface when
the projects grew.

The following figure is an example of scaling a monolithic application. Because a monolithic application is typically
deployed as a single executable component, it also needs to be scaled as a single component. Even though this component
contains multiple smaller pieces that represent the entire logic of the application, you cannot scale just the critical parts as
you would in the microservices architecture.

Different techniques and good practices emerged to cope with such problems and to make the code more elegant and
efficient in a monolithic design. These techniques are applied in the code itself, at a much lower level than the architecture,
and do not depend on the type of architecture. The code designs discussed here can be used in both monolithic and
microservices architectures to maintain a clean, well-defined, and modular codebase.

Next, you will look at some of the tools and examples of these practices that you can use while developing software.

Functions

Functions are the first line in organizing and achieving a certain level of modularity in your program. With functions, you
can make order in your code by dividing it into blocks of reusable chunks that are used to perform a single, related task.
Many programming languages come with built-in functions that are always available to programmers. For example, getting
the binary representation of a number in Python language can be as simple as calling the bin() function, without having to
install an additional library.
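For example, in the Python interactive interpreter:

>>> bin(42)
'0b101010'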
Note

The bin() function was called from Python interactive command line in the example.

Many lines of code can be hidden behind a simple function call, as in the previous example. Functions can be built into the
programming language itself, or they can come as part of libraries, also referred to as modules, that you import into your
program. You as a developer can also define your own functions that group related code to perform a single, well-defined
task.

How to define functions varies between different programming languages. Each of them will have their own syntax with
custom keywords that denote where functions start and end, but in general, they share similarities. Observe the example
of a function in Python.
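The course example is not reproduced in this extract, but a minimal sketch of the is_ipv4_address() function referenced below might look like this:

def is_ipv4_address(address):
    """Return True if the given string looks like a valid IPv4 address."""
    octets = address.split(".")
    if len(octets) != 4:
        return False
    for octet in octets:
        # Every octet must be a number between 0 and 255.
        if not octet.isdigit() or not 0 <= int(octet) <= 255:
            return False
    return True


print(is_ipv4_address("192.168.1.1"))  # True
print(is_ipv4_address("10.0.0.256"))   # False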

Functions are defined with special keywords and can take parameters, or arguments, that are passed when invoking a
function. Arguments are used inside the execution block for the parts that need some input for further processing. They are
optional and can be omitted in the definition of a function. Inside functions, it is possible to invoke other functions that
complement the task that your function is trying to accomplish.

To stop the function execution, a return statement can be used. Return will exit the function and make sure that the
program execution continues from the function caller onward. The return statement can contain a value that is returned
to the caller and can be used there for further processing. You can see this in the example of
the is_ipv4_address() function, where the Boolean value was returned.

Variables are also common to programming languages. When using functions, you need to be aware of the scope in which a
defined variable is recognized. Observe an example of another function in Python.
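The original example is not shown in this extract; a minimal sketch that reproduces the behavior described below (the device-name format is an assumption) could be:

datacenter = "APJ"


def generate_device_name(device_id):
    # This assignment creates a local variable that shadows the module-level
    # datacenter variable; the value outside the function is not changed.
    datacenter = "RTP"
    return "{}-SW-{}".format(datacenter, device_id)


print(datacenter)               # APJ
print(generate_device_name(1))  # RTP-SW-1
print(datacenter)               # still APJ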

A variable datacenter was defined outside and inside the function generate_device_name(). Variables that are defined
inside functions have a local scope and are as such not visible from the outside.
In the example, you can see that the value of the variable datacenter is "APJ" initially. Even though the function call
changed the value of the variable to "RTP," it did not change it outside the function. These two variables share the same
name but are two different variables with different scope.

Now that you know how to define a function, you might ask yourself how your code can benefit in readability and
maintainability from using functions. With a larger codebase, it becomes more obvious, but even with smaller examples,
there are visible benefits.

You can avoid repetitive code by capturing code intent into functions and making calls to them when their actions are
needed. You can call the same functions multiple times with different parameters. However, it would not be very
reasonable to repeat the same blocks of code across the file to achieve the same result. This principle is also known as
DRY, or "Don't repeat yourself."

While writing functions will improve your code, you can still learn some good practices on how to write them to be even
more intuitive and easy on the eyes of a programmer.

Using functions enables you to document the intent of your code in a more concise way. A simple comment explaining
what a function returns, or why something is necessary in the code, makes it easier and faster to understand when, how,
and why to use the function. Keep in mind that comments should not justify poorly written code. If you must explain
yourself too much, you should think about writing your code in a different way. Naming your functions properly is
important as well. Names should reveal the intent of the function, why it exists, and what it
does.

Functions should be short and do one thing. While it is sometimes hard to determine whether a function does one or more
things, you should examine your function to see if you can extract another function out of it that has a task on its own, and
does not only restate the code but also changes the level of abstraction in your original function.

Defining functions is your first step toward modularity of your code.

Modules

You learned that functions can be used to group code into callable units. They are great on their own, but they make even
more sense when they are packaged into modules. With modules, the source code for a certain feature should be
separated in some way from the rest of the application code and then used together with the rest of the code in the run
time.

Modules are about encapsulating functionality and constraining how different parts of your application interact.

An application should be broken into modules, small enough that a developer can reason about module function and
responsibility, while making sure that there are no side effects if the implementation is changed. The following figure is an
example of importing two Python modules into a single module, which then references the functions from the two
separate modules.

Modules usually contain functions, classes, global variables, and different statements that can be used to initialize a
module. They should ideally be developed with no or few dependencies on other modules, but most of the time, they are
not completely independent. Modules will invoke functions from other modules, and design decisions in one module must
sometimes be known to other modules.

The interface or API toward the module is defined by its function definitions, their parameters, public variables, usage
constraints, and so on.

The idea is that the interface presents a simplified view of the implementation that is hidden behind it. Once these
parameters are defined, the application is created by putting the modules together with the rest of the application code.

More or less all modern, popular languages formally support the module concept. The syntax differs from language to language, of course, and is a matter of consulting the documentation, but they all share the idea of providing abstraction that makes a large program structure easier to understand.

The syntax for a basic module in Python is no different from a Python file with function definitions and other statements. In Python, the name of the module is the name of the file that contains the statements, without the .py suffix.

As an example, create a module and save it in the file toolbox.py:
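
The exact contents of the module are not dictated here; a minimal sketch along these lines would work (the function name and logic are illustrative):

    # toolbox.py
    import random

    def generate_device_name(prefix="sw"):
        """Return a pseudo-random device name such as sw-42."""
        return "{}-{}".format(prefix, random.randint(1, 99))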

This module can then be imported to your program code or other modules. For demonstration, use an interactive Python
shell to import and use this module, as the following figure showcases.
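
With the illustrative toolbox.py sketch above, such an interactive session might look like this (the generated names will vary because they are random):

    >>> import toolbox
    >>> toolbox.generate_device_name()
    'sw-17'
    >>> toolbox.generate_device_name(prefix="core")
    'core-8'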

In Python, modules are essentially scripts and can also be executed as such. Running toolbox.py directly will not produce any output, because there is no execution entry point that tells the Python interpreter what to do when the module runs on its own. But you can change that.

The difference between importing a module into other programs and running it directly is that in the latter case, a special system variable __name__ is set to the value "__main__".

To see this in action, modify the toolbox module and make it generate a random device name when being executed
directly.
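
Continuing the illustrative toolbox.py sketch, the execution block could be added like this:

    # toolbox.py
    import random

    def generate_device_name(prefix="sw"):
        """Return a pseudo-random device name such as sw-42."""
        return "{}-{}".format(prefix, random.randint(1, 99))

    if __name__ == "__main__":
        # Runs only when the file is executed directly, not on import.
        print("Generated device name:", generate_device_name())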

The result of calling this module directly will be the following:
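
With the sketch above, the output would look similar to this (the exact name varies because it is generated randomly):

    $ python toolbox.py
    Generated device name: sw-83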

When you import a module into your application code, the import itself should not make any changes or execute any actions automatically. The best practice is not to put any statements in the module that would be carried out on import. Any statement that should run only when the module is executed directly belongs inside the if __name__ == "__main__": block.

The major advantage of using modules in software development is that it allows one module to be developed with little
knowledge of the implementation in another module.

Modules can be rearranged and replaced without rebuilding the entire system, which makes them very valuable in the production of large codebases. Development time should be shortened if separate groups work on each module with little need for communication. If a module can be reused in other projects, it also makes sense to give it its own source code repository.

Modules bring flexibility because one module can be changed entirely without affecting others. Essentially, the design of the whole system is easier to understand because of the modular structure.

Classes and Methods

With functions and modules, you have learned some great techniques for improving the modularity of your code. An even higher level of code organization comes in the form of classes.

A class is a construct used in object-oriented programming (OOP) languages. OOP is a method of programming that introduces objects. Objects are essentially instances that carry data with them and execute defined actions on that data. A class represents the blueprint of what an object looks like; how the object behaves is defined within the class.

A class is a formal description of an object that you want to create. It contains parameters for holding data and methods that enable interaction with the object and execution of defined actions. Classes can also inherit data and behavior from other classes. Class hierarchies are created to simplify the code by eliminating duplication.

Every language has its own way of implementing classes and OOP. Python is special in this manner because it was created
according to the principle "first-class everything." Essentially, this principle means that everything is a class in Python. In
this overview of classes and OOP, you will skip many details and go just through the essentials of writing classes. For more
in-depth information about OOP, refer to the documentation of your language of choice.

It is time to build your first class in Python and then create object instances from it.
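
A minimal sketch of such a class (the class name and message are illustrative) could be:

    class Device:
        def __init__(self):
            # Called automatically for every new object instance.
            print("Creating a new device object")

    device1 = Device()   # prints the message
    device2 = Device()   # a second, independent instance with its own data
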
Creating a new device object results in printing the message that it is being created. You can create as many objects as you
want; each will be a new isolated instance of the device object with its own copies of data.

The method __init__ is a special initialization method that is, if defined, called on every object creation. This method is
generally known as a constructor and is usually used for initialization of object data when creating a new object.

The variable self represents the instance of the object itself and is used for accessing the data and methods of an object. In
some other languages, the keyword "this" is used instead.

Besides the __init__ method, there are many more built-in methods that you can define inside your class. In the Python language, these are known as magic methods. With them, you can control how objects behave when you interact with them. To control how an object is displayed when you print it, for example, define the __str__ magic method. Using the __lt__, __gt__, and __eq__ magic methods, you can write custom sorting and comparison behavior, or, by defining the __add__ method, specify how the addition of two objects with the "+" operator works.

Objects usually carry some data with them, so add an option to give the device object a hostname and a message of the
day (motd).
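
A sketch of that extension, with an illustrative default motd, might look like this:

    class Device:
        def __init__(self, hostname):
            print("Creating device", hostname)
            self.hostname = hostname
            self.motd = "Authorized users only"   # illustrative default

    device = Device("csr1kv-1")
    device.motd = "Lab device - do not touch"   # motd changed after creation
    device.hostname = "csr1kv-2"                # hostname can be changed too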

When you create an object, you pass arguments that are based on the signature of the class constructor. The device class now accepts one parameter, the hostname. The motd variable can be changed after the object is created, and after initialization, the hostname can also be changed to something else.

Besides storing values in objects, classes should also provide the possibility to perform domain-specific actions. To develop these actions, you create code constructs called methods.

Methods in Python are very similar to functions that you already learned about. The difference is that methods are part of
classes and they need to define the self parameter. This parameter will be used inside methods to access all data of the
current object instance and other methods if necessary.

Add a show() method to the device class, which will enable you to print the current configuration of a device object.
Create a couple of objects and try to print the configuration using the new show() method.
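
A sketch of that step, building on the illustrative Device class above, could be:

    class Device:
        def __init__(self, hostname):
            self.hostname = hostname
            self.motd = "Authorized users only"

        def show(self):
            # Print the current configuration of this device instance.
            print("hostname:", self.hostname)
            print("motd:    ", self.motd)

    d1 = Device("edge-fw-1")
    d2 = Device("edge-fw-2")
    d1.show()
    d2.show()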

Your next step will be to investigate class inheritance. Generally, every object-oriented language supports inheritance, a
mechanism for deriving new classes from existing classes.

Inheritance allows you to create new child classes while inheriting all the parameters and methods from the so-called
parent class. This can improve code reusability even further.

Extend the previous example with a Router class, which inherits from the device class. You will also define an Interface class with some properties and use it in the device object to initialize the interface variable.

Note:
You will not add any custom implementation to the Router object yet, so you can use the pass keyword in Python, which
will enable you to leave the implementation empty.
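
A minimal sketch of that extension (the Interface properties are illustrative) might be:

    class Interface:
        def __init__(self, name="GigabitEthernet1", ip_address=None):
            self.name = name
            self.ip_address = ip_address

    class Device:
        def __init__(self, hostname):
            self.hostname = hostname
            self.interface = Interface()   # initialize the interface variable

        def show(self):
            print("hostname:", self.hostname)
            print("interface:", self.interface.name)

    class Router(Device):
        pass   # no custom implementation yet; everything is inherited

    router = Router("core-rtr-1")
    router.show()   # works because show() is inherited from Device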

Create a router object. Even though the Router class clearly does not have anything defined in it yet, it inherits everything from the device class.

The router object originates from the parent class device but can be now extended further to support parameters and
methods that a router might require for its operations.

These are some of the constructs that are used in programming to improve readability, modularity, and efficiency of your
codebase. Out of these constructs, you will be able to build and understand other, more sophisticated design patterns.

Modular Design Benefits 

Design and development of software could not withstand the ever-changing requirements and technologies if there was
no way to write efficient modular code.

You learned ways of designing such software using functions, modules, and classes. These constructs will help you with
modularity and better organization of your code, but just using them does not guarantee better results. Things can get
even more complicated if they are misused or overused.

So how can you write your code using constructs like classes and modules, so that they will bring better readability and
modularity of your system?

Previously, when discussing functions, you were introduced to a couple of suggestions, such as keeping functions short, making them do one thing, and choosing names that reveal their intent, and similar advice can be drawn from them. While these suggestions can be applied across a project to improve the codebase, there are other factors that lean more toward the reusability and modularity of your code.

Systems are expected to have a long lifetime; therefore, it is crucial to understand and follow the techniques and best practices that promise more maintainable and modular software.

Maintaining Modularity
Modules make the code easier to understand, and they reduce the overwhelming feeling when you delve into the code for improvements or feature additions. They should encapsulate parts of the functionality of a system and constrain how these parts interact with each other.

Too many interactions between different modules can lead to confusion and unwanted side effects when something needs
to be changed in a module. Responsibility of a module needs to be transparent so that you can reasonably know what its
function is.

A change in one part of the application code should not affect or break other parts of the system. To enable modules to evolve independently, they need to have well-defined interfaces that do not change. The logic behind that principle is that the implementation behind an interface can then be changed without affecting the modules that depend on it.

Here are some design guidelines to consider:

 Acyclic dependencies principle

 Stable dependencies principle

 Single responsibility principle

Note:

Only a few of the design principles are mentioned here; some of them belong to the SOLID design principles introduced by
Robert C. Martin. Different approaches can be found in other design principles.

The acyclic dependency principle ensures that when you split your monolithic application into multiple modules, these
modules—and the classes accompanying them—have dependencies in one direction only. If there are cyclic dependencies,
where modules or classes are dependent in both directions, changes in module A can lead to changes in module B, but
then changes in module B can cause unexpected behavior in module A, from where changes originated. In large and
complex systems, these kinds of cyclic dependencies are harder to detect and are often sources of code bugs. It is also not
possible to separately reuse or test modules with such dependencies.

Depending on your development ecosystem and language of choice, build tools can help you identify circular dependencies. These tools often rely on the fact that the modules' source code is part of the same code repository. You might decide to move a module into its own code repository, either because you want to reuse that module in other projects or just because you want to track and version it separately from other modules and application code. In this case, you need to be especially careful not to end up with a circular dependency.

If this kind of dependency happens to occur anyway, then there are strategies to break the cyclic dependency chain. High-
level modules that consist of complex logic should be reusable and not be affected by the changes in the low-level
modules that provide you with application specifics. To decouple these two levels of modules, strategies like dependency
inversion or dependency injection, which rely on introduction of an abstraction layer, can be used.

Dependency inversion is defined as follows:

 High-level modules should not depend on low-level modules. Both should depend on abstractions.

 Abstractions should not depend on details. Details should depend on abstractions.

There is a difference in how to implement this approach between statically typed languages, like Java or C#, and
dynamically typed languages, like Python or Ruby. Statically typed languages typically support the definition of an
interface. An interface is an abstraction that defines the skeleton code that needs to be extended in your other custom
classes. Your code depends on abstraction and implements the details of the desired action. An example would be having a
class device that defines a show() interface. The interface does not implement how the show() method should work.
Instead, you can create other, more low-level classes, like a Firewall class, that extends and implements
the show() method. The implementation is then specific to the low-level class. A Firewall class might have a different
implementation than a LoadBalancer class, for example.

Developing against predefined interface abstractions promotes reusability and provides a stable bond with other modules.
Besides using abstractions as a means of breaking the cyclic dependency, you also benefit from easier changes of the
implementations and more flexible testability of the code, because you can produce mock implementations of interfaces
in your tests.

In dynamically typed languages, no explicit interface is defined. In these types of languages, you would normally use duck
typing, which means that the appropriateness of an object is not determined by its type (like in the statically typed
languages) but rather by the presence of properties and methods. The name "duck typing" comes from the saying, "If it
walks like a duck and it quacks like a duck, then it must be a duck." In other words, an object can be used in any context,
until it is used in an unsupported way. Interfaces are defined implicitly with adding new methods and properties to the
modules or classes.

Observe the following example of the app.py module.


The signature of db.py is as follows:

The initialization class is part of the init module.
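
A minimal sketch of three such modules, exhibiting the described cycle, might look like this (only the runTest() name comes from the text; the other names are illustrative):

    # file: app.py
    import db

    def runTest():
        # Check whether the app can run; called back by the init module.
        return True

    if __name__ == "__main__":
        db.setup()
        print("App started")

    # file: db.py
    import init

    def setup():
        init.Initializer().initialize()
        print("Database ready")

    # file: init.py
    import app   # cyclic: init depends back on app

    class Initializer:
        def initialize(self):
            if app.runTest():
                print("Initial data loaded")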

This is an example of a cyclic dependency where the app module uses the database module for setting up the database,
and the database module uses the init module for initializing database data. In return, the init module calls the app
module runTest() method that checks if the app can run.

In theory, you need to decide in which direction you want the dependency to progress. The heuristic is that frequently
changing, unstable modules can depend on modules that do not change frequently and are as such more stable, but they
should not depend on each other in the other direction. This is the so-called Stable-Dependencies Principle.

Observe how you can, in a simple way, break the cyclic dependency between these three modules by extracting another
module, appropriately named validator.
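
Keeping the same illustrative names, the refactored sketch could look like this, with all dependencies now pointing in one direction (app -> db -> init -> validator):

    # file: validator.py
    def runTest():
        # Stable, dependency-free check that other modules can rely on.
        return True

    # file: init.py
    import validator

    class Initializer:
        def initialize(self):
            if validator.runTest():
                print("Initial data loaded")

    # file: db.py
    import init

    def setup():
        init.Initializer().initialize()
        print("Database ready")

    # file: app.py
    import db

    if __name__ == "__main__":
        db.setup()
        print("App started")
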
The app class no longer implements the logic of the runTest() method, so the init module does not reference it anymore.

The cyclic dependency is now broken by splitting the logic into separate modules with a more stable interface, so that
other modules can rely on using it.

Note

Python supports explicit abstractions using the Abstract Base Class (ABC) module, which allows you to define abstract classes and methods in a way that is closer to statically typed languages.

Modules with stable interfaces are also better candidates for moving to separate code repositories, so they can be reused in other applications. When developing a modular monolithic application, it is not recommended to rush to move modules into separate repositories if you are not sure how stable the module interfaces are. A microservices architecture pushes you in that direction, because each microservice needs to be an independent entity in its own repository. Cyclic dependencies are harder to resolve in the case of microservices.

A single module should be responsible for covering one intelligible and specific technical or business feature. As Robert C. Martin, coauthor of the Agile Manifesto, said, a class should have one single reason to change. That makes it easier to understand where the code that needs to be changed lies when a part of an application must be revisited. When a change request comes, it should originate from a tightly coupled group of people, either from the technical or the business side, asking for a change to a single, narrowly defined service. A request from a technical group should not cause an unwanted change in the way the business logic works. When you develop software modules, group the things that change for the same reasons, and separate those that change for different reasons. When this idea is followed, the single-responsibility design principle is satisfied.

Modules, classes, and functions are tools that should reduce the complexity of your application and increase reusability. Sometimes, there is a thin line between modular, readable code and code that is getting too complex.

Next, you will learn about how to improve modular designed software even further.

Loose Coupling

Loose coupling, in software development vocabulary, means reducing the direct dependencies of a module, class, or function on other modules, classes, or functions. Loosely coupled systems tend to be easier to maintain and more reusable.

The opposite of loose coupling is tight coupling, where all the objects mentioned are more dependent on one another.

Reducing the dependencies between components of a system results in reducing the risk that changes of one component
will require you to change any other component. Tightly coupled software becomes difficult to maintain in projects with
many lines of code.

In a loosely coupled system, the code that handles interaction with the user interface does not depend on the code that handles remote API calls. You should be able to change the user interface code without affecting the way remote calls are made, and vice versa.

Your code will benefit from designing self-contained components that have a well-defined purpose. Being able to change a part of your code without having to worry that other components will break is crucial in fast-growing projects. Changes are smaller and do not cause a ripple effect across the system, so the development and testing of such code is faster. Adding new features is easier because the interface for interacting with a module is separated from its implementation.

So how do you define if a module is loosely or tightly coupled?

Coupling criteria can be defined by three parameters:

 Size

 Visibility

 Flexibility

These criteria are based on the work of Steve McConnell, an author of many textbooks and articles on software development practices.

The number of relations between modules, classes, and functions defines the size criterion. Smaller objects are better because it takes less effort to connect to them from other modules. Generally speaking, functions and methods that take one parameter are more loosely coupled than functions that take 10. A loosely coupled function should not have more than two arguments; more than two should require justification. Functions that look similar or share common code should be avoided and refactored if they exist. A class with too many methods is not an example of loosely coupled code.

When you implement a fancy new solution to your problem, you should ask yourself whether your code became less understandable by doing so. Your solutions should be obvious to other developers. You do not get extra points for hiding data or passing it to functions in a complex way. Being straightforward and visible is better.

For your modules to be flexible, it should be straightforward to change the interface from one module to the other.
Examine the following code:
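
A sketch of the situation described next could look like this (the module and attribute names beyond add() and addressDb are illustrative):

    # file: addressDb.py
    _addresses = []

    def add(interface):
        # add() reaches into the caller's interface object to read the address,
        # so it is tied to objects that have an ip_address attribute.
        _addresses.append(interface.ip_address)

    # file: device.py
    import addressDb

    class Interface:
        def __init__(self, ip_address):
            self.ip_address = ip_address

    class Device:
        def __init__(self, hostname):
            self.hostname = hostname
            self.interface = Interface("10.0.0.1")

        def save_address(self):
            addressDb.add(self.interface)   # passes the whole interface object
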
The device module interacts with the add() function of a module addressDb.

At first sight, this code looks good. There are no cyclic dependencies between the modules (there is actually just one dependency), the function takes one argument, and there is no data hiding or global data modification. What about flexibility? What if you have another class called "Routes" that also wants to add addresses to the database, but it does not use the same concept of interfaces? The addressDb module expects to get an interface object from which it can read the address. You cannot use the same function for the new Routes class; the rigidness of the add() function therefore makes the code tightly coupled. Try to solve this using the next approach.
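
Keeping the illustrative names from the previous sketch, the more flexible version might look like this:

    # file: addressDb.py
    _addresses = []

    def add(address):
        # Loosely coupled: the caller passes the plain value to store, so add()
        # no longer depends on the shape of the caller's objects.
        _addresses.append(address)

    # Any caller can now use it, for example the device module:
    #     addressDb.add(device.interface.ip_address)
    # or the new Routes class, which has no Interface object at all:
    #     addressDb.add("192.0.2.0/24")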

The add() function now expects an address string that can be stored directly, without traversing the object first. The function is no longer tied to the interface object; it is the responsibility of the caller to send the appropriate value to the function.

Note

Python, which uses duck typing, does not require an object of the exact interface type, only an object that conforms to what the add() function expects. In statically typed languages, the correct object type would be required.

If you fundamentally change the conditions of a function in a loosely coupled system, no more than one module should be
affected.

The easier a module or function can call another module or function, the less tightly coupled it is, which is good for the
flexibility and maintenance of your program code.

Cohesion

Cohesion is usually discussed together with loose coupling. It describes whether the classes, modules, and functions within a unit all aim for the same goal. The purpose of a class or module should be focused on one thing and not too broad in its actions. Modules that contain strongly related classes and functions can be considered to have strong, or high, cohesion. The goal is to make cohesion as strong as possible. By aiming for strong cohesion, your code should become less complex, because the logically separated code blocks will have a clearly defined purpose. This should make it easier for developers to remember the functionality and intent of the code.

The save_and_notify() function is an example of low cohesion, because even the name suggests that the code in the
function performs more than one action; it backs up the data and notifies the users.

Note

Do not rely on a function name to identify if it has high or low cohesion.

A function should focus on doing one thing well. When a function executes a single thing, it is considered a strong,
functional cohesion as described by McConnell.

Here is an example of code with lower cohesion:
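
A minimal sketch matching the description that follows could be (only the log() function and the logdata variable are named in the text; the rest is illustrative):

    logdata = []

    def log():
        # Two related operations in a fixed order on the same data:
        # write the collected logs to disk, then clear them.
        with open("device.log", "a") as logfile:
            for line in logdata:
                logfile.write(line + "\n")
        logdata.clear()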

In the log() function, the collected logs in the logdata variable are first being logged to disk, and then the same data is
cleared in the next step. This is an example of communicational cohesion, in which there are multiple operations that need
to be performed in a specific order, and those steps operate on the same data. Instead, you should separate the
operations into their own functions, in which the first logs the data, and the second—ideally, somewhere close to the
definition of the variable—clears the data for future usage.

Another example is logical cohesion, which happens when there are multiple operations in the same function and the
specific operation is selected by passing a control flag in the arguments of a function.
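
A hypothetical example of such logical cohesion, with illustrative names, might be:

    def manage_device(device, operation):
        # One function, three unrelated actions, selected by a control flag.
        if operation == "backup":
            print("Backing up", device)
        elif operation == "restore":
            print("Restoring", device)
        elif operation == "reset":
            print("Resetting", device)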

Instead of relying on a flag inside a single function, it would be better to create three separate functions for these
operations. If the task in a function would not be to implement the operations, but only to delegate commands based on a
flag (event handler), then you would have stronger cohesion in your code.

Architecture and Design Patterns

As you have learned so far, modularity and reusability of the code that makes your program run are priorities of good software design and architecture. It is difficult to discuss software design without talking about OOP, a concept for binding data and defining the behavior of programming objects.

You already saw that classes define the blueprint of how an object looks and behaves and that you can create class
hierarchies in which the child class inherits the parent default behavior and can also enhance its role with its own
implementations.

The concepts that define what OOP enables:

 Abstraction

 Encapsulation

 Inheritance

 Polymorphism

The ambition of an abstraction is to hide the logic implementation behind an interface. Abstractions offer a higher level of
semantic contract between clients and the class implementations. When defining an abstraction, your objective is to
expose a way of accessing the data of the object without knowing the details of the implementation. Abstract classes are
usually not instantiated; they require subclasses that provide the behavior of the abstract methods. A subclass cannot be
instantiated until it provides some implementation of the methods that are defined in the abstract class from which it is
derived. It can also be said that the methods need to be overridden.

In statically typed languages, abstract classes and interfaces provide the explicit means of defining an abstraction, while in dynamically typed languages, the usage of duck typing gives you the ability to achieve almost the same effect, except that the contract between the client and a class is not as formal. While duck typing has the advantage of being very flexible, it sometimes misses the conventional rules that statically typed languages introduce. Python ABC, a Python library, brings your code a step closer to the discipline of statically typed languages and their definitions of abstract classes and interfaces. Abstract methods need to be implemented by the classes that inherit them. Observe the following example of using Python ABC.
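
A minimal sketch, tying back to the earlier Device/Firewall description (the printed text is illustrative), might be:

    from abc import ABC, abstractmethod

    class Device(ABC):
        @abstractmethod
        def show(self):
            """Subclasses must provide their own implementation."""

    class Firewall(Device):
        def show(self):
            print("Firewall: showing rule counters")

    # Device() would raise TypeError, because abstract classes
    # cannot be instantiated until show() is overridden.
    Firewall().show()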

In OOP, different objects interact with each other in their runtime. One object can access the data and methods of another
object, no matter if the type of the object is the same or not. Many times, you want some of the data and methods to stay
private to the object so that the object can use them internally, but other objects cannot access them. Encapsulation in
OOP conceals the internal state and the implementation of an object from other objects. It can be used for restricting what
can be accessed on an object. You can specify that parts of the data can be accessed only through designated methods and not directly; this is known as data hiding. In Java, C#, and similar statically typed, compiled languages, you will find that there is an option to explicitly define variables and methods that other objects cannot access, using the protected keyword or the private keyword, which also restricts the child classes from accessing them.

In Python, encapsulation in the sense of hiding data from others is not so explicitly defined and is interpreted rather as a convention. Python does not have an option to strictly define data as private or protected. Instead, you use the notation of prefixing a name with an underscore (or double underscore) to mark it as nonpublic. When you use the double underscore, name mangling occurs; this means that a name prefixed with two underscores is, at run time, prepended with the class name. If you have a __auditLog() method in a Device class, the name of the method becomes _Device__auditLog(). This is helpful to prevent accidents where subclasses override methods and break the internal method calls on a parent class. Still, nothing prevents you from accessing the variable or method, even though, by convention, it is considered private.

The following code calls the private auditLog() method for every call of the action() method.
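
A minimal sketch of that behavior (the messages are illustrative) could be:

    class Device:
        def __auditLog(self, message):
            # Name mangling turns this into _Device__auditLog at run time.
            print("AUDIT:", message)

        def action(self):
            print("Performing action")
            self.__auditLog("action executed")

    d = Device()
    d.action()
    # d.__auditLog("x") would fail, but d._Device__auditLog("x") still works,
    # which shows that the "privacy" is only a convention.
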
During these lessons, you have already encountered examples of class inheritance and the ability to build new classes on top of existing ones. Polymorphism, in OOP, goes hand in hand with class hierarchy. When a parent class defines a method that needs to be implemented by the child class, this method can be considered polymorphic, because each implementation has its own way of providing the solution that the higher-level class proposed. Polymorphism can be found in any setup where an object can take multiple forms. When a variable or a method accepts more than one type of value or parameter, it is considered polymorphic as well.

Designing object-oriented applications is not easy and requires a solid understanding of code constructs and experience in
developing reusable and readable code. Your design should be definite for the problem you are trying to solve with the
application, but at the same time generic enough so that it can be extended in the future without major problems.

It is important to understand how to define class interfaces, hierarchies, and relationships between different modules and classes, and also to decide which programming languages to use—whether they support the libraries that you will need—which database to use, how to use it, and how all the pieces of this puzzle should communicate together cohesively. These are questions that fall into the architecture and design patterns paradigm, and you should ask them before you start writing any code.

Unified Modeling Language

Codebases that you encounter in the real world are written in a specific programming language. That said, there are not
many developers that are proficient in every single programming language, so how can you find the right communication
layer? When you are talking about software design, it is vital that you have a common language with all stakeholders and
developers on a project. Capturing the intent of software design, no matter the implementation technology, is the goal of
having a unified language that is simple enough for everybody to understand.

The Unified Modeling Language (UML) was created because programming languages, or even pseudocode, usually do not offer a high enough level of abstraction. UML helps developers create a graphical notation of the programs that are being built. UML diagrams are especially useful for describing, or rather sketching, code written in an object-oriented style.

As an example, look at the following UML class diagram.

UML lets you sketch your program before you start writing code for it. It can define many details of a class and its connections to other classes. In this example, the Router object inherits all the fields and methods from the Device object. Class inheritance is shown with a solid line and an arrow at the end.

You can use UML as part of the documentation for a program, or use it for reverse-engineering an existing application, to
get a better picture of how the system works.

Architectural Patterns

The patterns that belong to the group of architectural patterns operate at a higher level than design patterns and are used to design large-scale components and structures of a system. The architecture of an application is the overall organization of the system, and it has a broader scope.

Architecture can be referred to as an abstraction of the entire system, with a focus on certain details of the
implementation. Different parts of an application should communicate over their public interfaces, with their
implementation being hidden behind them. Architecture is concerned with the API or public side of the system that carries
the communication between components and is not concerned with implementation details.
Every system consists of components and relationships between them. Even a system with a single component can be treated as an example of an architecture, although it might be too simple for anyone to consider adopting it. Architectures are composed to solve a specific problem. Some compositions happen to be more useful than others, so they become documented as architecture patterns that people can refer to.

An architecture is composed of multiple structures that include software components and the relations between them. A documented software architecture supports communication and understanding between stakeholders, because the abstraction of some parts allows nontechnical people to understand the requirements and express their concerns. It enables prediction of how well the system can perform and helps with onboarding new engineers onto the project.

The desired attributes of a system need to be recognized and considered when designing its architecture. If your system needs to be highly secure, then you have to decide which elements of the system are critical and how you will limit the communication toward them. If higher traffic is expected at peak times, then you need to take care of the performance of the different elements and how to allocate resources. When you need your application to be constantly available, then you will think in terms of high availability and how to respond to a system fault. These kinds of attributes drive the decision on the type of software architecture to use.

A decision on software architecture can be made while studying these characteristics of a system:

 Performance

 Availability

 Modifiability

 Testability

 Usability

 Security

What a system can do for its customers and owners is defined by the functional requirements, but those do not necessarily determine which architecture is suitable for a specific use case. As an example, say that users want to list data that concerns them urgently; the performance requirements could then mean designing a highly responsive application for when this kind of event occurs. As another example, if availability concerns your users, your architecture needs to consider how a failover can be performed when data cannot be accessed. After an application is released, changes usually soon follow.

Developers constantly strive to enhance the system because of new technology, market requirements, or security threats. To enable developers to push out changes, you should first make sure that the changes will not break existing functionality, so the testability of a system is a top concern for frequently changing code. The quality of a system from the end-user perspective involves ease of use, efficiency, how fast one can learn the features it offers, and how much the system delights users so that they return to it. The tasks within usability help define such requirements.

Note

Failover is a tactic of switching to a redundant computer server or system when a failure of the previously active
application occurs.

It is easier to poorly design a system than it is to do it right. Many good design decisions that are proven to work in practice
are already discovered and made available to you for reuse.
Some of the commonly known software architecture patterns:

 Layered or multitier architecture pattern

 Event-driven architecture pattern

 Microservices architecture pattern

 Model-View-Controller (MVC) architecture pattern

 Space-based architecture

It is time to investigate one of the more common architecture patterns.

Layered Architecture Pattern

The layered architecture pattern, also known as the multitier or n-tier architecture pattern, is one of the most common
general purpose software architecture patterns. This design pattern closely relates to the organizational structures of most
companies (Conway's law), so it is an instinctive choice for most application developments that concern enterprise
businesses.

Software components within this pattern are formed into horizontal layers, where each layer performs a specific role in
the application. There is no specification on the number of layers that you should use, but the combination of four layers is
the most frequently used approach. The four typical layers that are used in the architecture patterns are the presentation,
business, persistence, and database layers. For larger and more complex applications, more layers can be used.

Note:

The business and persistence layers are sometimes combined, so you end up with three layers, or so called three-tier
architecture.

The responsibility of the presentation layer is to handle the logic for user interface communication. It processes the input of the user, which is then passed down to the other layers that handle the request and return the results, which are then formatted and presented to the user interface with the help of the presentation layer. The business layer performs specific business rules based on the events that happen in the system or requests that originate from the user. Handling customer data, processing orders in a web store, and any kind of calculation or action that is considered part of the business functionality belongs in the business layer.

The persistence layer handles the requests for data in the database layer. When the business layer needs to retrieve or save data, it passes the request to the persistence layer, which performs the required action using the query language supported by the database layer that stores all the data—for example, Structured Query Language (SQL).

Each of the layers handles its own domain of actions. The presentation layer is not concerned about how to store data in
the database, the same as the persistence layer is not concerned with how to format data on the user interface. Business
logic (when using a persistence layer) will not talk to the database layer directly. Instead, it will retrieve data from the
persistence layer, perform some business-related actions on the data, and then pass it up to the presentation layer or back
down to persistence.
One of the most prominent features that this architecture promotes is the separation of concerns among the different layers. Different pieces inside the layers deal only with the logic that relates to that specific layer. With this kind of separation between the layers, it is easier to develop and maintain applications. When a request moves from one layer to another, it must go through all the layers in between. If the request originates in the presentation layer and finishes at the database layer, it first must go through the business and persistence layers.

You should not communicate directly with the database layer from the presentation layer; otherwise, you violate the
layers of isolation principle. This principle again refers to the idea that development changes in one layer of the
architecture should not affect changes in other layers. If you allow direct communication of the presentation layer with the
persistence or database layer, then these two become tightly coupled. If you change the technology of the database layer,
it would be required to change the presentation logic, which, as a side effect, might affect other parts of the presentation
components.

Sometimes, you need components that are only accessible from some layers and hidden from the others. In this case, it
makes sense to introduce a new layer that will be used only in some scenarios. For example, if you want to create a special
actions layer that is needed by the business layer, it will reside below the business layer. This new layer should be an open
type layer, which means that the business layer does not have to go through it every time a request comes from above. A
layer that is mandatory in the request traversal is referred to as a closed type layer. The open and closed layer concept
helps with identifying the relationship between different layers in the architecture and provides you with the information
about request flow and restrictions.

Each layer should be developed independently with changes that are done in isolation of other layers. The layers in the
layered architecture should have well-defined APIs or interfaces over which they communicate. This way, your system will
be more loosely coupled and easier to maintain and test. As an example, consider the following scenario for an
application.

In the presentation layer, the overall behavior and the user experience of ordering items is developed. When a user
executes an action on the orders page, this event gets delegated to the orders agent that is listening to such events. The
request proceeds in the next layer, where the business logic defines how to process order requests. If needed, the orders
module will communicate with the persistence layer, either for data retrieval or for storing information persistently in the
database. When the request reaches the last layer, the response is generated and processed in the opposite direction and
is finally presented on the user order page.
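
A very small sketch of such a layered flow, with every name and business rule chosen purely for illustration, could look like this; each layer calls only the layer directly below it:

    # persistence layer
    def save_order(order):
        print("INSERT INTO orders ...", order)   # stands in for real SQL

    # business layer
    def process_order(order):
        order["status"] = "accepted"              # illustrative business rule
        save_order(order)
        return order

    # presentation layer
    def submit_order_view(item, quantity):
        result = process_order({"item": item, "quantity": quantity})
        print("Order page shows:", result)

    submit_order_view("switch", 2)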

If a request that comes from the presentation layer needs to reach the database layer, but the business layer (and possibly
some other closed type layers) do not perform any processing of the request—instead, they send the request to the next
layer—then this is considered by some developers as the architecture sinkhole antipattern. It is almost impossible to avoid
this situation, but the goal is to minimize such request scenarios, which might result in changing some layers to be more
open.

The layered or n-tier architecture provides a solid foundation for most applications. It is easy to test, with the possibility of mocking some layers in your tests, and it is simple to develop, because the skills of the developers can be split across the different layers. You can have developers who are more comfortable with front-end development work on the presentation layer. Some people may be more skilled with databases, so they would work on the persistence and database layers. One disadvantage of this architecture is that it is not the easiest to scale compared with a microservices architecture. Most layered development tends to be monolithic, which complicates deployments and makes it harder to create a deployment pipeline. Deployments must be planned because of the downtime they incur.

Software Design Patterns


A good software architecture is important, but it is not enough for establishing good quality of a system. To ensure the
best experience for all parties involved, the attributes, besides being well designed, need also to be well implemented. The
architecture patterns will give you a bigger picture of how components should be assembled. Software design patterns will
dive into separate components and ensure that the optimal coding techniques and patterns are used in order to avoid
highly coupled and tangled code.

As with architectural patterns, the software design patterns provide solutions to commonly occurring obstacles in software
design. They are concepts for solving problems in your code and not libraries that you would import into your project.
Also, when compared with algorithms, design patterns do not define a clear set of actions but rather a high-level definition
of a solution, so the same pattern, applied to different applications, can have different code.

The simplest design patterns are called idioms and are usually tied to a single programming language to solve a specific
deficiency in that language. Most software design patterns are language agnostic and can be distinguished by their
complexity and applicability to the system in observation.

If you have developed software before, you might have already used some of these patterns without knowing it. As you gain experience, you should be able to use design patterns not coincidentally, but because you know that they will solve a problem.

Software design patterns can reduce the time of development because they promote reusability. Loosely coupled code is
easier to reuse than tangled code that was not written with extensibility and flexibility in mind.

Note:

One of the most influential books on object-oriented software design is Design Patterns: Elements of Reusable Object-
Oriented Software, which set industry standards for writing better code by relying on patterns. Many of the concepts that
you will learn here are coming out of the ideas of the book authors— Erich Gamma, Richard Helm, Ralph Johnson, and
John Vlissides—who are commonly known as the Gang of Four (GoF).

When you are reading about software design patterns, you will typically be introduced to a pattern using a name—for
example, adapter pattern. Each pattern will describe the context and problem that it is solving, then a solution that acts as
a template applicable to different programming languages, and finally, the good characteristics that a pattern brings, as
well as the trade-offs of using patterns.

The sections that are usually discussed with the design patterns are:

 Intent

 Motivation

 Applicability

 Structure in a modeling language

 Implementation and sample code

There are many software design patterns. They vary in their applicability and level of abstraction but can still be classified
based on their purpose. The groups in which patterns are divided are creational, which are the patterns concerned with
the class or object creation mechanisms; structural, which deals with the class or object compositions for maintaining
flexibility in larger projects; and behavioral, which describes ways of interaction between classes or objects.

A pattern will describe the context and problems that it solves, together with a solution in a modeling language and code
examples.

The patterns follow many design principles. The ideas of encapsulation for minimizing side effects when changing parts of
code, SOLID principles, and depending on abstraction, not implementations, are intertwined into design patterns that can
be adopted for your problem solving.

As an example, observe the singleton pattern. The intent of this pattern is to ensure that a class has only one instance while providing a global access point to it. The motivation for it are classes for which, once an object exists, you expect to get that same object back instead of a fresh new one. This way, you can control access to a resource shared among other objects—for example, a database object shared by other client code.

Using this pattern, you can provide global access to an object without having to store it in a global variable. Global variables might seem like a clever way of providing access to any other object, but they also pose a threat, because a global variable can easily be overwritten with other content. The singleton pattern enables access to an object from anywhere, and it also protects that object from being overwritten. This protection is achieved by making the class constructor private and creating a static method that, when called by your other code, returns the original instance to the caller.

Here is how you would implement the singleton pattern in the Python language.
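
A sketch that matches the behavior described next (the exact course code may differ) could be:

    class DataAccess:
        __instance = None

        def __init__(self):
            if DataAccess.__instance is not None:
                raise Exception("DataAccess is a singleton, use get_instance()")
            DataAccess.__instance = self

        @staticmethod
        def get_instance():
            if DataAccess.__instance is None:
                DataAccess()                 # create the one and only instance
            return DataAccess.__instance

    db1 = DataAccess.get_instance()
    db2 = DataAccess.get_instance()
    print(db1 is db2)    # True: both names refer to the same object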

Class DataAccess() can be instantiated only once. The __init__() constructor first checks if an object instance exists, and if
it does, it raises an error. If your code needs to access the object, it should retrieve the instance using
the get_instance() method.

MVC Architecture Pattern

Patterns in software development are discovered when higher- or lower-level components are arranged into one entity. Some arrangements work better than others, and these get documented and prepared for reuse. The Model-View-Controller (MVC) pattern is one of the best-known and most widely used architectural patterns. It was introduced to the world of software development in the 1970s as a part of the object-oriented programming language Smalltalk. Later, the principles of MVC evolved into a general concept, easily adopted by programming languages, and reached the programming community as part of various frameworks. MVC played an influential role mainly in user interface frameworks and is still relevant and used in interactive desktop and web applications today.

Note

A framework in programming is a project, providing you with the skeleton code for building an application that can be
extended by the programmer to create a specific implementation. It usually comes with already implemented handlers for
user authentication, database connection, and similar frequently used actions.

MVC describes the implementation of software components in terms of their responsibilities.

The MVC patterns introduce three object roles:

 Model

 View

 Controller

These three roles enable the separation of concern and independent development of each. Views have the logic for
displaying information to the user, and the models encapsulate what that information is. If the model, for example, defines
user data, the view renders that data on the display, which can be a web browser or a mobile or desktop application. Any
changes that users make on the view are handled in the controller component. The controller takes input, makes changes
on the model if necessary, and informs the view component to update the presentation accordingly.

The dependencies between the components govern the behavior of the system. The view depends on the model and obtains data from it, which enables you to develop the model without knowing what will be in the view component. This separation makes development easier, allowing you to have multiple views for different platforms, and it simplifies adding a new presentation view later without changing the model. It also enables you to test business logic without having to worry about the presentation of the data. A model should be independent of the presentation output and input behavior.

The controller sits between the model and the view, and it depends on both, while the model is not dependent on the
controller. Typically, there is one controller for each view. For example, a user view that has the presentation logic for user
information is using a user controller that handles the communication between the user view and the data model.
Controllers receive inputs from the interaction with the view component, which are translated into requests to the model
component or view component, or both. In this case, you might even create a user model, representing user-specific
information, ways of storing data, and dependencies to other models.

One of the primary features of MVC architecture is the separation between the view and the model, which is considered a
very important design principle in software and should be followed every time that your application has some sort of
dynamic behavior.

In MVC, the view component is the entry point for a user accessing an application. The view component implements how the data is presented to the user. Development of the view requires a different set of technologies than the controller or model components. You may hear the term front end being used to denote the application display that is used for interaction with the system. In the case of MVC, the front end is developed in the view component. Because of the clear separation and difference in technologies, some people specialize in front-end development only, making user interfaces that are pleasant to look at and to use.

The controller and model components are developed using different sets of technologies. In their case, this is referred to
as back-end development, and again, some people prefer working with this mindset and technologies, solving different
kinds of problems than the ones in the view. The separation between the components enables you to program the code in
different programming languages and technologies, as long as you are capable of connecting everything together using the
desired tools.

The tasks of the controller are to accept the input and translate that to a request either to the model or view. The model
defines the state of an object in the application and implements the functions for accessing the data of modeled objects. It
is responsible for registering dependent views and controllers, and notifying related components when internal data is
changed. There can be multiple views, representing different displays in the application, and usually, each view will have
its own controller.

Consider the flow of requests in the following diagram.

A user interacting with the application view initiates an event that the controller receives as an input (1). An example of an
event is a user logging in to the system with their credentials, or it can be registration to a website, providing all the
necessary information in the request, or simply changing from one view to another in the application. The controller
receives the request that the user initiated from the view and interprets it against the set of rules and procedures that are
defined inside the controller for that view. The controller then sends the state change request to the model component,
which should manipulate the model data (2). The controller could, for example, send the registration information to the
user model after it concluded that the information that the user entered is valid.
After the request was processed, as a result, the controller can ask the view to change the presentation (3). This change
can happen to route the successful registration to the home page view, for example, or in general to change the view after
a successful or unsuccessful request. The model responsibility is to interpret the requests coming from the controller and
store the new state. Communicating with underlying storage technology is mostly implemented in the model component.

As a consequence of the state change, a change-propagation mechanism is initiated to inform the observers of the model that they should update their presentation based on this new state (4). After the notification, the view component starts the update process and requests the new state directly from the model (5). After the response from the model, the view is redrawn using the new data. The view component might also request the state from the model after the controller has requested a change on the view. Likewise, you can register the controller components to listen to the change propagations so that they get updated with the new state on changes to the model. When the state of the model controls whether something is available in the view, then it is the responsibility of the controller to update the view with respect to the model.

Sometimes, it might seem that you are complicating the application with defining a controller, when you could have the
logic for controlling requests to the model inside the view component. The reason that you want to keep the controller
separate is that if you do not, the view becomes responsible for two different things. The lines between front-end code
and back-end code get blurred, and the view becomes more tightly coupled to the model, because it will be very difficult
to reuse it with another model without rewriting your code. The controller helps with separation of concerns and makes
the view decoupled from the model, resulting in a more flexible design.

The MVC architectural pattern is composed from multiple software design patterns that maintain the relationship between
the components and solve problems that MVC introduces by proposing some of its features.

Common design patterns used in MVC:

 Observer pattern

 Strategy pattern

 Composite pattern

 Factory method

 Adapter pattern

MVC lets you change how a view responds to user input without having to change or develop another view component. As
an example, imagine having a registration screen for your application. The view implementing that screen sends the
request to the registration controller. Currently, the registration controller reads the input and translates that to a request
for a model that stores a new registration. Later, you decide you want the registration controller to perform a different set
of actions, such as validating the input and attaching additional data to the request before it proceeds to the model. MVC
enables you to create a new controller with such features and change the view to use this instead.

The relationship between the view and a controller is an example of the strategy design pattern. This behavioral design pattern suggests taking a class that has many different, related implementations and extracting all the implementations into separate classes, which are then called strategies. Strategies can be, as the previous example suggests, interchangeable objects that are referenced by a class—in your example, the class implementing the view. The view class, which requires a strategy, works with the strategies through the same generic interface, which makes the view class independent of the specific strategy class that implements the algorithm and capable of replacing the strategy whenever that is needed.
When your user interface is composed hierarchically of different elements, such as nested frames and buttons that
can be represented as a tree, you are probably using the structural composite design pattern. The MVC architectural
pattern suggests using the composite pattern inside view components. Other patterns can also be found in MVC,
such as factory method, adapter, and decorator, but the main relationships in MVC are constructed using the observer,
strategy, and composite design patterns.

The benefits of having separated views and models in MVC are obvious, but they also introduce an issue. You can have
multiple views that use the same model in your application, so you want to make sure that all active views get updated on
a model state change. As seen in one of the previous figures, a change-propagation mechanism is initiated to inform all
participants that the state has changed. This mechanism in MVC is typically designed using the observer design pattern,
where the view acts as the observer of the model and starts the update procedure when the state changes—that is, when
the model notifies the observers.

In this partial UML diagram showing the model component as a class implementation, you can see the required state and
method fields. A model stores data of some sort or has at least an option to query it from an underlying storage
technology when the view requests it. It also has a list of observers that subscribed to the changes that happen to the
model. It implements methods for attaching and detaching new observers, and a method for notifying them on a state
change. It must implement methods for responding to data requests coming from the view components.

As the next figure shows, an observable—in this case, the user model—will notify all observers on a state change. If a new
observer is required in the form of a view or a controller, it should be easy to attach it to the group of observers.

There are many benefits of using the MVC architectural pattern, but there are also some drawbacks that need to be
pointed out.

Benefits of the MVC pattern:

 Separation of concerns

 Multiple views of the same model

 Flexible presentation changes

 Independent testing of components

 Pluggable components

You can benefit from using MVC because it applies separation of concerns through component-based development.
Each component performs its own specific role. The view component takes care of the presentation side of the application,
the model defines the state of the application, and the controller governs the behavior of user actions against the view and
the model. Components can then be developed and replaced independently when needed.

The advantage of MVC is that you can prepare multiple views that use the same model, and you can also change the
presentation on those views without affecting any of the related components, if the changes are small. The separation of
components allows you to test them independently by mocking a component. For example, instead of using a model that
works with a complex database system that needs more time to set up, you could mock that database using some other
lightweight implementation that would be sufficient for running tests.

Downsides of the MVC pattern:

 Increased complexity

 Excessive number of change notifications

 View and controller tight coupling

 Separate controller is sometimes not needed

One of the drawbacks of the MVC pattern is the increased complexity that appears when a use case requires a
nonoptimal implementation in order to stay within the boundaries of MVC. For example, suppose your view has a form that
is enabled based on some state in the model. The correct way of enabling it within MVC is that an
event is triggered that propagates to the model, then the model notifies its observers to fetch the state, and after that, the
form is enabled. This is a rather complex process for a simple use case, but because an MVC model does not know about
the views directly, it cannot propagate the changes to the views when necessary.

Also, sometimes, there can be a lot of change propagations that do not benefit all the views that use the observed model.
Even though the view and controller are separate components, they are closely related, which might complicate separate
reuse. Some would argue that a controller separation is not needed, especially when the user interface platform already
implements the event handling by itself.

A couple of variations of the MVC architectural pattern have been introduced; they follow similar ideas to MVC but handle
some things differently. Examples of such patterns are Model-View-ViewModel (MVVM) and Model-View-Presenter
(MVP).

Implementing MVC

Now that you know the building blocks of the MVC architectural pattern, it is time to put this knowledge to work. You are
going to observe how to build a basic structure of MVC in Python. As you have seen previously, there are three main
components—model, view, and controller—that you implement together with connections between them. The important
thing is to make sure that these components are not tightly coupled together. The view should be independent from the
model; any changes that are done there should not affect other views that relate to the same model. The views and
controllers should be easily exchangeable, even during runtime.

You should start designing your program on a higher level, using UML class diagrams that can specify how your program
acts. The example covers creating an MVC-based application for creating users and searching for users.

Review the UML class diagram example. The program has one view, UserView, that uses a UserController interface and
UserModel. The controller interface is implemented by the SimpleController class, which defines the create() and get() methods
for working with users. The controller talks to the UserModel to store user information, and the view contacts the model
after the state has been changed.

Now that the high-level design is represented in UML, you should be able to write it all up in a programming language. MVC
as an architectural pattern is not dependent on a programming language or technology, of course. The components and
design patterns are developed differently in different languages, but the idea stays the same. In this case, you write the
program in Python, but you could take any programming language and follow the same style of implementation. First,
look at the code for the user model. The model is, as you already know, not dependent on any other component from
MVC. Obviously, it can depend on libraries and other modules that help with the functionality implementation, such as
database communication. In this case, the code simulates the database by storing the user data in a global variable, but it
could easily be replaced by any other technology and the dependent components would not see the difference.

It is time to connect the theory with practice, starting with the model component in the model.py module.
The responsibility of the model is to define what a desired object can have and what it can do. What a model can have, or
the state of an object, is stored within the user model object. Whatever data is defined on the model usually also reflects
how the data is stored in the underlying database of choice. The user model defines two properties, username and email.
These two values are set on object creation, initiated from the controller, which gets the values from the input that the
user enters to the view component. There are a couple of static methods, get_user() and get_users(). The first method
requires an id argument that is used for searching for the user entry. The second simply returns the entire database of
users. A third method, store_user, which is not static because it is referencing the current object instance through the
"self" object, is used for storing new users, and it calculates a new ID for each entry request it receives. How to store new
users depends on the database type. The model is responsible for storing it correctly; the dependent components use the
model API.

Note

A static method in programming is a method that belongs to a class but is not bound to a class instance, and it
can be called without creating an object.
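
As a rough illustration, a minimal model.py along these lines might look like the following sketch. The exact class, variable, and method names are assumptions based on the description above, and the global USERS dictionary stands in for a real database.

# model.py - hypothetical sketch of the user model described above
USERS = {}  # a global variable that simulates the underlying database


class UserModel:
    def __init__(self, username, email):
        # State set on object creation, initiated from the controller.
        self.username = username
        self.email = email

    @staticmethod
    def get_user(user_id):
        # Search the simulated database for a single user entry by ID.
        return USERS.get(user_id)

    @staticmethod
    def get_users():
        # Return the entire simulated database of users.
        return USERS

    def store_user(self):
        # Calculate a new ID for this entry and store the current instance.
        new_id = len(USERS) + 1
        USERS[new_id] = {"username": self.username, "email": self.email}
        return new_id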

Next, look at the code for the view component inside the view.py module.

The view component in this program implementation is responsible for taking the user input and printing the results. It is,
by definition, dependent on the model, but until you come to the point of implementing the observer, it is just a light
dependency because of the module import. The view in this case is very simple. It is a CLI display application that takes the
user input for the username and email and sends that to the controller for further processing. It also implements a display
procedure for showing a user with a specific identification number in the database.

The two methods that communicate with the controller are loosely coupled. Instead of sending an object that the
controller would need to know how to interpret, they simply send the input in a string format.
The update_display() method is used for updating the view and represents the connection from the controller to the view.
When a controller performs an action that the user requested from the view, it calls back the view to update the
presentation accordingly. In the case of the UserView, it prints to the standard output whether the action was successful
and a message for the user.

Note

The update_display() method is more tightly coupled because it deserializes a user object. In terms of design principles, it
would be better to make it less dependent on the user object. You can do this reimplementation yourself.
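
Taking the note's advice, a simplified view.py might look like the following sketch. The names are assumptions, and update_display() here receives only a status flag and a message string instead of a user object, so it stays loosely coupled. The sketch also assumes the SimpleController class sketched later in this topic.

# view.py - hypothetical sketch of the CLI view, kept loosely coupled
from controller import SimpleController


class UserView:
    def __init__(self):
        # The view holds a reference to a controller strategy; swapping in another
        # UserController implementation only requires changing this line.
        self.controller = SimpleController(self)

    def create_user(self, username, email):
        # Forward the raw input strings to the controller; no user object is built here.
        self.controller.create(username, email)

    def get_user(self, user_id):
        # Ask the controller to look up a user by ID.
        self.controller.get(user_id)

    def update_display(self, success, message):
        # Called back by the controller to present the result of an action.
        status = "OK" if success else "ERROR"
        print(f"[{status}] {message}")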

In this example, the view is a CLI, which is not something that modern applications would use as the view. As already
discussed, the idea behind MVC is that the components can be individually developed and reused as one would like. Once
the CLI application works as desired, it should be trivial to replace the view component with another one that implements
the presentation in desktop form or a web page. As long as the communication interfaces between the components stay
the same, you can switch between different views or use multiple views simultaneously.

The last piece of the puzzle is the controller component located in the controller.py module.

The first class in the module, the UserController class, is an abstraction interface and serves as a contract between the
view and the implementors of the controller. There could be many different controllers for the view component, but they
all need to follow the UserController abstraction blueprint and implement the methods. In this example, the controller will
have two methods—one for creating a user and another serving the view functionality of displaying a user by ID number.

The continuation of the previous code from the controller.py module is the SimpleController class.

This is the implementation of the strategy design pattern, where SimpleController is one of the strategies of the
UserController interface. It fulfills the contract defined by the parent UserController class.

The create() method implementation reviews the user input and prevents the user creation if the input is faulty. This
is a simple showcase of how a controller can be used as a mediator between the view and the model components. If
the input is accepted, the data gets translated into a user object, and the store_user() method is called on that
object. This starts the process of saving a user, which is implemented in the user model. If everything went right, the
controller must inform the view component that the request went through. Because the controller already has a reference
to a view object, the code simply calls the update_display() method with the right values.

The get() method relies on the model component's static method get_user() to retrieve the user from the database.
The get() method does not need to know the implementation details, nor which storage technology is used. It just requests the
user by the ID and expects to get a positive or negative result based on the data in the database. What is also useful in this
controller is the ability to change the output messages without having to alter the view code. Every method on the
controller can prepare the correct response for the view to display. With multiple views, your code does not need to
duplicate these messages across the views. If they are using the same controller, they will receive identical replies.
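
A matching controller.py, again as a hedged sketch with assumed names and message texts, could look like this. It relies on the hypothetical UserModel from the earlier model.py sketch.

# controller.py - hypothetical sketch of the controller abstraction and one strategy
from abc import ABC, abstractmethod

from model import UserModel


class UserController(ABC):
    """Abstraction contract between the view and any concrete controller."""

    @abstractmethod
    def create(self, username, email):
        ...

    @abstractmethod
    def get(self, user_id):
        ...


class SimpleController(UserController):
    def __init__(self, view):
        # Keep a reference to the view so results can be pushed back to it.
        self.view = view

    def create(self, username, email):
        # Act as a mediator: validate the raw input before touching the model.
        if not username or "@" not in email:
            self.view.update_display(False, "Invalid username or email")
            return
        user = UserModel(username, email)
        new_id = user.store_user()
        self.view.update_display(True, f"User stored with ID {new_id}")

    def get(self, user_id):
        # The controller does not care which storage technology the model uses.
        user = UserModel.get_user(user_id)
        if user is None:
            self.view.update_display(False, f"No user with ID {user_id}")
        else:
            self.view.update_display(True, f"Found user: {user}")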

Now that you have the code that is written in MVC ready, you need something that will tie everything together and run the
program. You could write another module—for example, app.py, in which you would import the view.py module and call
the create_user() method with your set of values. You can call the method multiple times, and it should result in creating
multiple users. Then, you could continue interacting with the view and call the get_user() method with an ID to retrieve
the user information. You could also use the Python interactive shell and do the same interactively. The last option is
showcased in the following figure.
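
Assuming the module layout from the sketches above, such an interactive session might look roughly like this (the exact values and messages are illustrative):

>>> from view import UserView
>>> ui = UserView()
>>> ui.create_user("alice", "alice@example.com")
[OK] User stored with ID 1
>>> ui.create_user("bob", "not-an-email")
[ERROR] Invalid username or email
>>> ui.get_user(1)
[OK] Found user: {'username': 'alice', 'email': 'alice@example.com'}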

Now you see that the program works. It would be possible to do the same in just one module, and much simpler, but that
was not the point. Once you start development on bigger projects, where there are more views, controllers, models, and
other moving parts in your program, you need to have a structure in your code; otherwise, you will lack the reusability,
modifiability, and extensibility aspects of your application. Using an architectural pattern like MVC, you can achieve all that
and avoid tangled code that is hard to maintain.

MVC Frameworks

With access to patterns such as MVC, you might find yourself using and reusing ideas all the time in different projects. You
figured out that the application you wrote last month in Python has a nice structure, and you already implemented the
database communication and separated the logic for the view, controller, and model into separate folders. You may have a
place in your project to store additional libraries, and you found a nice way of including them into your application code.
You feel comfortable taking this project and porting it to a new one over and over because you enjoy working in the
project structure that you shaped. It just makes sense and works perfectly for the projects you are working on. It looks like
you just created yourself a framework.

There are many frameworks out there that provide you with generic functionality and can be extended with your own
implementations. They propose a way of working and promise great efficiency, stable design, loosely coupled code,
and more. You end up working inside a defined structure that guides you to write better code and leads you to
use patterns, sometimes without knowing anything about them. There are many frameworks that incorporate the MVC pattern as
their basis, and it could be that you have already used some of them without even knowing that you wrote highly reusable and
flexible code. Using a framework will not prevent you from writing tangled code in the parts where you must develop your
own logic, but the structure should guide you to decrease such problems. There is a learning curve to every new thing in
the technology world, and frameworks require your time as well. Luckily, frameworks are usually well documented and
provided with example projects that you can use to discover their features and way of working.

There are many MVC frameworks written in different programming languages; the pattern itself was first introduced with
Smalltalk-80 in the 1980s.

A more modern list of frameworks that implement the MVC pattern:

 ASP.NET MVC

 Django, Pyramid, web2py

 Symfony, Laravel, Zend Framework

 AngularJS, Ember.js, Wakanda

 Spring

Observer Design Pattern 

The observer design pattern is a behavioral pattern that defines a one-to-many dependency between objects, together with a
subscription mechanism for informing subscribed objects about changes happening on the object they are observing. The
pattern is also known as Event-Subscriber or Listener.

As an example, imagine a web application for streaming music. Users can register on the site and access many different
artists and songs. Every user develops a taste for a certain genre and wants to listen to more music from that genre. You
start to frequently check for new music that your favorite artists put up on the streaming platform in the genre you are
enjoying the most, but it can be days before the playlist is updated. After a while, you see that checking the
playlist again and again is time consuming, and you begin losing interest in the platform.

The developers decide to do something about it, so they establish a system for notifying users about every update in a
certain genre. They have solved part of the problem. You now get notified about new releases in the genre you like the
most, but at the same time, you get updates on the genres you are not interested in, and so does everyone else. It looks
like this solution just uncovered a different kind of problem. Broadcasting the message to everyone is not the best way of
dealing with the updates. The solution would be to let the users decide what they want to receive: let them subscribe
to notifications for a genre and stop receiving messages for genres they are not interested in. If you take the analogy
and put yourself in the shoes of the developers of the platform, you would need to incorporate some sort of subscription
mechanism. The observer pattern can be the right tool for the job.

The motivation for this pattern comes from a practical case where a modular system has a collection of related classes that
need to maintain consistency. If you relate to the example again, the classes could be user and playlist. The user class kept
consistency by periodically calling a public playlist method to find out whether there were any new songs
in the genre. As the example showed, that polling solution did not work out, so the first way of achieving consistency
failed.

Consistency can also be achieved by writing the classes to have more information about each other, but that would make
them tightly coupled and would reduce the reusability of the components. The playlist class could notify the user class
and call the user's public method when a new song arrives in the genre that a user is interested in, but this makes the
playlist too dependent on and tightly coupled with the user. The observer pattern defines how to establish these kinds of
relationships in the right way.

The fundamental elements in this pattern are the observable, or publisher, and the observer, or subscriber. An observable can
have any number of dependent observers. Observers are notified whenever the subject goes through a state change. When a
state change happens, the observers that were notified contact the observable (the subject) to synchronize their state. The
publisher is also known as the subject, and the observers can be referred to as subscribers. The publisher sends the
notifications, but it does not need to know who the subscribers are or how many of them are subscribed. In networking
terminology, you might say that the publisher is sending a multicast notification.

The observer pattern can be used in cases where an abstraction has multiple aspects that depend on one another, and
you are able to encapsulate them in separate objects to increase reusability. It is also useful when changing an object
requires changing other objects, but you do not know which ones or how many. For example, in the MVC architecture, you
can have multiple views that depend on the same model component data. If one view initiates a change of the model
state, you need to make sure that all the other dependent views are updated with the new data.

The idea of the observer pattern is that you add a subscription mechanism to the class from which you want to generate
change notifications. Other objects (observers) can then subscribe to, or unsubscribe from, a certain stream of change
notifications coming from this publisher (observable) class. The implementation is quite simple: the publisher class must
store a list of references to the objects that subscribe to it and provide methods for adding new subscribers and
removing existing ones.

When an event happens on the object implementing the observable, it triggers the notification procedure. This trigger
requires the observable class to go through the list of subscribers, calling their notification methods. You probably will
have multiple objects that will be interested in change notifications from the same publisher. It is important that you do
not write the observers and the publisher to be tightly coupled. Again, developing against interfaces, not implementations,
is preferable. The subscriber classes should implement the same interface that the observable class can call. It also makes
sense to have the same interface on all the publishers if you have more of them.

A publisher in the observer pattern knows only that it has a list of observers, each of which is assumed to conform
to the agreed abstraction interface. The publisher implements a subscription mechanism for attaching and
detaching objects that are interested in the state, together with the notification procedure for informing subscribed
objects about changes (1). In the related figure, the implementation of the publisher class contains the observer pattern
procedures and the business logic. You could also split the observable-related logic into its own class, so you end up having
an event propagator to which your state-holding objects delegate the operation of observer notifications.

The coupling of the publisher and observer should be loose, because the publisher should not know any concrete class
implementation of the observers. The relation between the publisher and observers should therefore go through an
abstraction interface that defines the observer API (2). In the simplest case, it defines an update() method that could also
accept some parameters—for example, which part of the state changed—or other details. The ConcreteObserver class
models the objects subscribing to the publisher. It stores the state that should stay consistent with the referenced
publisher. It must implement the Observer's update() method to stay in sync with the publisher (3).

The listener and publisher objects are created independently at runtime, and they may be used in the program,
providing their general function, before the subscription contract between them is initiated. When the
subscription is needed, the ConcreteObserver object is passed to the attach() method of the publisher class, which
automatically makes that object a part of the notification process of this publisher (4).

As a practical example, explore the Python implementation of the previously mentioned streaming music application,
which incorporates the observer pattern.

First, the observable or publisher class from the playlist module implements the simple logic of adding musical tracks to
the list (its state), together with all the necessary observable data and methods. The parent Playlist class implements all that
logic, while the child classes only inherit it. The program will instantiate the genre objects, not the playlist directly.
Currently, the genre classes do not have any custom additions to the playlist, but if additions are needed, they can be
easily added.

New observers can subscribe to the publisher through the attach() method, which simply adds the
received object to the internal list of observers. The detach() method is not present in this implementation, but it would
be very simple to add: you would call the remove() method on the list, specifying which object to remove. It
would also make sense to align the state of the detached observer with this event, so any state coming from the detached
publisher should be cleaned from the observer. The get_state() method returns the
current state of the program—in this case, the current list of the musical tracks in a genre. This method is used by the
observers as a means of syncing their state.

The private _notify() method takes care of traversing the list of all subscribed objects and calling their implementation of
the update() method, passing a reference to the current playlist child genre object. This object will later be used inside the
observer's update() implementation for accessing the state of the genre publisher.

Note

The self argument inside a Python class refers to the current object instance that you operate on, and can be used to
access the methods and state of that instance.
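
A hedged sketch of such a playlist module is shown below; the class names, the add_track() business-logic method, and the genre subclasses are assumptions based on the description above.

# playlist.py - hypothetical sketch of the publisher (observable) side

class Playlist:
    """Publisher: holds the tracks of a genre and notifies subscribed observers."""

    def __init__(self):
        self._tracks = []     # the state that observers want to stay in sync with
        self._observers = []  # references to subscribed observer objects

    def attach(self, observer):
        # Subscription mechanism: remember who wants to receive notifications.
        self._observers.append(observer)

    def get_state(self):
        # Observers call this to synchronize their state after a notification.
        return list(self._tracks)

    def add_track(self, track):
        # Business logic: a state change that triggers the notification procedure.
        self._tracks.append(track)
        self._notify()

    def _notify(self):
        # Traverse the subscriber list, passing a reference to this publisher object.
        for observer in self._observers:
            observer.update(self)


class JazzPlaylist(Playlist):
    # Genre classes currently only inherit the publisher logic.
    pass


class RockPlaylist(Playlist):
    pass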

Notice that the publisher class is not dependent on the type of the observer object that gets registered with it. The only
contract between the publisher and observer is that the observer implements the update() method and that the publisher
has the get_state() method for state retrieval. You could make the publisher more generic by separating the business logic
and observable logic into separate classes. Then, you would end up having an event propagator for any kind of concrete
publishers.

For implementing the observer or subscriber class, an abstraction is used. First, the abstract observer class defines an
abstract method update(). This is a method that classes need to implement if they want to act as observers in this
program.

The User class is the concrete observer in this case. It inherits the Observer class, which means it must implement
the update() method, both because of the contract with the Observer class and because this is expected by the publisher
implementation. The concrete observer represents a user object that stores track references from all the genres to which
the user is subscribing. The track list gets updated when there is a change in one of the subscribed genre objects and the
publisher calls the update() method of a user object. When this occurs, and the user objects are notified,
the get_state() method on the publisher object that was passed in as an argument gets called. The observer then promptly
updates its state with the state of the publisher, and the user can simulate playing all the tracks that are part of the genre
in which the user is interested.
The current implementation is quite simple. The publisher of the notification could also pass in, for example, the difference
between the previous and the current state, or who initiated the change and similar information that might be necessary
for properly updating the state.
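
A corresponding sketch of the subscriber side might look like this; again, the concrete names and the per-genre bookkeeping are assumptions.

# user.py - hypothetical sketch of the abstract observer and the concrete User observer
from abc import ABC, abstractmethod


class Observer(ABC):
    """Abstraction contract: any class acting as an observer must implement update()."""

    @abstractmethod
    def update(self, publisher):
        ...


class User(Observer):
    def __init__(self, name):
        self.name = name
        self._tracks = {}  # local copy of the state, keyed by genre publisher

    def update(self, publisher):
        # Called from the publisher's _notify(); pull the new state to stay in sync.
        self._tracks[type(publisher).__name__] = publisher.get_state()

    def play(self):
        # Simulate playing every track from all subscribed genres.
        for genre, tracks in self._tracks.items():
            for track in tracks:
                print(f"{self.name} is playing {track} ({genre})")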

To connect all the logic together, you can again use the Python interactive shell and try running the program.

After you create a couple of users and a genre playlist, you can attach (subscribe) the users to that playlist. The application
passes the user objects to the attach() method, which stores them in the playlist's observers list, where they wait to be
called on a state change. A state change is initiated when a new musical track is added to the genre playlist. As seen in the
figure, both users that were subscribed to the jazz playlist were updated. If you call the users' method for playing the
tracks, you will see that they both have the correct state of the playlist, so the observer pattern worked.
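
Assuming the playlist and user sketches above, the interactive session might look roughly like this:

>>> from playlist import JazzPlaylist
>>> from user import User
>>> alice = User("Alice")
>>> bob = User("Bob")
>>> jazz = JazzPlaylist()
>>> jazz.attach(alice)
>>> jazz.attach(bob)
>>> jazz.add_track("So What")        # state change: both observers are updated
>>> jazz.add_track("Blue in Green")  # another state change, another notification
>>> alice.play()
Alice is playing So What (JazzPlaylist)
Alice is playing Blue in Green (JazzPlaylist)
>>> bob.play()
Bob is playing So What (JazzPlaylist)
Bob is playing Blue in Green (JazzPlaylist)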

Applicability of the Observer Pattern

When you need to dynamically sync the object state based on changes in some other object, and you cannot predict how
many or what kind of objects will need this means of updating, the observer pattern might be the right tool for the job.
How to apply this pattern is evident from the examples, but you need to be aware of a couple of things when you decide
to implement it.

The easiest way of mapping publishers and observers is to store the references to the observers in the publisher. That
approach is shown in the examples. If this is found to be inefficient, then it might be better to store the relationships
outside of the publisher class, in a data structure such as a dictionary that may hold the relationship information.
Sometimes, an observer subscribes to more than just one publisher. In this case, you will need to write the update method
to provide the relation to the correct publisher. As in the examples, passing the publisher object to the update method
works well.
The update of the observer is triggered after an event happens on the publisher and the notification is generated. The
notification can be generated directly after the state change, initiated by the publisher as it was in the previous example.
One problem of this approach is that there can be many consecutive calls to the business logic of the publisher, triggering
many notifications, resulting in multiple consecutive synchronization calls between the observable and observer. In such
cases, it could be better that the notification gets triggered by the application code that is using the publisher. This would
mean that the application could call the business logic multiple times, and after it finishes, it calls the notify procedure,
resulting in only one update procedure. To refer to the example from before, instead of users being updated every time a
new track was added, the application can add multiple tracks without triggering the update, and then as the last step, it
calls the notify procedure manually, reducing the number of updates.

When a publisher object is deleted, you need to make sure that the observers do not hold any references to that object
anymore. You cannot just delete the observers, because they might relate to other objects and publishers. The observer
state should be up to date with the publisher. If the publisher does not exist anymore, then the proper handling would be
to remove all those references. This can be implemented by the publisher sending a delete notification to all observers before
its deletion. The observers can then clean up their state.

In a system requiring a more complex subscription mechanism, a viable option for registering observers would be to allow
subscribers to attach to specific events of an observable. The event or part of the state that the observer is interested in
would be passed into the attach() method, and during the update procedure, only that event or part of the state is passed to
the update method. This prevents objects from receiving unnecessary updates.

The observer pattern is applicable to many applications requiring the described behavior. It is easy to introduce new
subscriber classes without mangling the publisher's code, and the same applies in the opposite direction if you develop
against interfaces. One good thing about the observable-observer relationship is that it can be established at runtime, so
there is no need to stop the program and write separate code for every attachment. A drawback of the
relationship is that the observers are notified in no guaranteed order. Because the idea is that the publisher does not know the
details of the objects that subscribe to it, this is understandable. You can, of course, get around it by bending the
rules a little bit, but be aware that doing so usually results in more tightly coupled code, which, on the other hand, might be just
fine and unavoidable in some cases.

Identify Software Architecture and Design Patterns on a Diagram

A new application for publishing short stories is in the making. You received an idea for the components that are necessary
and the connection between them in the form of a class diagram. To understand how the application works, you need to
first understand the diagram to reason about its functionalities and restrictions.
Implement Singleton Pattern and Abstraction-Based Method

After the changes to the diagram, you received the first version of the implemented program. It has some issues that need
to be resolved.

Note

In this activity, you will interact with a recording of a lab made on actual devices, rather than interact with the actual
devices. In this activity, when entering a command, the entire command must be entered exactly as stated, including
matching the case and spaces; shortcuts and tab completion are not supported. In GUIs, you will only be able to use the
menu items required for the activity. You will not be able to use other menu items.
The StoryModel changed slightly to accommodate the contract with the Observer class. It imports the Observer class and
uses it in the subscribe() method to check whether the object that wants to subscribe to the model inherits
the Observer class. If it does not, it cannot be put into the observers list. This way, the abstraction contract is fulfilled
on both the observer and observable sides.
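
A minimal sketch of such a check, assuming an _observers list on the model and an Observer base class like the one used earlier, could look like this:

from abc import ABC, abstractmethod


class Observer(ABC):
    @abstractmethod
    def update(self, publisher):
        ...


class StoryModel:
    def __init__(self):
        self._observers = []

    def subscribe(self, observer):
        # Only objects that inherit the Observer class fulfill the abstraction contract.
        if not isinstance(observer, Observer):
            raise TypeError("Subscriber must inherit the Observer class")
        self._observers.append(observer)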
Section 3: Summary Challenge
Section 4: Introducing Network-Based APIs

Introduction 

In a world that is driven by the Internet and ever-increasing amounts of data, technologies that enable and standardize the
way information is exchanged are very useful. HTTP and HTTP-based application programming interfaces (APIs) form one
of the foundations of the World Wide Web and provide us with a way to communicate with remote systems. An overview
of the HTTP protocol and its relation to different types of APIs will be provided here.

HTTP Overview 

HTTP is an application layer protocol and is the foundation of communication for the World Wide Web. HTTP is based on a
client/server computing model, where the client (for example, a web browser) and the server (for example, a web server)
use a request-response message format to transfer information.

HTTP operates at the application layer of the TCP/IP protocol suite. HTTP presumes a reliable underlying transport layer
protocol, so TCP is commonly used, but UDP can be used in some cases.

By default, HTTP is a stateless protocol; it works without the receiver retaining any client information,
and each request can be understood in isolation, without knowledge of any requests that came before it. HTTP does
have some mechanisms, namely HTTP headers such as cookies, to make the protocol behave as if it were stateful.

The information is media-independent. Any type of data can be sent by HTTP, as long as both the client and the server
know how to handle the data content.

Request-Response Cycle

The data is exchanged via HTTP requests and HTTP responses, which are specialized data formats used for HTTP
communication. A sequence of requests and responses is called an HTTP session and is initiated by a client by establishing
a connection to the server.

Process of the request-response cycle:

1. Client sends an HTTP request to the web server.

2. Server receives the request.

3. Server processes the request.

4. Server returns an HTTP response.

5. Client receives the response.

You can observe this request-response cycle if you enter the developer mode in your browser, visit a website, and
analyze the HTTP requests and responses shown in the network section.

Follow these steps to inspect the request-response cycle.

1. Visit a URL in the browser.

2. Enter the developer mode (usually the F12 key).

3. Select an HTTP session.

4. Check out the request and response headers.

5. Inspect the header data.

6. Inspect the response body data.

HTTP Request

An HTTP request is the message sent by the client. The request consists of four parts:

 Request-line, which specifies the method and location of accessing the resource. It consists of the request method
(or HTTP verb), request Universal Resource Identifier (URI), and protocol version, in that order.

 Zero or more HTTP headers. These contain additional information about the request target, authentication, taking
care of content negotiation, and so on.

 Empty line, indicating the end of the headers.

 Message body, which contains the actual data transmitted in the transaction. It is optional and mostly used in
HTTP POST requests.

HTTP requests do have some constraints. They are limited in size and URL length, and the server will return an error if a limit is
exceeded. While the HTTP standard itself does not dictate any limitations, they are imposed by the specific server
configuration. Very long URLs (more than 2048 characters) and big headers (more than 8 KB) should be avoided.

Request body sizes vary and depend on the server and method type, but it is not unusual to use a size of anywhere from a
few megabytes to a few gigabytes. Body size is determined from the request headers, specifying the content length and
encoding.
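
As a rough illustration (the host, path, and headers are assumptions), a request that queries the server for a customer named Joe might look like this:

GET /customers?name=Joe HTTP/1.1
Host: www.example.com
Accept: application/json

The request-line carries the method, URI, and protocol version; the headers follow; and an empty line ends the header section (a GET request typically has no body).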
The previous example shows an HTTP request where the client queries the server resources for a customer named Joe.

HTTP Response

An HTTP response is the reply to the HTTP request and is sent by the server. The structure is similar to that of the request
and consists of the following parts:

 Status-line, which consists of the protocol version, a response code (called HTTP Response Code), and a human-
readable reason phrase that summarizes the meaning of the code.

 Zero or more HTTP headers. These contain additional information about the response data.

 Empty line, indicating the end of the headers.

 Message body, which contains the response data transmitted in the transaction.

Example of an HTTP response to a request for a customer named Joe.
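
A hedged reconstruction of such a response is shown below; the status-line is followed by the headers, an empty line, and the message body (all values here are illustrative):

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 52

{"id": 1, "name": "Joe", "email": "joe@example.com"}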


HTTP URL

HTTP requests use a URL to identify and locate the resources targeted by the request. The "resource" term in the URL is
very broadly defined, so it can represent almost anything: a simple web page, an image, a web service, or something else.

URLs are composed of predefined URI components (a short parsing example follows the list):

1. Scheme: Each URL begins with a scheme name that refers to a specification for assigning identifiers within that
scheme. Examples of popular schemes are http, https, mailto, ftp, data, and so on.

2. Host: A URL host can be a fully qualified domain name (FQDN) or an IPv4 or IPv6 public address.

3. Port: An optional parameter that specifies the connection port. If no port is set, the default port for the scheme is
taken (default port is 80 for HTTP).

4. Resource path: A sequence of hierarchical path segments, separated by a slash ( / ). It is always defined, although
it may have zero length (for example, https://www.example.com/).

5. Query: An optional parameter, preceded by a question mark (?), that passes a query string of nonhierarchical data
to the server.

6. Fragment: Also an optional parameter, the fragment starts with a hash ( # ) and provides directions to a secondary
resource (for example, specific page in a document). It is processed by the client only.
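
As a quick sketch, Python's standard urllib.parse module can split a hypothetical URL into these components:

from urllib.parse import urlsplit

# A hypothetical URL that uses every component described above.
parts = urlsplit("https://www.example.com:8443/people/alice?format=json#profile")

print(parts.scheme)    # https
print(parts.hostname)  # www.example.com
print(parts.port)      # 8443
print(parts.path)      # /people/alice
print(parts.query)     # format=json
print(parts.fragment)  # profile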

Two commonly mentioned terms in relation to URLs are URNs and URIs:

 URI identifies a resource: ../people/alice.

 URL also tells where to find it: http://www.example.com/people/alice.

 URN identifies a resource using a (made-up) urn scheme: urn:people:names:alice.

A URI is used to unambiguously identify a resource and is a superset of URLs and Uniform Resource Names (URNs), which
means that all URNs and URLs are URIs, but not vice versa. While the URI identifies the resource, it does not tell where it is
located.

A URN is a URI that uses the urn scheme and identifies a resource within a given namespace. Namespace refers to a group
of names or identifiers (for example, a file system, network, and so on). URNs do not guarantee the availability of a
resource.
HTTP Applied to Web-Based APIs 

Web APIs are a subset of APIs, accessible over HTTP. Web APIs are software concepts that usually consist of one or more
publicly exposed endpoints, their request and response structures, and abstraction of underlying layers.

To communicate with these endpoints in an efficient and standardized way, HTTP request and response messages are
used.

Some web-based API usage examples are:

 Resource manipulation: APIs commonly support create, read, update, delete (CRUD) actions on resources.

 Automation: More and more remote systems can be automated via exposed API endpoints, either sending data
automatically or reacting to some predefined conditions.

 System configuration: A lot of networking equipment can be remotely configured via various HTTP-based
protocols.

 Service management: Web services such as monitoring and provisioning benefit greatly from API usage due to
abstraction and standardization.
HTTP Methods

HTTP methods (sometimes also known as HTTP verbs, although they can also be nouns) are a predefined set of request
methods that represent desired actions that should be performed on the resources. They are used in HTTP requests as a
part of the request line.

Some methods are considered safe because they do not alter the state of the server; they are read-only. HTTP methods
can also be idempotent: they can be called many times while always producing the same result, without causing unintended
side effects on the remote server, such as unwanted resource reservation or deletion, unintended counter increases, and
so on.
HTTP Status Codes

HTTP response status codes are a predefined set of numerical codes that indicate the status of a specific HTTP request in
the response header. Status codes are separated into five classes (or categories) by functionality. You can create your own
status codes, but it is strongly advised that you do not, because most user agents will not know how to handle them.
Following is a brief overview of status code categories and a few of the status codes that you are likely to encounter when
working with web-based APIs. A complete list of status codes is available in RFC 7231, which describes the HTTP/1.1
standard.

1xx (Informational)

 Most codes from this category indicate that the request was received and understood. They usually mean that
request processing continues and alert the client to wait for the final response. They are rarely used.

2xx (Successful)

 200 (OK): Standard response for a successful HTTP request. The information returned depends on the request
method.

 201 (Created): Indicates that a resource has been successfully created.

 204 (No content): The server has successfully fulfilled the request and the response body is empty. A 204 code is
useful when you want to confirm that a POST request was received by the server.

3xx (Redirection)

 301 (Moved Permanently): This and all future requests should be directed to the given URI.

 302 (Found): The requested resource resides temporarily under a different URI.

 304 (Not Modified): Indicates that the resource has not been modified since the version specified by the request
headers. Useful for reducing overhead.

4xx (Client error)

 400 (Bad Request): The server cannot process the request because of a malformed request (bad syntax, deceptive
routing, size too large).

 401 (Unauthorized): The request requires a valid authorized user. It usually means that the user is not
authenticated or that authentication failed.

 403 (Forbidden): The request was valid, but the server is refusing action. The user might not have the necessary
permissions for a resource.

 404 (Not Found): The server has not found anything matching the request URI. No indication is given whether the
condition is temporary or permanent.

 Other status codes include more specific information about the request error.

5xx (Server error)

 500 (Internal Server Error): A generic error message, given when an unexpected condition was encountered, and
no more specific message is suitable.

 501 (Not Implemented): The server does not support the functionality required to fulfil the request.

 503 (Service Unavailable): The service cannot handle the request. It is usually a temporary condition attributed to
a server crash, maintenance, overload, and so on.

 Other status codes include more specific information about the server error.

HTTP Headers

The headers are a list of key-value pairs that the client and server use to pass additional information or metadata in
requests and responses. A header consists of a case-insensitive name, followed by a colon (":") and then its value. There are dozens of
different headers—some defined by the HTTP standard and others defined by specific applications—so only the most
common ones are mentioned here.

There are four distinct types of headers:

 General headers:
1. Headers from this category are not specific to any particular kind of message.

2. They are primarily used to communicate information about the message itself and how to process it.

3. Cache-Control: Specifies caching parameters.

4. Connection: Defines connection persistency.

5. Date: A datetime time stamp.

 Request headers:

1. These headers carry information about the resource to be fetched.

2. They also contain the information about the client.

3. Accept-(*): A subset of headers that define the preferred response format.

4. Authorization: Usually contains a Base64-encoded authentication string, composed of username and
password, for basic HTTP authentication.

5. Cookie: Contains a list of key-value pairs that contain additional information about the current session,
user, browsing activity, or other stateful information.

6. Host: Used to specify the Internet host and port number of the resource being requested. This header is
required in request messages.

7. User-Agent: Contains the information about the user agent originating the request.

 Response headers:

1. These headers hold additional information about the response and the server providing it.

2. Age: Conveys the amount of time since the response was generated.

3. Location: Used to redirect the client to a location other than the request URI.

4. Server: Contains the information about the software used by the origin server to handle the request.

5. Set-Cookie: Used to send cookies from the server to the client. It contains a list of key-value pairs,
called cookies.

 Entity headers:

1. These headers contain information about the response body.

2. Allow: Lists the supported methods identified by the requested resource.

3. Content-Type: Indicates the media type of the body (also called Multipurpose Internet Mail Extensions
[MIME] type), sent to the recipient. Used for content negotiation.

4. Content-Language: Describes the language of the intended audience for the enclosed body.

5. Content-Length: Indicates the size of the body.

6. Content-Location: Used to supply the resource location for the entity when it is accessible from somewhere
other than the request URI.

7. Expires: Gives the datetime after which the response is considered stale.

8. Last-Modified: Indicates the date and time at which the origin server believes the variant was last
modified.
Inspect HTTP Messages
Most popular web browsers have built-in developer tools, which help web developers debug and troubleshoot their web
site or service. The tools also are very useful for understanding the communication that takes place between the client
(your browser) and the server (web server or Representational State Transfer [REST] API endpoint).

In this activity, you will be using Google Chrome DevTools to inspect HTTP requests that are sent when loading a web page
or posting data to a web application. You will also examine the received server responses and identify and inspect various
parts of HTTP messages, like headers, status codes, and cookies.

Start Using DevTools

In this procedure, you will examine the Google Chrome DevTools. First, you will learn about the layout and different
functionalities of the developer tools. You will observe how to identify how the web browser loads the requested
resources and how the web server provides different types of messages.
HTTP Content Negotiation 

HTTP is used to deliver a wide variety of different content that varies in language, size, type, and more. Because supplying
all the content representations with every request is not practical and the remote content format is not always known,
HTTP has provisions for several mechanisms for content negotiation—the process of selecting the best representation for
a given response when there are multiple representations available.

The content is returned based on the various types of "Accept" request headers. Because not all content types are
supported on every server, these headers specify the preferred resource representation. If that representation is not
implemented on the server, the server should notify the client with the 406 (Not Acceptable) status code. However,
depending on the implementation, some servers will instead return a 200 status code with a default resource
representation.
HTTP headers that take care of content negotiation are:

 Accept: This header denotes the preferred media type (MIME type) for the response. A media type represents a
general category and the subtype, which identifies the exact kind of data. A general type can be either discrete
(representing a single resource) or multipart, where a resource is broken into pieces, often using several different
media types (for example, multipart/form-data).

Some useful discrete general types are:

1. Application: Any kind of binary data that does not fall explicitly into other types. The data will be either
executed or interpreted in a way that requires a specific application. Generic binary data has
the type application/octet-stream, while more standardized formats include JSON (application/json) and
XML (application/xml).

2. Audio: Audio or music data (for example, audio/mpeg).

3. Image: Image or graphical data, including both bitmap (image/bmp) and vector still images, and animated
versions of still-image formats, such as animated GIF (image/gif).

4. Text: Text-only data, including any human readable content (text/plain), source code (text/javascript,
text/html), or formatted data (text/csv).

5. Video: Video data or files (for example, video/mp4).

 Accept-Charset: Sets the preferred character sets, such as UTF-8 or ISO 8859-1. It is important when displaying
resources in languages that include special characters.

 Accept-Datetime: Requests a previous version of the resource, denoted by the point in time with datetime. The
value must always be older than the current datetime.

 Accept-Encoding: Sets the preferred encoding type for the content.

 Accept-Language: The preferred natural language. Useful for various localizations.

All these headers support quality-factor weighting.

 It allows the user or user agent to indicate the relative degree of preference for that media range, using the q-
value scale from 0 to 1.

 The default value is q=1.

 A request header that prefers U.S. English over British English but still prefers British English over Indian English
would look like this:

1. Accept-Language: en-US, en-GB;q=0.9, en-IN;q=0.8, *;q=0.7

Here is an example of two HTTP requests and responses that fetch the same content but use different headers for content
negotiation. Note how the different Accept headers produce different response formats and how different Accept-
Language values produce different terms for the same food.
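
As a hedged sketch of the same idea in Python (the endpoint URL is an assumption), the standard library's urllib can send the same request twice with different negotiation headers and print the Content-Type the server chose:

from urllib.request import Request, urlopen

URL = "https://api.example.com/food/42"  # hypothetical endpoint

# First request: prefer JSON and U.S. English.
req_json = Request(URL, headers={
    "Accept": "application/json",
    "Accept-Language": "en-US, en-GB;q=0.9, *;q=0.7",
})

# Second request: the same resource, but prefer XML and British English.
req_xml = Request(URL, headers={
    "Accept": "application/xml",
    "Accept-Language": "en-GB",
})

for request in (req_json, req_xml):
    with urlopen(request) as response:
        # The Content-Type header shows which representation the server selected.
        print(response.headers.get("Content-Type"))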

Server-Driven vs. Agent-Driven Content Negotiation

HTTP provides you with several different content negotiation mechanisms. Generally, they can be split into two groups—
server-driven negotiation (proactive) and agent-driven negotiation (reactive).
Server-driven negotiation is performed with these steps:

1. A client submits a request to a server.

2. The client informs the server which media types it understands ("Accept" header and quality-factor weighting).

3. The server then supplies the version of the resource that best fits the request. Often, redirection is used to point
the client to the correct resource.

Server-driven negotiation does not scale well, so agent-driven negotiation can be used:

1. A user agent (or any other client) submits a request to a server.

2. The server responds and provides the agent with available representations on the server and their locations,
usually with a "300 Multiple Choices" response that depends on the application implementation.

3. The user agent then makes another request to the desired URL for the actual resource.
RPC-Style APIs 

Over time, several different API types have evolved to suit different needs and to solve different types of problems. The
two most commonly used API types are remote procedure call (RPC) API and REST API. In programming, usually more than
one approach to a problem is valid, which is also true for building APIs. But it is useful to know the strengths and
weaknesses of different approaches so that you can pick the best implementation depending on the job. Here, RPC-style
APIs will be introduced.

What Is an RPC-Style API?

RPC APIs are exactly what their name stands for. An RPC API "calls" a remote procedure located in a different address space
in much the same way as it would call a procedure locally. The different address space is usually on another computer,
reachable over a network.

The client sends a call to the server, usually with some parameters, and then waits for the server to return a reply
message. Once the reply message is received, the results of the procedure are extracted and the client execution is
resumed. There are no limitations on concurrency, so RPC calls can also be executed asynchronously.

Each RPC-style endpoint thus represents a remote function call.

Because the procedures are executed remotely, you have to take into account the following:

 Error handling is a bit different than local error handling. All procedure errors should be handled on the remote
server, and a corresponding error should be sent to the client.

 Global variables and side effects on the remote server are sometimes unknown or hidden to a client, because it
has no access to the server address space.

 The performance of remote procedures is worse than the performance of local procedures. In addition to
procedure execution, the client and the server have to take care of overhead, which is caused by the transport.

 Authentication may be necessary, because remote procedure calls sometimes are transported over insecure
networks.

Because RPC API is only a style of building an API, many different protocols have evolved that implement remote
procedure calls.

Simple Object Access Protocol

Simple Object Access Protocol (SOAP) is considered the underlying layer of some of the more complex web services, which
are described with the Web Services Description Language (WSDL), an interface-description format for the functionality
offered by a web service. WSDL makes discovery of, and integration with, a remote web service very straightforward.

In addition to a more strictly defined syntax, it has three main characteristics:


 Extensibility: Features can be added without major updates to the implementation. Security (WS-Security or WSS)
and WS-Addressing can be added to SOAP.

 Neutrality: SOAP is not protocol specific and is able to operate over several different transport protocols, such as
HTTP, TCP, Simple Mail Transfer Protocol (SMTP), and so on.

 Independence: It supports any programming model (object-oriented, functional, imperative, and so on), platform,
and language.

The SOAP specification defines the messaging framework, which consists of four parts: the SOAP envelope, header, body, and fault.

Examples of a SOAP request and the corresponding response:
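
A hedged reconstruction of such an exchange is sketched below; the namespace, operation, and element names (GetUser, UserId, and so on) are assumptions.

Request:

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header/>
  <soap:Body>
    <m:GetUser xmlns:m="http://www.example.com/users">
      <m:UserId>1</m:UserId>
    </m:GetUser>
  </soap:Body>
</soap:Envelope>

Response:

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <m:GetUserResponse xmlns:m="http://www.example.com/users">
      <m:Username>Joe</m:Username>
    </m:GetUserResponse>
  </soap:Body>
</soap:Envelope>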


JSON-RPC

JSON-RPC is a very simple and lightweight RPC protocol encoded in JSON. It defines only a few primitive data types, such
as string, integer, Boolean, and null, and only a few message members, such as method, params, id, and so on.

It also supports notifications (sending data to the server without requiring a response), which are useful for
asynchronous updates, and batch requests (multiple requests inside one request body).
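
A minimal JSON-RPC 2.0 exchange might look like the following sketch (the method and parameter names are hypothetical); the final line is a notification, which omits the id and therefore receives no response:

--> {"jsonrpc": "2.0", "method": "get_user", "params": {"id": 1}, "id": 7}
<-- {"jsonrpc": "2.0", "result": {"id": 1, "name": "Joe"}, "id": 7}

--> {"jsonrpc": "2.0", "method": "log_play", "params": {"track": "So What"}}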

XML-RPC

XML-RPC is a protocol that is similar to SOAP. However, it is less structured and uses fewer constraints than SOAP. In
addition to the basic data types, it also supports some more complex types like Base64, array, datetime, and struct.

It also supports basic HTTP authentication.

Network Configuration Protocol

Network Configuration Protocol (NETCONF) is a network device configuration management protocol that provides
mechanisms to install, manipulate, and delete configurations on network devices. It also provides a mechanism for
notification subscriptions and asynchronous message delivery.
NETCONF consists of four layers:

 Content layer: Contains the actual configuration and notification data.

 Operations layer: Defines a set of base protocol operations to retrieve and edit the configuration data. Basic
operations are <get-config>, <edit-config>, <lock>, <create-subscription>, <kill-session>, and similar.

 Message layer: Provides a mechanism for encoding remote procedure calls. They are encoded in RPC invocations
(<rpc> message), replies (<rpc-reply>), and notifications (<notification>).

 Secure transport layer: Ensures a secure and reliable transport between a client and a server.

The communication between a client and the server is session-based. The server and client explicitly establish a connection
and a session before exchanging data (using XML for encapsulation), and close the session and connection when they are
finished. NETCONF servers are usually network devices like routers, switches, and so on.

NETCONF uses session-based communication.

The basic structure of a NETCONF session is as follows:

1. The client application establishes a connection to the NETCONF server and opens a session.

2. Both the server and the client exchange some information, which is used to determine compatibility and to inform
each other about their capabilities. These represent a set of functionalities that supplement the base NETCONF
specifications.

3. The client then sends one or more requests to the NETCONF server and parses its responses.

4. The client application closes the NETCONF session and the connection to the server.
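
A minimal sketch of this session flow using the Python ncclient library is shown here; the device address and credentials are placeholders.

from ncclient import manager

# Steps 1 and 2: open a session; capabilities are exchanged automatically.
with manager.connect(
    host="192.0.2.1",
    port=830,
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    # Step 3: send a <get-config> request and parse the <rpc-reply>.
    reply = m.get_config(source="running")
    print(reply.xml)
# Step 4: the session and connection are closed when the "with" block exits.
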
gRPC

With the ever-increasing popularity of and reliance on web APIs, Google developed the open source protocol gRPC.

It is built with performance in mind on top of HTTP/2, which is a protocol designed to overcome many of the faults in
HTTP/1.1. The main difference between HTTP/1.1 and HTTP/2 is that the latter supports request multiplexing over a single
TCP connection and uses a binary format instead of plaintext.

A special format is used for data serialization, called Protocol Buffers. This format defines the structure of the message,
which is then serialized to a binary output stream.

An example is shown here. Note that the numbers 1, 2, and 3 are not the actual values but field numbers (keys) that identify
where in the serialized message stream each value resides.

REST-Style APIs 

REST-style APIs are a subset of web API architecture styles in which the endpoints represent resources. When a REST API is
called, the server transfers a representation of the state of the resource to the client. Note that REST is not a protocol.

This style was created with the HTTP standard in mind and uses the HTTP methods as its set of operations, such as GET, PUT,
DELETE, and so on. However, the message format syntax is not as strictly defined as with some other types (RPC), so the
message can be serialized in many different formats. JSON is the most commonly used.

REST is optimized for use on the web. It is known for excellent performance and scalability if implemented correctly.
Coupled with greater simplicity than the other styles, it is no wonder that it has become the go-to API style on the web.

REST Overview

In order for a web API to be RESTful, it has to adhere to the following architectural constraints:
 Client/server architecture: The primary principle behind a client/server architecture is the separation of concerns,
which allows both components to evolve independently of each other as long as the interface between them is not
altered. It is considered a standard practice today.

 Statelessness: No client context needs to be stored on the server in between requests for the communication to
work. Each request from any client holds all the necessary information to service the request. If the session state is
needed, it is stored in the client and transferred to the server when the client creates a new request.

 Cacheability: Some HTTP responses on the web can be cached by the client or an intermediary agent. For a REST
API to support cacheability, all the responses have to define themselves as cacheable or not. Well-managed
caching can eliminate some client-server interactions, which improves performance. However, caching incorrect
responses can cause stale or incorrect data to be received.

 Layered system: A client should not be able to tell whether it is connected directly to an end server or to an
intermediary agent along the way. The intermediary agents can, for example, be a proxy, caching server, or a load
balancer. It improves scalability and allows an extra layer of security to be added.

 Uniform interface: Uniform interface decouples the client from the implementation of the REST service. Individual
resources are identified in the URI of the request, and the client does not need to know how the resources are
represented internally on the server. When a client holds a representation of the resource returned by the server,
it should have enough information to modify this resource. Server messages should be self-descriptive; therefore,
they should contain enough information for the client to be able to process them. The API should also conform to
Hypermedia as the Engine of Application State (HATEOAS). The client using the REST API should be able to use
server-provided links dynamically to discover all the available resources and actions it needs.

 Code on demand (optional): Servers should be able to temporarily extend or customize the functionality of a
client by transferring some code to them. An example of extending functionality would be running Java applets
or JavaScript scripts.

RESTful APIs leverage HTTP to define the operations on the resources. While the actual operations on resources can vary a
bit depending on the context and implementation, you cannot expect to use HTTP DELETE requests to create resources
and call the API RESTful.

The example made-up API is defined by the following parameters:

 A URI that defines the resources—www.example.com/api/users


 A standard set of HTTP methods—GET, POST, PUT, PATCH, DELETE

 Representations of the resource defined by the media type

The following table shows a typical use of HTTP methods on two different resource types. As you can see, using different
methods produces different results, and all the actions are executed on a resource. Where an RPC-style API would call an
RPC such as "/getAllUsers," a REST-style API creates a GET request on the "Users" resource.

Many APIs only support read-only operations (GET) on collection resources and only allow modification through member
resources.

The POST method can also be used for non-CRUD requests on functional resources. For example, an HTTP POST call on API
endpoint /users/bob/logout might be used to log out the user from all their active sessions.
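
As a hedged sketch of such a non-CRUD call with the Python requests library (the URL is the made-up example endpoint):

import requests

# POST to a functional resource: log out the user "bob" from all sessions.
response = requests.post("http://www.example.com/api/users/bob/logout")
print(response.status_code)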

REST Example

Basic HTTP request creation and response parsing are pretty straightforward in most programming languages. In this
example, you can see how to use Python and its requests library to create a simple HTTP GET request to a member
resource—a user with an ID of 1 (/users/1)—and print the response contents.

The response content is returned as plaintext, so you have to deserialize the string into an object with the json library.
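
A minimal sketch of that example, assuming the API is served from the www.example.com address used earlier:

import requests
import json

response = requests.get("http://www.example.com/api/users/1")
user = json.loads(response.text)   # deserialize the plaintext body with the json library
print(f"User 1: {user}")           # f-string formatting, hence the Python 3.6 note below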

Note

Python version >= 3.6 is required for string formatting that uses fstrings. You can find out more
at https://www.python.org/dev/peps/pep-0498.
In the following example, you can see that the resource in the URI is a collection resource, a set of all the users—/users. In
this case, the result is an array of objects.
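
A corresponding sketch for the collection resource, under the same assumptions:

import requests
import json

response = requests.get("http://www.example.com/api/users")
users = json.loads(response.text)  # the result is an array, deserialized into a Python list
for user in users:
    print(user)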

REST vs. RPC

Both REST and SOAP APIs are considered the most widely used in the world of web services today, and no approach is
inherently better than the other. It all depends on the problem you are trying to solve and the resources you have
available.

No API style is inherently better than the others. It all depends on the problem you are trying to solve.
Postman for REST API Consumption 

Postman is an API development environment that allows you to test, debug, monitor, document, and publish your APIs.
You can easily create a REST API request and examine the API response through the simple GUI, without having to use a
more complex method like a programming language library or command-line utility. Postman provides a set of tools that
make it easy to get started with developing your first API and improving your knowledge of REST and HTTP with a learning-
by-doing approach.

Here are some of the main features of Postman:

 Test and debug your API: You can create arbitrary HTTP requests, which help you verify the way your API behaves
with different request parameters. You can test API authentication methods, simulate client inputs, and verify that
your API provides appropriate response parameters, like HTTP status codes, response body, and cookies.

 Organize and reuse API calls: The Postman interface provides several organizational elements. A Workspace is a
view of all Postman elements such as Collections, Monitors, and Environments. It also provides an easy way for
teams to collaborate on developing and testing APIs. Team Workspaces can be shared between groups of users
(teams), while Personal Workspaces are only visible to the owner. You can also share specific elements between
different Workspaces and set specific permissions. Team members also have access to the full request history of all
the Collections in a Workspace. All changes or updates to any Workspace element are synced in real time. A set of
key-value pairs can be saved as an Environment. Environments let you customize requests using variables so that
you can easily switch between different setups without changing your requests. Saved requests can be organized
into Collections and Folders.

 Run automated tests or tasks: With the Collection Runner tool, you can create workflows that simulate your API
use cases. You can send all requests in a Collection or Folder in an arbitrary sequence and even pass data between
requests. That means, for example, you can create a workflow, where the first request authenticates to the API,
receives an authentication token, and then the workflow uses that token to execute additional requests. With the
Monitor tool, you can execute these workflows periodically so that you have a view of the performance and
responsiveness of your API. Monitor also generates reports, which can be sent by email.

 Document and publish: Postman provides a very easy way to automatically document your APIs. You can generate
the documentation for a selected Collection. The documentation includes the Collection, Folders and requests
descriptions, saved requests with all the parameters (headers, body and so on), and code examples. Your
documentation is formatted as a web page, published on the Postman servers, and can be accessed publicly. The
documentation is created with the variable values from the currently selected environment. All changes to the
Collection are synced to the documentation in real time so that you do not have to manually republish.

 Create sample code: Another very useful Postman tool is the Code Generator, which creates code samples for
your request in several popular programming languages. You can simply reuse the code in your client application
or use it with a utility, such as cURL or Wget.
When you are signed in with your Postman account, your data is synced across different devices. You can also access and
manage your account and entities, like Workspaces, Collections, and Environments, from the Postman web site.

Postman is available as a standalone application for Windows, macOS, and Linux. It can be downloaded
from http://www.getpostman.com. The website also features extensive documentation and video tutorials.

GUI Overview

Postman provides an adaptable multitabbed user interface. From the header toolbar, you can create new entities, switch
Workspaces, view the sync status, and manage your Postman account. The request builder and the response pane can be
displayed horizontally or vertically in a two-pane view selection. The sidebar can display the history of sent requests or the
list of Collections. Recently run requests are displayed in tabs. Unsaved requests are marked with a green dot. You can
duplicate requests or drag and drop them between Folders and Collections. Postman can be run in multiple windows; for
example, you can work in two Workspaces in separate windows at the same time.

In the following figure, the numbers mark these user interface parts:

1. Sidebar

2. Request tabs

3. Request builder

4. Response pane

Composing an HTTP Request

An HTTP request consists of a method, a URI, optional HTTP headers, and an optional request body. In Postman, you can
construct arbitrary REST API requests.

Different types of authentication, such as basic HTTP authentication, OAuth 2.0, and API keys, are supported. You can also
import certificates, which can be used to establish a Secure Sockets Layer/Transport Layer Security (SSL/TLS) connection to
the API server.

You can specify HTTP request headers, where the commonly used standard headers can be entered with the autocomplete
function. Adding custom headers and values is also possible.

If your request contains some body content, you can enter it in the raw text format or use the input forms that Postman
provides. The form-data and x-www-form-urlencoded options simulate filling and submitting a web form. You enter the
key-value pairs in the same way you would fill a form on a website. The difference between these two options is that form-
data adds the multipart/form-data Content-Type header, which allows for file uploads. If you choose the raw option, you
use the Content-Type drop-down list to specify the type of content you entered in the request body pane, like JSON or
XML. The syntax highlighting adapts, and the appropriate HTTP Content-Type header is attached automatically.

You can also add cookies to your request. You specify the cookie values and domain for which you would like to use that
cookie. You can use multiple cookies for each domain.

Composing an HTTP Request (Cont.)

In the response pane, you can verify the response that the client received from your API. You can view the HTTP status
code, the response headers and body, and any cookies that were received. If you select the Pretty option in the response
body, the response is formatted and highlighted according to the received Content-Type header. If the response is HTML
formatted, you can also view it as a web page by choosing the Preview option.

You can also verify the size of the response and the response completion time, which can help you troubleshoot your API
performance or connectivity issues. You can save a response as a text file or as an Example for your saved request.
Examples contain a specific request with the corresponding response. You can save multiple Examples for each request.
Saved Examples can also be viewed in the published documentation of your Collection.
You can use several methods to create new requests in Postman: from the launch screen at Postman startup, from the
header toolbar, or by selecting the Add Request option from your existing Collection.

Use Postman 

Postman simplifies testing and troubleshooting your API, so it can really speed up the API development process. Instead of
you having to create scripts to make API calls, and then modify code each time you wish to test different user input or test
your API on a different environment, you can use Postman to create and reuse requests. Postman also allows users
without software developer skills to make their first API requests.

In this activity, you will get to know the Postman GUI and learn how to create requests and inspect responses. You will use
the already familiar API server dev.web.local. You will examine the Postman documentation for the API and perform some
requests that you already know how to execute with your web browser. This time, you will perform them by creating
requests in Postman.

Examine API Documentation

In this procedure, you will examine the documentation of a Postman collection that has been published online. Later, you
will recreate one of the requests.
You can see the created request named Server Status, which is also visible and selected in the Request tabs bar. Notice
that the HTTP request method is currently set to the Postman default value, which is GET.

Configure a Request and Examine the Response

In this procedure, you will learn how to set request parameters, add different HTTP request headers, and see how that
affects the way the server responds.
You have sent an HTTP GET request to the web server on dev.web.local/status. You did not type the protocol part of the
URL, so Postman used HTTP, the default protocol, to send the request.
Look at the Response panel. The server responded with the HTTP response status code 200 OK. The response body is some
HTML code, which could be rendered as a web page, containing the server status information; this is fine if a user is
making this request with a web browser, but it is not useful if the client is some script that would like to parse the
information and process it further. Some other format, like JSON, would be more appropriate. In the Server Operations API
documentation, you saw that there were two different response examples for the Server Status request; one of them was
a JSON-formatted string.
With no specific Accept request header set, the server is configured to provide text/html content.
The Accept request header is no longer present in the Temporary Headers section, but was set to application/json. The
server responded with the Content-Type response header set to application/json.
You can confirm that the server really responded with a JSON-formatted string in the body.

Advanced Postman Topics 

Postman allows you to organize your API calls into organizational elements, which can be reused and shared. Using
variables for request parameters enables you to run the API calls on different environments without changing the code of
the request.

Collections

Saved requests can be organized into Collections. Requests, grouped in a Collection, would typically be part of the same
project and share some common values, like the URI or authentication method and credentials. You can create variables
with a Collection-wide scope and choose if a specific request inherits this variable value from the Collection. Grouping into
Collections also makes the Workspace much cleaner and easier to manage. You can further organize saved requests into
Folders inside Collections. You can also generate documentation for your Collection. Collections can be exported as JSON
files.
Variables

Variables are used to store dynamic information. Variables in Postman enable you to easily reuse API requests. For
example, instead of hardcoding your API server IP address or FQDN, you can use a variable to reference it. If you move the
API to a different server, instead of having to change all your requests, you just change the variable value once, in the
scope where the variable is defined.

You can use variables in different elements in Postman, such as the URI, authorization parameters, headers, URL-encoded
body keys and values, and so on. In Postman, variables are defined by a variable name inside curly
brackets: {{variable_name}}. When you run an action in Postman, all occurrences of the variable name will be replaced
with the matching variable value. Variables can have different scopes inside Postman, depending on where they are
defined, from broader to narrower scope:

 Global

 Collection

 Environment

 Data

 Local

If variables with the same name are defined in two different scopes, the value of the narrower scope takes precedence.
For example, if you have a variable {{variable_name}} defined with different values inside the Collection and inside the
Environment, then all occurrences of {{variable_name}} in your request will get replaced by the value defined in the
Environment.

When you create a variable in Postman, you specify the variable name, the initial value, and the current value. When
Postman resolves a variable, for example, when a request is sent, the current value is used. When you share a Collection,
the variables are replaced with the corresponding initial value, which means that you can safely share requests that
contain private information like API keys or login credentials.

Postman also features autocomplete for inputting variables. The function is invoked when you enter a curly bracket and
provides a list of available variables, their values, and scope.

Refer to the following figure. The example shows the list of variables defined in a Collection (1). These variables are then
used in building a request (2) in the URI, the form body, and the authorization header. The example also shows the HTTP
headers and body that were sent to the API (3). You can see that the variables were substituted with their current values.
Note that the {{server}} variable (the Host part of the HTTP request) was not replaced with the current value from the
Collection scope but was overridden by the value defined for the {{server}} variable in the Environment.
Environments

In Postman, variables enable you to reuse values in different places in the Workspace. A set of variables and their values
represent an Environment. A common use case would be to use the same Collections and Monitors for development,
staging, and production APIs. You can simply change the Postman Environment from the drop-down list, and all the
variables in your requests referring to server FQDNs, listening ports, credentials and so on will get set with appropriate
values. That means that your saved requests would connect to APIs on different servers with different credentials without
having to manually change any of the request parameters. Environments can be shared and exported as JSON files.

Code Generator

After you have tested and verified that your API behaves as intended, you can incorporate the created API calls in your
client application. Postman provides a very useful code generator tool that creates code snippets, which you can reuse in
your application. The executed code will send the same HTTP request that you created in Postman, including URI, headers,
cookies, and body. Some of the supported languages and frameworks are Node.js request, Python requests, jQuery AJAX,
and cURL. You can also generate the content of a raw HTTP request.

The code generator can be run from the request builder. All the code examples also can be viewed in the published
documentation.
Troubleshoot an HTTP Error Response 

In this activity, you will learn how to use collections and variables in Postman and use Postman to troubleshoot an API. You
will import a Postman collection and run existing HTTP requests, which produce different error response codes. You will
use the existing API documentation and your knowledge of HTTP and APIs to correct the requests. You will use the API
server dev.web.local.
Use Postman Collections

In this procedure, you will import an existing Postman collection and examine the different features that the collections
provide. You will learn how to use variables and inherit request parameters from the collection.
There are two variables set—token and server. The server variable value is set to dev.web.local, and the token variable is
set to "foo". When you import a collection, the current values of the variables are copied from initial values.
This is the raw HTTP request that was sent to the server. You can see that the request was sent to host dev.web.local,
which is the value of {{server}} variable. You can also see that the Authorization header is set to the parameters specified
in the collection properties, including the value of the {{token}}.
The server responded with the response status code 400 BAD REQUEST, which indicates that the server could
not process the request because of a perceived client error—for example, a malformed request syntax. The
response body indicates that the server could not decode the posted JSON data.
Fix a Broken Request

In this procedure, you will repair both broken requests so that they produce an OK response code.
Consuming Notification Events Using Webhooks 

APIs are a great tool to use if you want to get some data across the Internet exactly when you want it. But, if your only
interest is in the changes of this data, APIs become quite impractical. An asynchronous method of data retrieval is needed
in that instance.

Web content is becoming less static; more and more things that we do on the Internet can be described as events. Regular
API calls are not very effective at tracking events and changes. Each API call only retrieves the data from the server, and
then it is up to the client to determine the state or changes that happened.

The more frequent those API calls are, the more up-to-date data you have. This approach is somewhat practical when there
are a lot of events and changes, but not so much otherwise.
Creating periodic API calls for the same set of data is called polling and is in general very wasteful. Because constant polling
can use up server resources and a sizeable chunk of bandwidth, a more practical solution is needed. This is where
webhooks come in.

Webhooks are tools that allow you to react to events and changes in an application. They are commonly referred to as a
"reverse API" or a "web callback." While regular APIs feature a synchronous request and response cycle, webhooks use the
server to asynchronously notify the client about the changes. The client is notified only when an event happens, which
means that the total amount of requests is greatly reduced, and the client is notified as close to real time as possible.

The comparison of webhooks and polling is similar to the push-pull terminology. Webhooks are analogous to the push
effect (because they "push" new information to another entity), and the polling stands for the pull effect (because they
"pull" the information to themselves). Webhooks are therefore a push version of an API.

So how do webhooks work?

They work in reverse from regular APIs and invert the server/client concept. Instead of the client creating a request toward
the server, a client subscribes to a specific webhook. This is done by creating a POST request to the webhook endpoint.
The POST request usually contains more data, such as authentication data, which events the client wants to subscribe to,
which address and port should be notified, and other data. This puts the client on a list of subscribers for particular events.
When those events happen on the server, another POST request is used to notify the client and provide them with event
data. This is called a webhook notification and it needs to be caught by the client. These notifications can contain the data
of the event or just a reference to this data, which can then be read over a regular API.

Consuming a webhook requires detailed knowledge of the API. In contrast to polling the API, where, much like pressing a
reload button, each poll produces a response (even if it is an error), webhooks only become active when they have to. Some
webhooks will pay attention to the client response and will retry sending the data, while others will fire the request and
then forget about the data. This is why it is important to thoroughly test and debug the webhook, using tools like RequestBin
or ngrok.

As with APIs, webhooks present another attack target for potential attackers. Security, authentication, and authorization
mechanisms are the same as with regular APIs; use HTTPS for encryption, use different access levels for different events,
delegate credentials to users via a secure channel, and so on.

In the real world, you would commonly subscribe to a public webhook from a secure enterprise network that features a
firewall, which blocks most of the incoming requests. Therefore, your machine usually is not accessible from the public
web. To avoid having to expose your machine to the world, a common practice is to use webhook forwarding services or
reverse HTTP proxies that securely deliver the webhook notifications to the clients.

Take a look at a simple example of how webhooks can be used to integrate different products. Imagine that you are a
software developer in charge of integration between Cisco Webex Teams and a ticketing system. Your company uses
Webex Teams for internal communication as well as for communication with customers. The support department uses a
dedicated room for reporting errors, because the customers do not have access to a ticketing system. Each time a new
message gets posted into that room, a new ticket needs to be automatically opened in the ticketing system.

Webhooks are used in many areas:

 Marketing (newsletters, advertisement distribution)

 Monitoring (listen for alarms and errors on other systems)

 Social networks (notifications)

 Many other event-driven services

Cisco Webex Teams features a clean and useful API, so the integration will be done over it. You could implement this
integration by creating an agent that sequentially polls the API, compares the current room content with the previous, and
figures out if there were any new messages. Webex Teams also features webhooks in its API.

First, you need to subscribe to a webhook. With Webex Teams, this is easy enough:

1. Authenticate your request, just as you would when consuming the API.

2. Pick a name for your webhook subscription.

3. Set the target URL. This URL (or IP address) will be notified when the webhook triggers.

4. Select the subscription resource. This, together with the event parameter, determines about which events you will
be notified.

5. Add filters. In this case, you only care about messages, posted in a specific room, so you add a room filter.
Because Webex Teams uses a REST API, a POST request modifies a resource. A GET request on the same endpoint should
return a list of all your subscribed webhooks.
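
A minimal sketch of such a subscription request in Python; the access token, target URL, and room ID are placeholders, and the endpoint assumes the public Webex API base URL.

import requests

url = "https://webexapis.com/v1/webhooks"
headers = {"Authorization": "Bearer <access-token>"}         # step 1: authenticate
payload = {
    "name": "Support room to ticketing system",              # step 2: subscription name
    "targetUrl": "https://tickets.example.com/webhook",      # step 3: URL to notify
    "resource": "messages",                                   # step 4: subscription resource
    "event": "created",                                       # step 4: event parameter
    "filter": "roomId=<support-room-id>",                     # step 5: room filter
}

response = requests.post(url, headers=headers, json=payload)
print(response.json())

# A GET on the same endpoint returns the list of subscribed webhooks.
print(requests.get(url, headers=headers).json())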

The second step is catching the notifications, which can be done by a simple listening agent or as a part of the ticketing
system. You can also find many ready-to-go containers and other products on the Internet that feature webhook listeners
(for example, the Python Flask framework or JavaScript Node.js framework).
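
A minimal sketch of such a listening agent using the Python Flask framework; the route path and port are assumptions.

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def catch_notification():
    notification = request.get_json()
    # The notification references the new message by ID; the message text
    # itself has to be fetched with an additional GET request to the API.
    print(notification)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)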

With the listening agent set up, you can now listen to incoming webhook notifications.

The following figure illustrates how a notification from the previously mentioned Webex Teams support room would look
like.

Note the following elements of the POST request:

1. The text itself from the chat has not been added to the HTTP request. To get the message that has been posted, an
additional GET request has to be made to get the content by its ID.

2. If you needed to find out more about the person opening the ticket, you could create a GET request for the person
with the same person ID.

After the notification has been caught and parsed, a new ticket can be opened with this data, and thus, the two products
integrated.
Section 4: Summary Challenge
Section 5: Consuming REST-Based APIs

Introduction 
While Representational State Transfer (REST)-based HTTP application programming interfaces (APIs) do not use such
strictly defined specifications and rules as some other API types do (for example, Simple Object Access Protocol [SOAP]),
good practices still exist for consuming them.

"Consuming an API"  is a term that means not just coding a specific request and parsing the response, but includes all the
extra functionalities built around that basic feature, like authentication, security concerns, constraints, and certificate
handling.

Common API Constraints 

A REST API endpoint can rarely be consumed just by making an HTTP request toward it. Commonly, the API consumer has
to identify itself with various authentication and authorization techniques, take into account any availability issues and
potential service outages, and sometimes also needs to specify additional parameters to get the content it requires. It all
depends on the design of a particular REST API, though, so using good practices when implementing an API is important.

API endpoints are commonly used to return various lists of data. These lists can be fairly short or incredibly long. HTTP
responses that are used to return this data can be limited in size by the server, so the amount of data that can be returned
with a single API call is not unlimited. This fact creates a need to split up the returned content into more manageable
pieces. Also, larger responses take more time to get to the destination or to be processed, which is not something you
want to handle if you can avoid it.

Imagine you have an API endpoint, called /users, that returns a list of (several thousand) users for a set of services. An
HTTP GET request toward that endpoint might return all the users in a single response. However, if the size of the
response is limited (for example, with a Content-Length HTTP header or by the server max HTTP response body size) and
the actual response body exceeds that size, the response should contain an error (in this case, a 500 Internal Server Error
or 413 Request Entity Too Large error).

The following example shows how content can be broken up by using URL parameters by setting the page size as a limit
and the page number as the offset. This technique is called offset pagination and is very simple to implement, because it
mostly involves safely passing those parameters to the Structured Query Language (SQL) query as shown.
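
A rough sketch of offset pagination from the client side; the parameter names (limit and offset) and the server-side SQL mapping are assumptions.

import requests

params = {"limit": 20, "offset": 40}    # page size 20, requesting the third page
response = requests.get("http://www.example.com/api/users", params=params)
print(response.json())

# On the server, these parameters would typically be passed (safely) into a query such as:
#   SELECT * FROM users LIMIT 20 OFFSET 40;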

As you can see, this type of pagination is stateless, because all the data needed is contained in the request. It becomes less
useful for larger data sets, because an offset of 1 million rows would mean the query will comb through 1 million rows
before finding the data it needs.

Sometimes, the API takes care of pagination by itself and provides links to relevant pages in a response. These links
commonly contain references to itself, the next page, and previous page. If the reference to the next page in those cases is
either empty or null, you can presume you are currently viewing the last page.
Not all APIs will use pagination, though. It depends on the implementation of a specific API. In the case of Cisco, pagination
is a part of the Cisco Web API standards and is implemented with RFC 5988.

A concept that expands on pagination is filtering the content, which also makes use of URL parameters. Similarly, content
can also be sorted with parameters.

The following examples show how content can be retrieved from an API by a specific parameter or by a condition, or how
it can be sorted in a specific way.
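
A hedged sketch of filtering and sorting with URL parameters; the parameter names and values are made up, because they depend on the specific API.

import requests

params = {
    "country": "SI",         # filter by a specific attribute value
    "age_gt": 30,            # filter by a condition (age greater than 30)
    "sort": "-last_name",    # sort descending by last name
}
response = requests.get("http://www.example.com/api/users", params=params)
print(response.json())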

All these operations save the time and processing power of the client consuming the API. If the client was in this case
receiving all the users every single time and then parsing the results client-side, the process would be pretty inefficient,
especially with larger data sets.

Ideally, your API should be running as smoothly and stably as possible. This is generally true if your API was designed well,
has enough resources to handle all the client requests, and there is no malicious activity happening. However, when there
is a sudden increase in API usage, or someone is intentionally trying to make the API misbehave, a mechanism is needed
to slow down the number of requests being made.

This technique is called rate limiting. It effectively limits the number of specific requests that can be fulfilled by an API. This
number varies depending on the use of the API or even between the different endpoints.

Rate limiting can be implemented either on the server-side or on the client-side.


Client-side rate limiting is usually implemented in the client application and limits the client from performing a large
number of tasks that are costly for the API itself (with a timer, loading bar, or something similar). Server-side limiting is
somewhat more effective, because client-side rate limiting is useful only if the API is used as it was intended. Limiting API
calls on the server side can prevent denial of service (DoS) attacks that intend to disable the API by flooding it with a huge
number of requests. It can also prevent misuse of sensitive or destructive API calls; for example, while renaming multiple
resources is a normal operation, rapidly deleting multiple resources might be a sign that the specific client has been
compromised. Sometimes, API rates are limited (or throttled) simply because the number of users has increased a lot in a
short amount of time and the API needs to be scaled up before it is running at optimal efficiency again.

Requests are commonly limited on a per-user basis (for example, each user can create 1000 API calls per day), and it is not
unusual to see API vendors offering premium packages, which increase the daily allowance for that user. Requests also are
usually limited per second for security reasons (DoS). Some APIs are also limited regionally; for example, a web store
located in Italy might become suspicious if there is a sudden surge of API calls originating from a non-Italian-speaking country.

API calls can be throttled with:

 HTTP headers: Headers like X-RateLimit-Limit and X-RateLimit-Remaining are used to keep track of the number of
used and remaining API calls for a period of time.

 Message queues: Incoming API calls can be put into a queue, which makes sure the API endpoint itself is not
overloaded.

 Software libraries and algorithms: Many libraries and algorithms have been created for the purpose of rate
limiting, such as leaky bucket, fixed window, and sliding log.

 Reverse proxies and load balancers: Load balancers and reverse proxies (like NGINX) feature rate limiting as a
built-in feature.

When the API call limit has been reached for a particular user, the user should be notified of that, using a 429 Too Many
Requests HTTP response header. This response commonly comes with a Retry-After header that tells the client how long
they have to wait before trying again.
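
A minimal client-side sketch of honoring these headers; the header names follow the X-RateLimit-* convention mentioned above, but the exact names vary between APIs.

import time
import requests

response = requests.get("http://www.example.com/api/users")
print(response.headers.get("X-RateLimit-Limit"))
print(response.headers.get("X-RateLimit-Remaining"))

if response.status_code == 429:
    # Wait for the period suggested by the server, then retry once.
    retry_after = int(response.headers.get("Retry-After", "60"))
    time.sleep(retry_after)
    response = requests.get("http://www.example.com/api/users")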

Even though APIs are expected to have a high uptime, their service will sometimes be unavailable to the client. Possible
causes include the API being overloaded with requests, the network connection between the client and the server
experiencing difficulties, some updates being performed on that system, and so on. Instead of just making an API call, it
makes sense to expect these kind of problems by checking the status (or health) of the API. This check can be done by
sending an HTTP request to a specific API endpoint (something like /api/status), which replies with a lightweight response.

In case the API is down, you might get an HTTP response from the 4xx or 5xx category (most commonly, 404 Not Found or
500 Internal Server Error).

If an error happens, it also makes sense to wait for a short while instead of immediately retrying the API call, because most
errors on the server might not be resolved instantaneously.

The following figure is a very basic snippet of code that a client might use to check the API health.
Here is a simplified example where the client retries reading users 10 times with 30-second intervals in between. On each
try, the client checks whether the response has HTTP status code 200, because that means the request was successful and has not timed out.
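
A minimal sketch of that retry loop, assuming the made-up /users endpoint:

import time
import requests

users = None
for attempt in range(10):                  # retry reading users up to 10 times
    try:
        response = requests.get("http://www.example.com/api/users", timeout=5)
        if response.status_code == 200:    # success: the request did not fail or time out
            users = response.json()
            break
    except requests.exceptions.RequestException:
        pass                               # connection error or timeout; wait and retry
    time.sleep(30)                         # 30-second interval between attempts

if users is None:
    print("The API did not return the users within 10 attempts")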

Note that even though you are making a request toward the API endpoint named "users," it does not guarantee that the
returned response indeed contains the users object. Response content should always be double-checked and not just
inferred.

There are several different approaches to the length of the timeouts. The most popular are linear timeout (for example, try
it every 60 seconds) or exponential backoff (for example, try it after 1, 2, 4, 8, 16, or 32 seconds, up to a certain maximum).
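
For example, the exponential variant could be sketched as a delay that doubles after every failed attempt, up to a chosen maximum:

import time

delay, max_delay = 1, 32
for attempt in range(6):
    # ... try the API call here and break out of the loop on success ...
    time.sleep(delay)
    delay = min(delay * 2, max_delay)      # 1, 2, 4, 8, 16, 32 seconds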

Timeouts produce HTTP errors like 408 Request Timeout or, if the server is acting only as a proxy, a 504 Gateway Timeout
HTTP error. More generic timeouts (for example, the API endpoint is not even connected to the network) may also include
the 404 Not Found error.

Some API endpoints (or API gateways that handle the distribution of API calls) that are used for the purpose of uploading
data (for example, images on social media platforms or large XML documents) can feature payload limits. By limiting and
reducing the size of the body within the HTTP request sent by the client, you reduce the chance of that request being
corrupted while transporting, increase the speed with which it can be processed, and limit the damage that a malicious
user can inflict by abusing large request bodies (for example, forcing the server to go out of memory, because requests are
kept in memory).

If a payload limit is exceeded, the server should return a 413 Request Entity Too Large error.
API Authentication Mechanisms 

API endpoints are regularly exposed to the public Internet in one way or another, so they are not something that should be
left unchecked for the public to use. Not only do you have to prevent sensitive data being exposed to unauthorized users,
APIs are also commonly a target for other types of attacks. Therefore, it is imperative that only authenticated users have
access and permission to use them. Several authentication mechanisms are in use today.

Before talking about different API authentication mechanisms, a distinction has to be made between authentication and
authorization, because the two terms often get mixed up or used interchangeably. Many API platforms also offer
authentication and authorization as a combined existing feature, so the line between the terms is sometimes thin.

Authentication is the act of proving the identity of someone. Each identity is a unique combination of properties (names,
IDs, and so on) that in turn represents a unique entity—a person, user, product, and more.

Authorization, on the other hand, specifies the rights and privileges that a specific entity has over resources; it defines the
access policy for an entity.

Take a look at a driver license, which is an example of both authentication and authorization. It authenticates you as a
person and authorizes you to operate certain vehicles. In the context of the information technology, authentication makes
sure that you are who you say you are, and authorization makes sure that you have the right to the data that you are
trying to access.

Communication between the API consumer and the API server happens with the exchange of HTTP requests and
responses, so it makes sense for authentication to use some of the existing HTTP mechanism. Today, most of the REST APIs
use HTTP headers to store various authentication data.

REST APIs use HTTP to communicate, so it makes sense to use HTTP headers to store authentication data.

While the implementations may vary, three general approaches exist:

 Basic HTTP authentication: Uses built-in HTTP authentication.

 API key authentication: Client adds a pregenerated key to HTTP headers.

 Custom token authentication: A dynamically generated token is used.

Sometimes, publicly available APIs require no authentication (for example, weather data, government registries, and other
data available to the public), but they are usually rate limited by network address or aggregated client information.

API key authentication uses a unique, pregenerated, cryptographically strong string as the authentication key, which is
encoded with the Base64 encoding scheme so that it can be transported using HTTP.

The key itself is stored either in the HTTP request headers as a cookie or as an individual header, or as a URL parameter.
When the API request arrives at the server, the key is first parsed from the request, decoded from Base64, and compared
to a database table containing valid keys.

These authentication keys are generated on the server on demand by the API administrator. A common practice is to
generate the keys on a per-user or per-service basis. It means that you should avoid using a single key that has
authorization for every possible API action and using it for everything. The scope of the keys should be limited to their
intended use.

API keys are usually only given to the users at the time of their creation for security reasons. If a user has to figure out
which key does what later, they can still identify the keys by their prefixes; for example, only the first five characters of the
key are displayed, which mostly eliminates duplicates but still guarantees uniqueness. If the key gets lost, a new one can
easily be generated. This authentication method is not as safe as it could be. Because the authentication key is an
immutable string that stays the same until it is deleted from the server, it can be intercepted by an attacker and used to
create API calls in the name of the user for whom the key was generated.

The following example shows how the generated keys look on a Cisco Meraki platform and how the said keys would be
used as an URL parameter or as an HTTP cookie. Using the URL parameter to send an API key is not a good practice,
though, because the key can get exposed much more easily.

Here is another example of how you would actually create a new user, using API key authentication, implemented in
Python with the help of the Python Requests library.

Note how the API key is added to the request as an HTTP header.
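
A hedged sketch of such a request follows; the header name (X-API-Key), the key itself, and the endpoint are placeholders, because each API defines its own.

import requests

url = "http://dev.web.local/api/users"
headers = {
    "X-API-Key": "aBcD1234...",            # pregenerated, Base64-encoded key
    "Content-Type": "application/json",
}
payload = {"username": "alice", "email": "alice@example.com"}

response = requests.post(url, headers=headers, json=payload)   # create a new user
print(response.status_code)
print(response.text)
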
If the API key is invalid or no longer valid, a 401 Unauthorized HTTP error will notify you. If the key is valid but just
not authorized for the resource you are trying to access, a 403 Forbidden error should appear. Because the
implementation of error codes is mostly up to the developer, you might encounter more generic HTTP codes for such
errors with some APIs (for example, just returning a code 400 or even 200 and then appending a message such as
"Authentication unsuccessful" as the response body).

The problem with both the basic HTTP authentication and API key authentication is that the authentication details have to
be sent (and therefore processed and compared) with every API call. This action creates a weak point in security, because
every request contains this data and is thus a viable attack surface.

Custom token (or access token) authentication solves this problem by using a custom token instead of this authentication
data. In addition, the token is then commonly used for authorization, too.

The process of authentication is as follows:

1. The client tries to access the application.

2. Because the client is not authenticated, the application redirects it to an authentication server, where the user provides
the username and password.

3. The authentication server validates the credentials. If they are valid, a custom, time-limited, and signed
authentication token is issued and returned to the user.

4. The client request now contains the custom token, so the authenticated request is passed to the API service.

5. The API service validates the token and serves the required data.
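
A minimal sketch of this flow from the client's perspective; the URLs, field names, and credentials are assumptions.

import requests

# Steps 1-3: exchange the credentials for a time-limited, signed token.
auth_response = requests.post(
    "http://dev.web.local/auth/token",
    json={"username": "alice", "password": "secret"},
)
token = auth_response.json()["token"]

# Steps 4-5: present the token on subsequent API requests.
response = requests.get(
    "http://dev.web.local/api/users",
    headers={"Authorization": f"Bearer {token}"},
)
print(response.json())
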
The authentication server is often an external service (for example, OpenID) that is used in combination with an
authorization mechanism (for example, OAuth 2.0). The tokens themselves often contain some authorization data as well.
Authentication servers are commonly used to issue tokens that can be used with many different services from different
vendors (for example, using Google authentication to log in to an unrelated site that has implemented Google sign-in).

The term that is commonly used for this type of authentication (and authorization) is single sign-on (SSO). SSO is gaining in
popularity, because it is very useful for services that require third-party authorization (for example, sharing an article from
a news site on a social media account) or for businesses that offer several different services but want users to use the
same identity for all of them (for example, Google Apps, your company enterprise environment, and so on).

The pros of custom token authentication include:

 Faster response times: Only the token (and maybe its session) has to be validated instead of the credentials.
Sessions are often kept in very fast in-memory databases (for example, Redis), while credentials are kept in
relational databases on nonvolatile storage.

 Simplicity of cross-service use: This reduces the number of the authentication systems needed in an organization
and unifies authentication and authorization policies across the services.

 Increased security: The credentials are not passed with every request, but rather only once in the beginning and
periodically after that when the token expires. This way, it reduces the chance of success and viable time frame for
any man-in-the-middle or cross-site attacks.

While offering a step forward in most areas of authentication, custom token authentication is a bit more complex than the
other types. There is also less granular control over individual tokens, because in case the private key of the authentication
server gets compromised, all the tokens must be invalidated.

Using HTTP Authentication 

HTTP provides a framework for basic authentication and access control, which is often a good enough solution for simple
APIs.

To understand how you can use the HTTP authentication in your API, it is important to know how HTTP authentication
actually works. When an unauthenticated (and consequently unauthorized) client makes a request toward the HTTP
server, the server in turn challenges the received HTTP request. The challenge is done by using an error code (401
Unauthorized) and the WWW-Authenticate HTTP header in the response. This header defines which authentication
method should be used to gain access to the resource specified in the request, because several methods exist.

The client must then send another request, this time using the Authorization header, which contains the authentication
type and the authentication credentials. The realm is a string that denotes some protected space on the server to which
only authorized users have access; think of the realm as a group of resources, a specific server, and so on.
If the credentials within the request Authorization header are valid, the server responds with a 200 OK HTTP response (or
403 Forbidden if the opposite occurs).

The following UML sequence diagram depicts the exchange of HTTP messages between the client and the server.

HTTP authentication uses several different authentication schemes:

 Anonymous (no authentication)

 Basic (Base64-encoded credentials as username:password)

 Bearer (HTTP implementation of custom token authentication)

 Digest (MD5-hashed credentials)

 Mutual (two-way authentication)

 More uncommon schemes (HMAC, HOBA, AWS, OAuth, Negotiate, and so on)

It is important to note that no HTTP authentication schema is secure by itself. At the very least, Transport Layer Security
(TLS) should be used to encrypt the connection (forming an HTTPS connection).

Some schemas are inherently less secure than others. For example, the Basic schema uses Base64 to encode credentials,
but because Base64 is just an encoding, not an encryption, the username and password can easily be decoded. Hashed
credentials are more secure because they cannot simply be decoded, which can be a big obstacle for any potential attackers.

The following figure is a Python-based example of basic HTTP authentication, using the requests and Base64 modules.
Note how the credentials are encoded before the Authorization header is put together.
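
A minimal sketch along those lines, with placeholder credentials and endpoint; note that requests can also build this header for you via its auth parameter.

import base64
import requests

credentials = base64.b64encode(b"alice:secret").decode("ascii")   # "username:password" encoded
headers = {"Authorization": f"Basic {credentials}"}

response = requests.get("http://dev.web.local/api/users", headers=headers)
print(response.status_code)

# Equivalent shortcut:
# requests.get("http://dev.web.local/api/users", auth=("alice", "secret"))
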
Basic HTTP authentication parameters can also be sent via the URL. The format looks like the following:
http://username:password@www.example.com/users. However, this method can get rejected by some APIs because it is
rarely used.

In the real world, it is common to see the use of proxies for the role of access control, which is why HTTP authentication
includes specific headers to deal with proxy authentication.

Instead of the WWW-Authenticate header, the Proxy-Authenticate header (with corresponding error code 407 Proxy
Authentication Required) is used, and instead of the Authorization header, the Proxy-Authorization header is used. Both
proxy headers are single hop only (compared with the regular WWW-Authenticate and Authorization headers, which are
end to end), meaning that they can be combined in a single request.

Proxies, however, do not work well if many of them are chained in front of an API. HTTP tunneling must instead be used.

The following UML sequence diagram depicts the exchange of HTTP messages between the client and the server with a
proxy in between.

Utilize APIs with Python 

APIs on different systems may use different types of authentication before a client can access its resources. You will learn
how to write, handle, and send the requests in Python and use three types of authentication to access the server resources
—basic authentication, authentication with an API key, and authentication with a bearer token.
The imports and variables are already in place; the server API is running on http://dev.web.local. The send_request
function handles the sending of a request and the to_base64 encodes a string into Base64 encoding. The functions
basic_auth, invoke_info_v1, and invoke_info_v2 will be used for authentication to the server. Some of the code is
commented to reduce the number of errors when starting the script.

Construct a POST Request

Constructing a POST request by using the requests library requires you to handle the failed responses and timeouts. You
will implement the needed code to handle the requests and return the JavaScript Object Notation (JSON) structure of the
received data.
The function receives three arguments—the url, headers, and data—which are needed for the POST request. Track the
number of request retries in case the request times out, and raise an exception on the third failed retry. The handling of
the response that you have implemented returns the JSON data or raises an error if the request has failed.
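
A minimal sketch of what such a function might look like; the actual lab implementation may differ in details like the function name, retry count, and timeout values.

import requests

def send_post_request(url, headers, data, retries=3, timeout=5):
    for attempt in range(1, retries + 1):
        try:
            response = requests.post(url, headers=headers, json=data, timeout=timeout)
        except requests.exceptions.Timeout:
            if attempt == retries:
                raise                      # give up after the final failed retry
            continue
        response.raise_for_status()        # raise an error if the request has failed
        return response.json()             # return the received JSON data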

Implement Authentication to Obtain an API Key

The server that you will be accessing is able to authenticate a session with an API key. To get to the key, you will first need
to log in using a basic authentication method with an encoded username and password.
Leveraging HTTPS for Security 

HTTP never has been considered a trusted connection. Even if the content transported by HTTP requires no authorization,
an attacker with access to the network can still intercept and modify any HTTP messages that pass through it. This reason
is why HTTPS should be used in its place for everything you do on the Internet.

HTTPS is an extension of HTTP that encrypts the communication between the client and the server.

The primary motivation for the emergence of HTTPS was the lack of privacy when communicating over HTTP. Before
HTTPS, the messages could be intercepted, decoded, forged, dropped, and manipulated in several other ways. Since its
inception, it has been used for secure payments, emails, and sensitive transactions. Widespread use started a bit later,
because the added encryption and decryption was initially too CPU-intensive for everyday use. In 2018, however, HTTPS
overtook HTTP as the most common way of transporting data across the Internet. Most browsers now notify you if you are
using a website without a valid certificate installed.

For encryption, TLS is used. TLS has replaced the older and deprecated Secure Sockets Layer (SSL). TLS is a cryptographic
protocol that ensures a reliable, private, and secure connection using public key cryptography. It uses a security
mechanism called digital certificates.

HTTPS and certificates also are used to ensure the identity of the accessed website or API. While HTTPS needs to establish
an SSL/TLS session to begin the communication, it is still considered a stateless protocol. Port 443 is used for
communication by default (while HTTP uses port 80).
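
From an API consumer's point of view, certificate handling with the Python requests library can be sketched as follows; the lab CA bundle path is an assumption.

import requests

# Certificate verification is enabled by default for https:// URLs.
requests.get("https://www.example.com/api/users")

# An internal or lab server signed by a private CA can be trusted explicitly
# by pointing "verify" at the CA bundle instead of disabling verification.
requests.get("https://dev.web.local/api/users", verify="lab-ca.pem")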

HTTPS uses public key cryptography, an encryption and decryption technique that uses key pairs and requires public and
private keys, which are not the same but are loosely mathematically related. Keys are strings of numbers and letters. A
public key is publicly known and is used to encrypt messages that can then be decrypted only by the private key of the
receiver. This is called asymmetric cryptography. If both parties possessed a key that could encrypt and decrypt the
message, it would be called symmetric cryptography.
A suitable analogy for public key encryption is that of a box with two keys. One key (the public key) can only close the lock,
while the other (the private key) can only open it. Anyone who holds the public key can put something in the box and lock it,
but you can be sure that nobody other than you can open the box and access the contents, because only your private key
can open the lock on the box once it has been closed.

One of the most commonly used encryption algorithms is Rivest, Shamir, and Adleman (RSA), which is based on the
idea that it is very difficult to factorize a large number. The public key contains a modulus that is the product of two large
prime numbers, and the private key is derived from those primes. If someone could factorize the modulus, the private key
would be compromised. While the private key in asymmetric cryptography can in principle be brute-forced, the time
required for modern (2048-plus-bit) keys is measured in billions of years because of the sheer size of the key space.
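A short demonstration of asymmetric encryption in Python follows, assuming a recent version of the third-party cryptography package; the key size and padding choices are illustrative, not a recommendation.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate an RSA key pair; the private key stays with the receiver.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can "lock the box" ...
ciphertext = public_key.encrypt(b"secret message", oaep)

# ... but only the holder of the private key can open it again.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"secret message"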

Digital certificates provide identity for a digital entity. They are to servers and services what passports are to people: they
certify that a public key really belongs to a specified entity, which is why they are also sometimes called public key
certificates. If public keys were simply exchanged before encrypting the connection, a risk would exist that the keys could
be intercepted or replaced, and the subsequent communication would not be secure. To solve this problem, public keys are
distributed through a trusted third party and incorporated into certificates. These trusted third parties are called certificate
authorities (CAs).

CAs are trusted businesses that specialize in issuing digital certificates. The CA business is fragmented worldwide, with
many local, national, and regional providers holding sizeable shares of issued certificates due to local regulatory standards.
Having to trust thousands of authorities is not practical, so a much smaller set of globally trusted CAs exists (Comodo,
IdenTrust, GoDaddy, Let's Encrypt, and so on), held to much stricter validation standards.

The trust for local CAs is inferred through the chain of trust (certificate chain), which contains multiple certificates, each
signed by the one above it. For example, when you add a certificate to your website, you can buy it from a local CA, who
then signs your web page certificate. Your certificate is an end-entity certificate. Because the local CA also needs its
certificate signed, it has it signed by a national CA, and a national CA in turn has its certificate signed by a larger entity,
such as an international CA. These certificates are called intermediate certificates. Finally, these CAs have their certificates
signed by a trusted root CA. Root certificates are included by default in most browsers and operating systems.

The following figure depicts a certificate chain for the Cisco home page as viewed in Mozilla Firefox Certificate Viewer.
Certificates contain predefined information that is defined in the X.509 standard and includes:

 Public key of owner

 Distinguished name (DN) of owner

 DN of CA

 Valid from and expiry dates

 Unique serial number of certificate

 Protocol information

Depending on the scope, digital certificates can be further classified as one of the three types:

 Single domain: Applies to one hostname only (such as www.example.com).

 Wildcard: Applies to a domain and all its subdomains (a wildcard for example.com covers names like
mail.example.com and admin.example.com).

 Multidomain: Applies to several different domains (either single domain or wildcard).

When a client and a server want to begin an encrypted connection, whether for browsing or for consuming an API, they do
so via a series of messages called a TLS handshake.

Note that different encryption algorithms generate keys differently. The previous example is valid for RSA-based key
exchange. Other key-exchange methods (for example, Diffie-Hellman) avoid using the server's private key to derive the
session key; related techniques such as keyless SSL are useful for cloud-hosted services with high privacy standards.
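From the client's point of view, the handshake and certificate validation are normally handled by the HTTP library. A minimal sketch with Python requests follows; the URL and CA bundle file name are placeholders.

import requests

# Certificate verification is on by default; the request fails with an SSLError
# if the server's certificate chain cannot be validated.
response = requests.get("https://api.example.com/items", timeout=5)

# A private CA or self-signed certificate can be trusted explicitly by pointing
# verify at the corresponding CA bundle instead of disabling verification.
response = requests.get("https://api.example.com/items",
                        verify="internal-ca.pem", timeout=5)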

Even though HTTPS is considered secure, several attacks still exist on HTTPS itself, the TLS handshake, or the digital
certificates.

Almost every attacker using this attack surface tries to achieve the man-in-the-middle (MITM) attack, where a malicious
user intercepts, listens to, and modifies the communication between two entities. While HTTPS encryption counters the
most basic MITM attacks, where the attacker simply intercepts and reads unencrypted messages, more advanced versions
of the attack are still possible:

 HTTPS spoofing: The attacker sends a fake certificate to the client, making the client trust the attacker's connection
as the real one.

 SSL hijacking: The attacker passes forged authentication keys to both the client and the server, gaining full access to
the communication channel.

 SSL stripping: The attacker tries to downgrade the HTTPS connection to plain HTTP by interrupting the TLS
handshake and exploiting redirects.

 Downgrade attacks: Abuse the backward compatibility that many protocols feature, forcing the system to fall back
to an older or less protected version of the software in order to exploit its known vulnerabilities.

 Exploiting software bugs: Can have disastrous consequences, as was seen with the Heartbleed bug that exploited
a vulnerability in the OpenSSL library.

 Phishing attacks: Try to trick the user into accepting fraudulent certificates as trusted (either with fake websites or
fake emails).

In addition to encryption, MITM attacks can be prevented by using common sense: not opening suspicious attachments,
double-checking unexpected redirections, and making sure you only confirm and trust valid certificates when prompted. On
the system administration side, it is very important to keep your systems and APIs up to date, to obtain certificates only
from legitimate CAs, and to use advanced certificates that feature extended validation, because they are considered more
trustworthy.
Handling Secrets for API Consumption 

Imagine an API call as a transfer of a physical bag of money from one bank to another. You can hire an extremely reliable
security company for transport (HTTPS) and use an incredibly resilient safe to store the money (for example, high-security
data center) to stay as secure as possible. However, both measures will prove ineffective if you leave the keys by your main
door and use the same keys for the maintenance room as you do for the main safe, or if the security guards debate their
detailed schedule at a local bar.

This analogy should give you an idea that the weakest link is often not the most obvious one, but rather the one who
makes the most mistakes—you.

API secrets, and more generally any kind of credentials or authentication data, are among the most sensitive pieces of
information, because they often grant partial or full access to a system or an API. Yet they are sometimes found written
on sticky notes, in credentials.txt files, or as an admin:admin combination. These are beginner mistakes in credential
management; as a software developer, you need to be aware of best practices for more advanced credential handling.

One of the most basic rules is to avoid hardcoding. Hardcoding is the practice of embedding variable values directly into the
source code. While it is tempting—and sometimes useful or even mandatory—to store credentials directly where they
are needed, it should be avoided for multiple reasons:

 Loss of portability: Hardcoded credentials only work for a specific user or API endpoint, which means your code
cannot be used for multiple instances with different credentials.

 Security vulnerability: If the credentials are stored as plaintext, they can be seen by anyone with access to
the code repository or to the system on which the code is installed.

 Increased time to change: Credentials can be harder to change, because they require changes in the code. This
commonly means that not only the operations department has to be involved, but the software department as
well. Also, the code or the module has to be redeployed, which can result in service disruption.

Spot the error or errors in the following code snippet. What if the API key changes? What if you want to reuse the code for
another API?
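For illustration, a snippet along these lines exhibits the problem; all names and values are made up.

import requests

def get_devices():
    # The API key, endpoint URL, and result limit are all embedded in the code.
    api_key = "a1b2c3d4e5f6"                      # hardcoded credential
    url = "https://api.example.com/v1/devices"    # hardcoded endpoint
    headers = {"X-Api-Key": api_key}
    return requests.get(url, headers=headers, params={"limit": 10}, timeout=5).json()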
When your code includes dynamic data that may change and is not part of the business logic itself, try to softcode that
data. Softcoding is the technique of obtaining variable values from external resources (configuration files, a preprocessor,
external constants, and so on), and it makes your code less domain-specific.

The following code snippet does the same as the previous snippet, with a few differences:

 The API key is read from a configuration file with the use of the configparser library. This way, it is easier to change
the API key, because only a configuration file needs to be updated.

 The API URL is read from the application context. An application context is a commonly used software concept and
represents the runtime configuration of the application. It usually contains data that would be different between
instances (such as different names, IP addresses, different credentials, current state of components, and more).

 The result limit is still hardcoded, but only as a default value, and it can be overwritten with a function parameter.
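A hedged sketch along those lines follows; the configuration file name, the section and option names, and the shape of the application context are assumptions.

import configparser
import requests

config = configparser.ConfigParser()
config.read("settings.ini")        # e.g. a file containing:  [api]  key = a1b2c3d4e5f6

def get_devices(app_context, limit=10):
    # The API key comes from the configuration file ...
    api_key = config["api"]["key"]
    # ... and the API URL comes from the application context (runtime configuration).
    url = app_context["api_url"] + "/v1/devices"
    headers = {"X-Api-Key": api_key}
    # The result limit is only a default value and can be overridden by the caller.
    return requests.get(url, headers=headers, params={"limit": limit}, timeout=5).json()

# Example use, with a minimal dictionary standing in for the application context:
# get_devices({"api_url": "https://api.example.com"}, limit=50)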

There are some specific cases where values can be hardcoded, most notably mathematical formulas and constants (for
example, the number pi or a 90-degree angle), domain rules (for example, the legal drinking age in a certain country), and
business-specific rules.

Similar to hardcoded credentials being a security vulnerability, incorrectly implemented logging can also expose
sensitive information where it should not.

Logging should be implemented in a way that does not expose sensitive information to unauthorized users:

 Use a true-false switch for logging sensitive information. It is usually used for applications running in different
environments (for example, logging everything in a development environment but logging very little in production
environments).

 Use postprocessing to mask logs. A common practice is to "catch" the default logs, read and process them, and
mask sensitive information. While smart logging tools can do that by themselves, regular expressions are also
commonly used to catch sensitive information (for example, mask everything after a '&password=' string).
 Have different access levels for different users. Show only the general errors and logs for regular users, and allow
only administrators to see low-level logging.
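As an illustration of the masking approach mentioned above, the following minimal sketch uses Python's logging module and a regular expression; the pattern and logger name are illustrative.

import logging
import re

PASSWORD_PATTERN = re.compile(r"(password=)[^&\s]+", re.IGNORECASE)

class MaskingFilter(logging.Filter):
    """Replace password values in log messages with asterisks before they are emitted."""

    def filter(self, record):
        record.msg = PASSWORD_PATTERN.sub(r"\1*****", str(record.msg))
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api")
logger.addFilter(MaskingFilter())
logger.info("Request sent: /api?username=admin&password=Pizza123")
# The logged message ends with ...username=admin&password=*****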

Another example of sensitive data showing up where it should not is using credentials as URL parameters. Even though an
HTTPS connection encrypts the credentials in transit, credentials contained in the URL (for example, a POST
request to '/api?username=admin&password=Pizza123') can still be seen in server logs, bookmarks, browser history,
and more. Credentials should always be sent as parameters inside the request body.
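A small comparison in Python with the requests library follows; the endpoint and credentials are made up.

import requests

# Bad: the credentials become part of the URL and may end up in logs and history.
requests.post("https://api.example.com/login",
              params={"username": "admin", "password": "Pizza123"}, timeout=5)

# Better: the credentials travel only inside the (encrypted) request body.
requests.post("https://api.example.com/login",
              json={"username": "admin", "password": "Pizza123"}, timeout=5)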

One of the last lines of defense comes into play in case of unauthorized or unexpected access to your system—encryption at
rest.

Generally, data is in one of three states—in transit, in use, or at rest. Data in transit is traveling from one place to
another, usually over a computer network, and is protected by various protocols (HTTPS, SSH, and so on). Data in use is
active data that is being processed or is frequently accessed, and is commonly held in computer memory. Data at
rest, however, is data that is stored in digital form on a nonvolatile physical medium (for example, a database or a
spreadsheet) and poses a tempting target for attackers. If a malicious user gains access to data at rest, only
encryption will prevent them from seeing the actual data.

Encryption at rest therefore means the encoding and protection of data when it is stored somewhere.

Data at rest is encrypted with symmetric encryption: the same encryption key encrypts and decrypts the data as it is written
to and read from storage. Different data sets use different encryption keys, so that if one key leaks, not all data is
compromised at once. Effective key management therefore becomes important, and dedicated cryptographic key servers
with access control and auditing are used for this purpose.
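A minimal sketch of symmetric encryption at rest in Python follows, assuming the third-party cryptography package; in practice the key would be retrieved from a key server rather than generated next to the data.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, stored and managed by a key server
fernet = Fernet(key)

record = b'{"user": "admin", "api_key": "a1b2c3d4e5f6"}'
encrypted = fernet.encrypt(record)   # this ciphertext is what gets written to storage

# Only a holder of the key can recover the original data.
assert fernet.decrypt(encrypted) == record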
Three different encryption methods are commonly used:

 Full-disk encryption: Encrypts all the data on the disk, usually with the help of the operating system. The
encryption key commonly is a password that the administrator enters when disk encryption starts. Some disks
even feature physical chips that are used for disk encryption (self-encrypting drives [SEDs]). Accessing any data or
API credentials stored there, or even just booting up a system that is installed on that disk, requires the encryption
key.

 File system encryption: Alternatively, only certain files, file systems, or partitions can be encrypted. In this way,
only the sensitive data can be encrypted (for example, documents, credentials, and so on), while the operational
data (for example, operating system, office programs) is not. Symmetric encryption is used here as well.

 Database encryption: Because data commonly is stored in a database, the database software often allows you to
encrypt the data at the application level. Similarly to disk and file encryption, a password is needed to
transparently access the data.

Section 5: Summary Challenge


Section 6: Introducing Cisco Platforms and APIs

Introduction 

Cisco provides a rich portfolio of networking and data center solutions, which in turn require the ability to easily manage
and operate solutions consisting of many different technologies and services. The products range from simple on-premises
management products to powerful cloud-based products, as well as technology-specific products that ease the operation of
compute resources, collaboration solutions, and security solutions. In modern environments, you will aim to utilize the
application programming interfaces (APIs) of these products to better integrate them in simple and powerful end-to-end
orchestrated solutions.

Cisco Network Management Platforms 

Several management products exist that can help manage service provider or enterprise networking solutions, or both.
Cisco Digital Network Architecture (DNA) can be used in an enterprise environment to address all requirements of operating
an enterprise network (WAN and LAN). Cisco Application Centric Infrastructure (ACI) enables you to build an agile and
dynamic data center environment where Cisco ACI can be integrated with northbound systems to dynamically provision
data center networking resources to individual applications. Cisco Network Services Orchestrator (NSO) can be used to
orchestrate and automate any diverse and multivendor environment, thus providing a single point to orchestrate services
end to end and making the underlying network transparent.

The following figure illustrates a service provider network, although a similar illustration could be used for an
enterprise network (by omitting some service provider-specific access and aggregation devices). Several requirements
apply when designing networking and data center solutions. However, in modern environments, you aim to simplify the
operations of the services and enable a dynamic and flexible approach where, ideally, the management tasks can be
pushed all the way to the end users (application consumers). Having APIs enables combining different management
platforms into orchestrated environments where application (service) instances can be provisioned, modified, or deleted
in a single transaction.

Note the following requirements:

 Agility

 Flexibility

 Scalability

 Simplicity

 Security

 Availability

 Automation
 Interoperability

 API

Cisco Network Management Platforms

The following figure illustrates three management platforms from Cisco and how they fit into an enterprise and/or service
provider environment.

Cisco DNA

Intent-based networking (IBN) built on Cisco DNA takes a software-delivered approach to automating and assuring services
across your WAN and your campus and branch networks. Cisco DNA enables you to streamline operations and facilitate IT
and business innovation.

Cisco DNA is an open, extensible, software-driven architecture. Cisco DNA uses five fundamental new design principles for
the networking software stack:

 Virtualize everything to give organizations freedom of choice to run any service anywhere, independent of the
underlying platform—physical or virtual, on premises, or in the cloud.

 Designed for automation to make networks and services on those networks easy to deploy, manage, and maintain
—fundamentally changing the approach to network management.

 Pervasive analytics to provide insights on the operation of the network, IT infrastructure, and the business—
information that only the network can provide.

 Service management delivered from the cloud to unify policy and orchestration across the network—enabling the
agility of cloud with the security and control of on-premises solutions.

 Open, extensible, and programmable at every layer, integrating Cisco and third-party technology, open APIs, and
a developer platform, to support a rich ecosystem of network-enabled applications.
By enabling this architecture with services that can be delivered from the cloud, you use the cloud for what it does best—
namely, lowering cost, ease of use, scale, and speed.

Cisco DNA is built on a set of design principles with the objective of providing:

 Insights and actions to drive faster business innovation

 Automation and assurance to lower costs and complexity while meeting business and user expectations

 Security and compliance to reduce risk as an organization continues to expand and grow

Cisco DNA is delivered across three layers:

 Layer 1 is the network element layer: Here, you have physical and virtual devices that bring together the network.
A core principle at the network layer is virtualization. You can use Cisco Enterprise Network Functions
Virtualization (E-NFV), which builds the full software stack from the infrastructure software that can reside on
servers, to virtualized network functions like routing, firewalls, and the orchestration tools to support E-NFV on
physical and virtual devices. The evolved Cisco IOS XE Software is much more open and programmable with model-
driven APIs. In addition, you can more easily tap into the intelligence provided by the operating system and ASIC to
support customized applications.

 Layer 2 is the platform layer: Here, controllers fully abstract the network and automate all day 0, day 1, and day 2
functions. Through centralized policy control, you can allow IT to provide the business intent and have the
controller drive enforcement dynamically through the network. This level is also where you can gather rich data
analytics. A single analytics platform is used to provide structured data and open APIs that both Cisco and third
parties can use to contextualize insights—relevant for businesses to better understand user behavior or Internet of
Things (IoT) data, as well as for IT to troubleshoot issues or identify threats faster.

 Layer 3 is the network-enabled applications layer: Layer 3 supports important business services like collaboration,
mobility, and IoT. Both Cisco and third parties can write once and gain the intelligence of the network to better
understand patterns (by correlating user, app, device data) for use cases that range from capacity planning to
testing customer promotions.

As you build to the design principles, you will use the cloud for the following:

 Cloud managed: Using the cloud to securely manage all elements in the network through a single-pane view

 Cloud edge: Providing critical network functions at the edge to support businesses moving their operations to the
cloud (like Amazon Web Services and Azure)
 Cloud delivered: Enabling flexible subscription models where possible, minimizing the infrastructure burden

Cisco DNA supports the critical northbound and southbound APIs to enable the broadest ecosystem to be supported.

The following figure illustrates the core components comprising the Cisco DNA solution.

The automation software provides policy-based orchestration, which can in turn be integrated with other northbound
systems.

Cisco DNA Center is the network management system, foundational controller, and analytics platform at the heart of the
Cisco intent-based network. Beyond device management and configuration, Cisco DNA Center is a set of software solutions
that provide:

 A management platform for all your network

 A software-defined networking (SDN) controller for automation of your virtual devices and services

 An assurance engine to guarantee the best network experience for all your users

Cisco DNA Center software resides on the Cisco DNA Center Appliance and controls all your Cisco devices—both fabric and
nonfabric.
Design: Design your network using physical maps and logical topologies for a quick visual reference.

Policy: Define user and device profiles that facilitate highly secure access and network segmentation, based on business
needs.

Provision: Use policy-based automation to deliver services to the network based on business priority and to simplify
device deployment.

Assurance: Combine deep insights with rich context to deliver a consistent experience and proactively optimize your
network.

Cisco DNA Center gives IT teams the ability to automatically provision through Cisco DNA Center Automation, virtualize
devices through Cisco Enterprise Network Functions Virtualization (E-NFV), and lower security risks through segmentation
and Encrypted Traffic Analytics (ETA). Furthermore, Cisco DNA Center Assurance collects streaming telemetry from devices
around the network to ensure alignment of network operation with the business intent. In doing this, Cisco DNA Center
Assurance optimizes network performance, enforces network policies, and reduces time spent on mundane
troubleshooting tasks. Cisco DNA Center Platform provides 360-degree extensibility with a broad ecosystem of partners
that allows you to make your network agile and fully in tune with your business priorities. Cisco DNA Center is the only
centralized network management system to bring all this functionality into a single pane of glass.

Cisco ACI

Cisco ACI is a data center SDN solution that is used to allow applications to dynamically request data center network
resources. Cisco ACI consists of the data center networking infrastructure that is controlled by a Cisco Application Policy
Infrastructure Controller (APIC).
Cisco APIC is the main architectural component of the Cisco ACI solution. It is the unified point of automation and
management for the Cisco ACI fabric, policy enforcement, and health monitoring. The controller optimizes performance
and manages and operates a scalable multitenant Cisco ACI fabric.

Cisco ACI is a centralized application-level policy engine for physical, virtual, and cloud infrastructures, and it provides the
following capabilities:

 Detailed visibility, telemetry, and health scores by application and by tenant

 Designed around open standards and open APIs

 Robust implementation of multitenant security, quality of service (QoS), and high availability

 Integration with management systems such as VMware, Microsoft, and OpenStack

 Cloud APIC appliance for Cisco Cloud ACI deployments of public cloud environments

Cisco APIC is a single point of automation and management for the Cisco ACI fabric:

 Centralized controller for ACI fabric

 Web HTML5 GUI and RESTful API (XML or JavaScript Object Notation [JSON])

 Application-oriented network policy

 Extensive third-party integration (65+ partners)

 ACI App Center extends functionality


Designed for automation, programmability, and centralized management, the Cisco APIC itself exposes northbound APIs
through XML and JSON. It provides both a CLI and GUI, which utilize the APIs to manage the fabric holistically.
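As an illustration, a hedged sketch of consuming the Cisco APIC northbound REST API from Python follows; the controller address and credentials are placeholders, and the aaaLogin call and fvTenant class query should be verified against the APIC version in use.

import requests

APIC = "https://apic.example.com"    # placeholder controller address
session = requests.Session()

# Authenticate; the session keeps the token cookie returned by the controller.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(APIC + "/api/aaaLogin.json", json=login, timeout=5).raise_for_status()

# Query a class of managed objects (tenants) and print the returned objects.
tenants = session.get(APIC + "/api/class/fvTenant.json", timeout=5).json()
print(tenants["imdata"])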

Cisco NSO

Whether architecting a service orchestration tool chain or building a DevOps environment, the underlying automation
strategies have typically expected developers to understand how infrastructure works and infrastructure owners to be
conversant with application development and service creation. Real-world experiences show that this approach is flawed.
Cisco NSO enabled by Tail-f offers a more realistic approach by serving as a bridge between application or service owners
and infrastructure owners, letting each team operate in their native environment, yet still collaborate effectively together.
Years of operational experience have produced a platform with features and operational capabilities that these teams will
find valuable as part of any automation initiative, either enterprise or service provider.

Cisco NSO is a model-driven (YANG) platform for automating your network orchestration. It supports multivendor
networks through a rich variety of network element drivers (NEDs). Cisco NSO supports the process of validating,
implementing, and abstracting your network configuration and network services, providing support for the entire
transformation into intent-based networking.

The following figure illustrates how Cisco NSO acts as a bridge between application owners and infrastructure owners.
Scalable Sophistication

Automation tooling is like math. Elementary school arithmetic is fine if you need to double a pie recipe, but you will need
something like calculus if you are trying to land a rocket on Mars. Similarly, you need tooling that is simple enough to get
started easily but powerful enough for your more sophisticated initiatives. As your goals get more complicated, you need
to make sure that you have the means to adequately express them. Scale is not just about increasing complexity; it is also
about being able to handle an ever-increasing number of services, apps, and devices.

Flexible and Adaptable Edges

Change is inevitable, so your bridge must be able to easily accommodate new apps and tools on the "north" side of the
bridge. Similarly, your bridge must be able to accommodate changes in infrastructure on the "south" side, such as new
vendors, virtualization or containerization, and adoption of cloud technologies.

Normalization

The bridge must be able to abstract the view from the other side—hide the complexity and heterogeneity that is reality.
App owners should be able to ask for resources without caring about the implementation details. Similarly, infrastructure
owners should see requests in a consistent way regardless of the app, tool, or system making the requests. Because of this
normalization and abstraction role, the bridge must also serve as the authoritative source of truth for both sides of the
bridge as to what is actually occurring.

Developer-Centricity

The quality of the programmatic controls available to both sides of the bridge will dictate what can be accomplished and
how quickly it can be accomplished. Together, these four principles help deliver on the promise of DevOps. They allow the
entire organization to work more cohesively while allowing each team to function autonomously. Infrastructure owners
can change and optimize their resources without breaking apps and services. Similarly, new apps and services can be rolled
out more quickly and without creating infrastructure churn. Costs are lowered through increased agility, efficiency, and
productivity. Customers are happier with products and services that are more relevant, delivered more quickly, and with
more predictable quality.
Cisco NSO 5 represents nearly a decade of accumulated wisdom in automating large, complex tier-1 environments. While
NSO has its roots in the service provider market, in recent years, more large enterprises are seeing the need for a proven
automation solution. The following figure provides an overview of the elements of NSO and how they relate.

At a very high level, Cisco NSO has three components:

1. A model-based programmatic interface that allows for control of everything from simple device turn-up and
configuration management to sophisticated, full life-cycle service management

2. A fast, highly scalable, highly available configuration database that serves as a single source of truth

3. A device abstraction layer that uses NEDs to mediate access to both Cisco and more than 150 non-Cisco physical
and virtual devices.

The following figure illustrates the full DevOps environment encompassing all stages in a services life cycle. You may use a
phased approach to get there.
The proposed three-phase approach makes it easier to get to full DevOps environment in a more controllable manner:

Phase 1: Use NSO as a programmable network interface.

Use NSO to provide a single API into the network. Operations gain a network provisioning and configuration power tool
with the ability to perform networkwide CLI and configuration changes from a single interface, in a single transaction,
instead of having to individually touch multiple boxes and use different device-specific commands.

Phase 2: Use NSO for service abstraction.

NSO draws on device and service models to begin more fully automating service activations and changes. You see an end-
to-end view of the service as a whole, instead of just seeing the individual device configurations.

Phase 3: Use NSO for DevOps infrastructure automation.

As you make the people and process changes to support agile development and continuous integration and continuous
deployment (CI/CD), NSO can support that change by enabling everyone involved in the service—product developers,
network engineers, provisioning and operations teams—to work together to design and execute new services and
changes, quickly and continuously.

Cisco Compute Management Platforms 

Cisco provides three options for managing compute solutions such as Cisco Unified Computing System (UCS) and Cisco
HyperFlex—Cisco UCS Manager, Cisco UCS Director, and Cisco Intersight.

There are three levels of management power to deploy on Cisco Unified Computing platforms:

 Cisco UCS Manager for simple infrastructure management

 Cisco UCS Director for more powerful infrastructure orchestration

 Cisco Intersight for cloud-based management of Cisco UCS and Cisco HyperFlex

Cisco UCS Manager:

 Automates and treats infrastructure as code to improve agility

 Unifies management of Cisco UCS Blade and rack servers, Cisco UCS Mini, and Cisco HyperFlex

 Speeds up daily operations and reduces risks with policy-driven, model-based architecture

 Truly single pane of glass API


 Automatic hardware discovery

 Flexible, programmable, policy-driven setup

 Policy-driven firmware maintenance

 Service profile enabling agile, flexible, simple hardware provisioning, management, monitoring, and maintenance

 Cisco UCS Central utilizes Cisco UCS Manager APIs to enable orchestration of multiple Cisco UCS domains

Cisco UCS Director:

 Provides the foundation for infrastructure as a service (IaaS), including a self-service portal for end users

 Supported by independent hardware and software vendors through open APIs

 Operates across infrastructure stacks in the data center, edge scale, and Mode 2 environments globally

Cisco Intersight:

 Cloud-hosted management for Cisco UCS and Cisco HyperFlex

 Simplifies systems management across data center, remote office-branch office (ROBO), and edge environments

 Unique recommendation engine delivers actionable intelligence

 Tight integration with Cisco Technical Assistance Center (TAC) makes support easier

Cisco UCS Manager

The XML API facilitates integration with a wide range of third-party management tools from more than a dozen vendors.
The API allows for custom integration and portals using the Cisco PowerTool for PowerShell and the Python software
development kit (SDK). The API also serves as the interface to higher-level Cisco UCS tools, including Cisco UCS Director
and Cisco UCS Performance Manager.

Cisco UCS Central Software provides the following advantages to help make operations and analysis easier compared with
a single Cisco UCS Manager:

 Unified control plane for all the elements in the system—centralized logs for compute, network, and storage

 Single source of truth accessible to tools via API

 Centralizes global policies, service profiles, inventory, ID pools, and templates for up to 10,000 servers
Operationally, Cisco UCS Central provides centralization capability for the core aspects of Cisco UCS management, such as
policies, profiles, inventory, faults, firmware updates, and consoles. It uses Cisco UCS Manager technology to simplify
global operations with centralized inventory, faults, logs, and server consoles. Cisco UCS Central also provides the
foundation for high availability, disaster recovery, and workload mobility.

Cisco UCS Central extends the unified management domain for IT administrators, spanning thousands of servers across the
data center and around the world. It uses the same model-based architectural framework as Cisco UCS Manager, with an
extended API for global automation among Cisco UCS domains. With Cisco UCS Central, the underlying policy-based
management philosophy of Cisco UCS Manager is globalized. This delivers the ability to centralize the management of
multiple Cisco UCS domains whether they are in a single data center or spread across multiple data centers. This product
provides a way in which the management of a growing infrastructure can continue to be simple.

Cisco UCS Director

The figure shows the application of Cisco UCS Director to orchestrate a large environment by integrating with Cisco UCS
Manager in any form—integrated, supervisor, standalone, or central. Cisco UCS Director is a heterogeneous platform for
private cloud IaaS. It supports various hypervisors along with Cisco and third-party servers, network, storage, converged,
and hyperconverged infrastructure across bare-metal and virtualized environments.

The Cisco UCS Director REST API allows an application to interact with Cisco UCS Director programmatically. These
requests provide access to resources in Cisco UCS Director. With an API call, you can execute Cisco UCS Director workflows
and change the configuration of switches, adapters, policies, and other hardware and software components.

The API accepts and returns HTTP messages that contain JSON or XML documents. The JSON or XML payload that is
contained in an HTTP message describes a method or managed object in Cisco UCS Director. You can use any programming
language to generate the messages and the JSON or XML payload.
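A hedged sketch in Python follows; the request path, query parameters, operation name, and access-key header are assumptions based on typical Cisco UCS Director REST API usage and should be verified against the product's API documentation.

import json
import requests

UCSD = "https://ucsd.example.com"                    # placeholder appliance address
ACCESS_KEY = "replace-with-your-rest-api-access-key"

params = {
    "formatType": "json",
    "opName": "userAPIGetMyLoginProfile",            # assumed example operation name
    "opData": json.dumps({}),
}
response = requests.get(UCSD + "/app/api/rest",
                        headers={"X-Cloupia-Request-Key": ACCESS_KEY},
                        params=params, timeout=10)
print(response.json())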

Cisco Intersight

The Cisco Intersight software as a service (SaaS) platform makes systems management smarter and simpler. Intelligence
and automation make daily activities easier and more efficient. Cisco Intersight delivers efficiency of operations for Cisco
UCS, HyperFlex, and third-party infrastructure from the data center to the edge.

Cloud-based data center management:

 Global, multisite, data center, edge

 Recommendation engine

 Real-time analytics and machine learning

 Forecasting
DevOps enabled:

 Continuous integration and delivery

 Continuous monitoring

 OpenAPI Specification

 Python and PowerShell SDK

The Cisco Intersight virtual appliance provides the same SaaS benefits while enabling customers to decide which data is
sent back to Cisco to conform to organizational requirements.

OpenAPI Specification Support

Cisco Intersight includes an API that supports the OpenAPI Specification (formerly known as the Swagger Specification), a
powerful definition format to describe RESTful APIs. Support for the OpenAPI Specification provides users with access to
an interoperable REST API with tools that automate the generation of the Cisco Intersight API documentation
(https://intersight.com/apidocs), API schemas, and SDKs. The Cisco Intersight API includes fully functional Python and
PowerShell SDKs.

The API is an integral part of the broader open connector framework Cisco has established to enable the Cisco Intersight
ecosystem to evolve. The ecosystem will eventually support a wide range of Cisco and third-party DevOps software.
Cisco Compute Management APIs 

Cisco offers a range of APIs for its compute (data center) portfolio. They can be divided into these groups:

 Cisco Unified Computing System (UCS) Management: Cisco Intersight, Cisco UCS Manager, Cisco UCS Director

 User Integrations: Ruby, PowerShell, Python

Cisco UCS is a programmable infrastructure component for the data center. A programmable infrastructure provides an
API to the installed software stack (hypervisor, operating system, application) for identification, monitoring, configuration,
and control.

The API allows you to integrate the system into higher-level, data center-wide management systems as a single, logical
entity. Server, network, and I/O resources are given characteristics, and they are provisioned and configured on demand
by applying a service profile to them. This technology supports a unified model-based management of the system. All
properties and configurations of this infrastructure are programmable through Cisco UCS Manager, including the
following:

 MAC addresses and UUIDs

 Firmware revisions

 BIOS and Redundant Array of Independent Disks (RAID) controller settings

 Host bus adapter (HBA) and network interface card (NIC) settings

 Network security

 Quality of service (QoS)

Cisco UCS Manager PowerTool Suite is a library of Microsoft PowerShell cmdlets that allow you to retrieve and manipulate
Cisco UCS Manager managed objects. The Cisco UCS Manager API interactions can be categorized in several distinct
sections:

 Sessions

 Methods

 Queries and Filters

 Configurations and Transactions

 Event Subscription
Cisco UCS PowerTool Suite provides services and cmdlets for all the Cisco UCS Manager API interaction categories. Cisco
UCS PowerTool Suite also provides cmdlets that allow inspection of object metadata and object hierarchical containment,
and object cmdlet action capabilities with pipeline object definitions. The PowerTool Suite helps automate all aspects of
Cisco UCS Manager, Cisco UCS Central Software, and Cisco UCS Integrated Management Controller (IMC). There is a cmdlet
library for Cisco UCS Central, which manages Cisco UCS Manager systems, and a cmdlet library to manage the Cisco UCS
IMC component of the Cisco UCS C- and E-Series servers.

Cisco UCS Director offers the following APIs:

 Custom task development API

 Cisco UCS Director REST API

The custom task development API uses the CloupiaScript scripting language.

With the Cisco UCS Director REST API, you can:

 Execute Cisco UCS Director workflows

 Modify switch configuration

 Modify adapter configuration

 Modify policies

Cisco UCS Director is a unified infrastructure management solution that provides management from a single interface for
compute, network, storage, and virtualization layers. Cisco UCS Director uses a workflow orchestration engine with
workflow tasks that support the compute, network, storage, and virtualization layers. It also supports multitenancy, which
enables policy-based and shared use of the infrastructure.

Cisco UCS Director has an API within the application for custom task development and a REST API for other scripts and
applications to talk to Cisco UCS Director. The custom task development API uses a language that is called CloupiaScript.
This language is a version of JavaScript with Cisco UCS Director Java libraries that enable orchestration operations.
CloupiaScript supports all JavaScript syntax. CloupiaScript also supports access to a subset of the Cisco UCS Director Java
libraries to allow custom tasks to access Cisco UCS Director components. Because CloupiaScript runs only on the server,
client-side objects are not supported.

The Cisco UCS Director REST API allows an application to programmatically interact with Cisco UCS Director. Requests from
this API provide access to resources in Cisco UCS Director. With an API call, you can execute Cisco UCS Director workflows
and change the configuration of switches, adapters, policies, and other hardware and software components.

The Cisco Intersight API is a programmatic interface that uses the REST architecture to provide access to the Cisco
Intersight Management Information Model. The Cisco Intersight API accepts and returns messages that are encapsulated
in JSON documents and uses HTTP over Transport Layer Security (TLS) as the transport protocol. Cisco Intersight
provides the benefits of cloud-based management that customers have come to appreciate with SaaS products. For
example, the Cisco Intersight API is automatically updated when new features are deployed to the cloud and provides
programmatic access to new IT infrastructure capabilities.

The Cisco Intersight REST API supports these methods:

 GET

 POST

 PATCH

 DELETE

All the data from the application is represented in the Cisco Intersight Management Information Model. Some examples of
application areas that you can manage with the API include the following:

 Cisco UCS Servers

 Server components such as DIMMs, CPUs, graphics processing units (GPUs), storage controllers, and Cisco IMC
 Cisco UCS Fabric Interconnects

 Firmware inventory

 Cisco HyperFlex nodes and HyperFlex clusters

 Virtual machines

 VLANs and virtual storage area networks (VSANs)

 Users, roles, and privileges

The Cisco Intersight API is based on the OpenAPI specification, which defines a programming language-agnostic interface
description for describing, producing, consuming, and visualizing RESTful web services. The OpenAPI Specification for Cisco
Intersight allows both humans and computers to discover and understand the capabilities of the service without requiring
access to source code, additional documentation, or inspection of network traffic.

When delivered, Cisco Intersight provides downloadable SDK packages for popular programming languages such as Python
and PowerShell. SDKs for dozens of other programming languages can be generated with the help of the open source
OpenAPI tools, including Swagger code generators. Separate documents cover available programming-language SDKs. The
Cisco Intersight API is an integral part of the broader open connector framework that Cisco has established to enable the
Cisco Intersight ecosystem to evolve. The ecosystem will eventually support a wide range of Cisco and third-party DevOps
software.

Cisco Collaboration Platforms 

Cisco offers various solutions for collaboration—the more traditional telephony-based solutions and the more modern
collaboration solutions to support online meetings and teamwork.

On-premises managed collaboration platforms:

 Cisco Unified Communications Manager:

1. Desktop phones

2. PC

3. Smartphone apps

 Cisco Finesse:

1. Browser-based agents for call centers

Cloud managed collaboration platforms:

 Cisco Webex Meetings:


1. Dedicated collaboration devices

2. PC

3. Smartphones

 Cisco Webex Teams:

1. Dedicated collaboration devices

2. PC

3. Smartphones

Cisco Finesse

Cisco Finesse is a next-generation agent and supervisor desktop that is designed to provide a collaborative experience for
the various communities that interact with your customer service organization. It also helps improve the customer
experience while offering a user-centric design to enhance customer care representative satisfaction.

For IT professionals, Cisco Finesse offers transparent integration with the Cisco Collaboration portfolio. It is standards-
compliant and offers low-cost customization of the agent and supervisor desktops.

Cisco Finesse provides:

 An agent and supervisor desktop that integrates traditional contact center functions into a thin-client desktop

 A 100 percent browser-based desktop implemented through a web 2.0 interface; no client-side installations
required

 A single, customizable "cockpit," or interface, that gives customer care providers quick and easy access to multiple
assets and information sources

 Open web 2.0 APIs that simplify the development and integration of value-added applications and minimize the
need for detailed desktop development expertise

The figure illustrates the architecture of the Cisco Finesse solution.

Running in the browser are:

 Cisco Finesse Administration: The Cisco Desktop Administrator, which utilizes the cfadmin (Administrator Web
Application Resource [WAR]) and the finesse (REST API WAR) web applications for configuring Finesse.
 Cisco Finesse: The Cisco Agent and Supervisor Desktops. It also uses the REST API WAR for the API, but its artifacts
are served from the desktop (Agent Desktop WAR).

Inside Cisco Tomcat is:

 Shindig: The OpenSocial container that serves as a proxy for all requests being made to the web apps.

 Contact Center Express (CCE) realm: An authentication realm that is plugged into Cisco Tomcat for doing
authentication of all web requests. In CCE, this uses the Administrative Workstation Database (AWDB) for these
permissions.

 WAR files: These are the primary web applications that are running inside Cisco Tomcat, providing functionality to
the Desktop Administrator and the Agent and Supervisor Desktops:

1. cfadmin: Provides the dynamic and static resources (HTML, JavaScript, Cascading Style Sheets [CSS]) for
the Cisco Finesse Desktop Administrator.

2. 3rdpartygadget: Provides the hosting of third-party gadgets within the Finesse server to allow third-party
integrators to deploy gadgets without the need for an external web server.

3. desktop: Provides the dynamic and static resources (HTML, JavaScript, CSS) for the Cisco Agent and
Supervisor Desktops.

4. finesse: Provides the Finesse REST API and the funneling of events by doing Extensible Messaging and
Presence Protocol (XMPP) publishes to the Cisco Finesse Notification Service.

Cisco Webex

Cisco Webex brings together several enterprise solutions for video conferencing, online meetings, screen share, and
webinars:

 Webex Meetings is a multiplatform video conferencing solution.

 Webex Teams is a collaboration solution for continuous teamwork with video meetings, group messaging, file
sharing, and whiteboarding.

 Webex Devices improve team collaboration and the Webex Meetings and Webex Teams experience.

The following options are available for Cisco Webex Meetings and Cisco Webex Teams:

 Cisco Webex Board is an all-in-one whiteboard, wireless presentation screen, and video conferencing system for
smarter team collaboration.

 Cisco Webex Room Devices are intelligent video conferencing devices for meeting rooms of all sizes.

 Cisco Webex Desk Devices are simple-to-use compact video conferencing devices designed for desktops.

 Software running on PC.


Cisco Webex Meetings provides a meetings XML API to integrate Webex Meetings services with your custom web portal or
application.

Use Cisco Webex Teams API to create bots, embed videos, or programmatically create teams.

Cisco Collaboration APIs 

Cisco offers a wide range of APIs for its collaboration portfolio. Collaboration portfolio and APIs can be divided into
multiple groups:

 Cloud collaboration: Apple iOS, Cisco Webex Teams, Cisco Webex Meetings

 On-premises collaboration: Cisco Meeting Server

 Contact center: Cisco Unified Contact Center Express, Cisco Finesse, Cisco SocialMiner, Cisco Remote Expert
Mobile

 Contact center enterprise: Cisco Computer Telephony Integration (CTI) Protocol, Cisco CTI operating system

 Audio and video endpoints: Room devices, Jabber Web SDK, Cisco Jabber Bots SDK
 Call control: Java Telephony Application Programming Interface (JTAPI), Cisco Telephony Application Programming
Interface (TAPI), Cisco Unified Communications Manager Session Initiation Protocol (SIP), Cisco WebDialer, Cisco
Unified Routing Rules XML Interface

 Management: Administrative XML Layer (AXL), User Data Services (UDS), Cisco Emergency Responder

 Cloud calling: Broadworks, Cisco Hosted Collaboration Solutions (HCS)

 Instant messaging (IM) and presence: Cisco UC Manager IM & Presence Service

 Voicemail: Cisco Unity Connection

Cisco Unified Communications Manager

Cisco Unified Communications Manager exposes multiple APIs that can be used by developers to interact with the Cisco
Unified Communications Manager server and its users and devices:

 AXL API

 Cisco Emergency Responder API

 Platform Administrative Web Services (PAWS) API

 Cisco Unified Communications Manager Serviceability XML API

 User Data Services (UDS) API

AXL API:

 XML/SOAP-based API

 Manage Cisco Unified Communications Manager configuration

AXL API uses basic authentication:

 Create new application user in Cisco Unified Communications Manager admin page

AXL API can be used to do the following:

 Configure devices and directory numbers

 Configure users

 Configure voicemail

The AXL API is an XML/Simple Object Access Protocol (SOAP)-based API that provides a mechanism for managing
configuration of the Cisco Unified Communications Manager. Developers can use AXL to create, read, update, and delete
objects such as gateways, users, devices, and more. Examples of Cisco Unified Communications Manager objects that can
be provisioned with AXL include:

 Cisco Unified Communications Manager groups

 Call Pickup groups

 Device pools

 Dial plans

 Directory numbers

 Locations

 Phones

 Regions

 Users
 Voicemail ports

AXL API limits access to its services through HTTP 1.0 basic access authentication. The advantage of HTTP basic access
authentication is that it is simple to implement because it uses only HTTP headers.

HTTP basic access authentication requires authorization credentials in the form of a username and password before
granting access to a specific URL. The username and password are passed as Base64-encoded text in the header of a
subsequent HTTP transaction.
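A small illustration in Python of building that header follows; the credentials and AXL URL are placeholders, and the SOAP envelope is omitted.

import base64
import requests

username, password = "axluser", "axlpassword"        # placeholder credentials
token = base64.b64encode((username + ":" + password).encode()).decode()
headers = {"Authorization": "Basic " + token, "Content-Type": "text/xml"}
print(headers["Authorization"])

# The requests library builds the same header from an auth tuple, so an AXL SOAP
# request (envelope not shown here) can simply use auth=(username, password):
# requests.post("https://cucm.example.com:8443/axl/", auth=(username, password),
#               headers={"Content-Type": "text/xml"}, data=soap_envelope)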

To authenticate a user, use an end-user account created by the Cisco Unified Communications Manager administrator:

 It is recommended that you create a dedicated user and group for your application rather than using the admin user.

 Create a special application user for AXL access.

 Create a user group for AXL access.

 Put the AXL user in this user group.

The UDS API is a REST-based set of operations that provide authenticated access to user resources and entities such as
user devices, subscribed services, and more from the Cisco Unified Communications Manager database. UDS is designed
for web applications and can run on any device.

You can create a custom directory search using UDS APIs or manage user preferences and settings. You can also allow
users to enable or disable certain features like Do Not Disturb or Call Forward.

Actions that can be accomplished by using the UDS service include:

 Directory search for users

 Manage Call Forward and Do Not Disturb

 Set language and locale

 Subscribe to IP phone service applications

 Reset PIN/password credentials

 Configure remote destinations

Cisco Finesse

Cisco Finesse offers multiple REST APIs:

 Cisco Finesse desktop APIs

 Cisco Finesse configuration APIs

 Cisco Finesse serviceability APIs

 Cisco Finesse notifications

Cisco Finesse desktop APIs are used by agents and supervisors to communicate between the Cisco Finesse desktop, Cisco
Finesse server, Cisco Unified Contact Center Enterprise, or Cisco Unified Contact Center Express to send and receive
information about the following entities:

 Agents and agent states

 Calls and call states


 Teams

 Queues

 Client logs

Cisco Finesse desktop APIs use basic authentication to authenticate with Cisco Unified Contact Center server.

Cisco Finesse configuration APIs allow the configuration of these entities:

 System, cluster, and database settings

 Finesse desktop and call variable layout

 Reason codes and wrap-up reasons

 Phone books and contacts

 Team resources

 Workflow and workflow actions

Cisco Finesse configuration APIs require administrator credentials (the application user ID and password) to be passed in
the basic authorization header. If a user repeatedly passes an invalid password in the basic authorization header, Cisco
Finesse blocks user access to all configuration.

Cisco Finesse serviceability APIs allow users to perform these actions:

 Get system information: Status of the Cisco Finesse system (includes the deployment type), installed licenses,
system authentication mode, and other information

 Diagnostic information: Performance information and product version

 Run-time information: Number of logged-in agents, active tasks, and active dialogs

Cisco Finesse notifications is a mechanism that sends notifications to clients that subscribe to a specific class or resource.
For example, a client that is subscribed to user notifications receives a notification when an agent signs in or out of the
Cisco Finesse desktop, information about an agent changes, or an agent state changes. Notification payloads are XML-
encoded.

Cisco Finesse clients can interface directly with the Cisco Finesse notification service to send subscribe and unsubscribe
requests. Clients subscribe to notification feeds published to their respective nodes.

Cisco Webex Teams

Cisco Webex Teams APIs allow you to:

 Create conversational bots

 Embed video calls

 Manage spaces

Cisco Webex Teams exposes a REST API that can be used by developers to interact with Cisco Webex Teams. Before using a
REST API, you must create a Cisco Webex Teams developer account, where you will get an access token, which you can
later use when invoking API calls toward Cisco Webex Teams. This access token should be used only for development and
testing purposes and not for production usage.

Cisco Webex Teams REST APIs can manage:

 Rooms

 People

 Memberships

 Messages
When making requests to the Cisco Webex REST API, an authentication HTTP header is used to identify the requesting
user. This header must include an access token. This access token may be a personal access token, a Bot token, or an
OAuth token. Cisco Webex Teams APIs support these methods:

 GET

 POST

 PUT

 DELETE

For methods that accept request parameters, the platform accepts either application/json or application/x-www-form-
urlencoded content type and application/json as a return type.
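A hedged sketch in Python follows; the access token is a placeholder, and the base URL (https://webexapis.com/v1) and endpoint names are assumptions to verify against the current developer documentation.

import requests

TOKEN = "YOUR_ACCESS_TOKEN"          # personal access, bot, or OAuth token
HEADERS = {"Authorization": "Bearer " + TOKEN, "Content-Type": "application/json"}
BASE = "https://webexapis.com/v1"

# List the rooms (spaces) that the authenticated user belongs to.
rooms = requests.get(BASE + "/rooms", headers=HEADERS, timeout=10).json()["items"]

# Post a message into the first room.
requests.post(BASE + "/messages", headers=HEADERS, timeout=10,
              json={"roomId": rooms[0]["id"], "text": "Hello from the API!"})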

Cisco Webex Meetings

Cisco Webex Meetings APIs provide developers the ability to include Webex Meetings functionality into their custom
applications:

 REST API

 XML API: Complete suite of Cisco Webex functions such as scheduling, user management, recording management,
attendee management, reports, and more

 URL API: An HTML form-based set of functions that allow developers to offer basic Cisco Webex Meetings
functionality from their custom web portals

 Teleconference service provider API: Provides external teleconference service providers to integrate their
teleconferencing service with Cisco Webex Meetings

Cisco Webex Meetings XML API enables you to manage XML services, which can be used to set up and manage Cisco
Webex Meetings collaboration services. Cisco Webex Meetings services, such as creating a user, modifying information
about a user, creating a meeting, exchanging files, and so on, are implemented via a set of operations. You can use these
Cisco Webex Meeting operations to facilitate specific aspects of Cisco Webex Meetings online sessions. You can deploy
these Cisco Webex Meetings services through the exchange of well-formed XML documents between your application or
service, and a Cisco Webex Meetings Server. To access Cisco Webex Meetings services through the exchange of XML
documents, you must design your application to send XML request documents to the Cisco Webex Meetings XML server
and to process the received responses.

With Cisco Webex XML API, you can manage the following services (entities):

 User service: Authenticate, create, delete, and update users.

 General session service: Create contacts, get API versions, get session information, and list open sessions.

 Meeting service: Create meetings, create teleconference sessions, delete meetings, and get meetings.

 Training session service: Check lab availability, create training sessions, delete training sessions, and get lab
information.

 Event session service: Create and delete an event, send an invitation email, and upload an event image.

 Support session service: Create a support session and get feedback information.

 History service: List the event attendee and event session history.

 Site service: Get site, list time zone, and set site.

 Meeting attendee service: Create meeting attendee, get enrollment information, and list meeting attendees.

 Meeting type service: Get and list meeting type.

Cisco Webex Meetings URL API is based on HTTP and provides a convenient mechanism to offer browser-based external
hooks into the Cisco Webex Meetings services. It is typically used for enterprise portal integrations supporting basic
interactions such as single sign-on (SSO), scheduling meetings, starting and joining simple meetings, and inviting attendees
and presenters.

With Cisco Webex URL API, you can manage the following services (entities):

 Manage user accounts: Create a new user account, edit an existing user account, and activate and deactivate user
accounts.

 Login/logout: Use an authentication server to log in or log out from Cisco Webex Meetings hosted websites.

 Managing meetings: Schedule a meeting, edit a meeting, list all scheduled meetings, list all open meetings, and
join an open meeting.

 Modifying my Webex meetings page: Modify user information and manage a user contact list.

 Using attendee registration forms: Create a registration form and determine current required, optional, or do-not-
display settings for a registration page.

 Managing attendee lists: Add attendees to a list of invited users and remove attendees from a list of invited users.

 Playing back a recorded event: Allow an attendee to get a list of recorded events for playback.

 Reporting: Send email notifications with attendee information.

The Cisco Webex Teleconference Service Provider (TSP) API allows TSPs to integrate their audio-conferencing service with Cisco
Webex. Cisco Webex TSP partners can offer Cisco Webex services to their customer base while retaining the audio-conferencing
business and the customer relationship.

Cisco Webex Devices

Cisco Webex Devices expose the Experience API (xAPI), which allows an integrator to programmatically invoke commands
and query the status of devices that run the Cisco Collaboration Endpoint software and RoomOS.

The xAPI enables users to customize and extend the touch interface through the use of in-room controls. Users can
personalize the user interface and, for example, create custom interactions with peripherals in a meeting room.

The behavior of Webex Devices can be further customized by creating macros. A macro is a small JavaScript program,
created by the user, which runs natively on the Webex device. Macros can register and react to any event that is exposed
on the API of the device—for example, a system event, status change, or configuration change. If you combine the use of
macros with custom user interface panels, you can augment the user interface with local functionality—for example,
speed dial buttons.

The xAPI supports these protocols:

 Serial port

 SSH

 HTTP

 WebSocket

The xAPI supports these request formats:

 Text

 XML

 JSON
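As a brief illustration of the HTTP option, the following Python sketch queries device status through the getxml endpoint and
invokes a command through putxml. The device address, credentials, and the relaxed certificate verification are lab
assumptions.

    import requests
    from requests.auth import HTTPBasicAuth

    DEVICE = "10.0.0.10"                      # Webex device address (hypothetical)
    AUTH = HTTPBasicAuth("admin", "password")  # local device credentials (placeholder)

    # Query device status over the HTTP interface of the xAPI.
    status = requests.get(f"https://{DEVICE}/getxml",
                          params={"location": "/Status/SystemUnit"},
                          auth=AUTH, verify=False)
    print(status.text)  # XML document describing the system unit status

    # Invoke a command by posting an XML document to the putxml endpoint.
    command = """<Command>
      <Audio>
        <Volume>
          <Set><Level>60</Level></Set>
        </Volume>
      </Audio>
    </Command>"""
    result = requests.post(f"https://{DEVICE}/putxml", data=command,
                           headers={"Content-Type": "text/xml"},
                           auth=AUTH, verify=False)
    print(result.text)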

Cisco Security Platforms 

The Cisco security portfolio includes several network-, server-, or endpoint-based security solutions. Many of these
solutions also provide APIs, making security solutions programmable and adaptive.

Cisco provides a broad security portfolio of products to address different threats in different security categories:

 Network security

 Endpoint security

 Identity management

Cisco Firepower

Cisco Firepower Next-Generation Firewall (NGFW) is the Cisco next-generation network security appliance, offering
services such as URL filtering, application control and visibility, advanced malware protection, and so on. The Cisco
Firepower NGFW also offers Cisco Intrusion Prevention System (IPS) services in a single agile platform. The appliance runs a
unified image of Cisco Firepower Threat Defense (FTD) and Cisco ASA adaptive security appliance code to offer all the
NGFW services and IPS services from Cisco Firepower, plus features such as Network Address Translation (NAT), VPNs, and
so on from the ASA appliance.

Managed by using the central Cisco Firepower Management Center (FMC), cloud-based Cisco Defense Orchestrator, or the
local Cisco Firepower Device Manager (FDM), the NGFW provides threat protection through real-time contextual
awareness, full-stack visibility, and intelligent security automation. Cisco FMC provides deep analytic capabilities and API
integration that is not provided by Cisco FDM. Both Cisco Firepower NGFW and Cisco FMC can be physical or virtual
appliances. Cisco Firepower NGFW Virtual (NGFWv) and Cisco FMC are available for Amazon Web Services (AWS), Kernel-
based Virtual Machine (KVM), Microsoft Azure, and VMware vSphere environments. Physical or virtual FMCs can manage
virtual or physical Cisco Firepower NGFW appliances.

The operating system that is running on Cisco Firepower NGFW is Cisco FTD. Cisco FMC can be used to manage the Cisco
FTD system. The Cisco Firepower System requires both the managed device (Cisco Firepower NGFW) that sees the traffic
that you are monitoring, and Cisco FMC. Cisco FMC is a network appliance that provides a centralized management
console and database repository for your Cisco Firepower deployment. Cisco FMC aggregates and correlates network
traffic information and performance data, assessing the impact of events on particular hosts. You can monitor the
information that your devices report, and assess and control the overall activity that occurs on your network. Cisco FMC
also controls the network management features on your devices—switching, routing, NAT, VPN, and so on.

You need to register Cisco Firepower NGFW with Cisco FMC. After the communication channel is set up between Cisco
FMC and Cisco Firepower NGFW, basic information is exchanged between the two appliances, as shown in the figure. If
you change the policy configuration on Cisco FMC for a managed device, that policy change does not take effect until you
deploy that policy (known as a deploy or apply). You can deploy the policy immediately or later.

Cisco AMP and Cisco Threat Grid

Cisco Advanced Malware Protection (AMP) accelerates security response by providing visibility and clarity to previously
unknown artifacts, and by seeing a threat once and blocking it everywhere.
The power of the Cisco AMP architecture is the integration between components. Any component in the AMP architecture
can submit a file to Cisco Threat Grid for dynamic analysis. Files can also be submitted manually by users. When Threat
Grid issues a conviction, the Cisco AMP Cloud informs all AMP components worldwide of the conviction.

Cisco Umbrella

Cisco AMP for Endpoints, or AMP4E, uses a lightweight connector that can be installed on Windows, Linux, Mac, and Android
devices (and Clarity for iOS is available on Apple iOS devices). You will not always have the option of deploying AMP4E on
end devices, though.

Consider a Bring Your Own Device (BYOD) scenario, guest devices, and IoT devices. For some devices, such as guest or
some BYOD devices, you will not have administrative access to install and manage endpoint security. Other devices, such
as manufacturing or medical equipment, might not have sufficient resources to run an endpoint protection agent or might
not be using an x86-compatible operating system.

In several cases, you will have to adapt your endpoint security strategy to include devices that you cannot manage, which
presents another challenge that you as a security practitioner must solve for your organization: "How do you manage
endpoint security for an endpoint that you cannot manage?"

Cisco Umbrella is a Domain Name System (DNS)-based security mechanism that provides a common layer of endpoint
security for both on-premises and off-premises (roaming) users. Cisco Umbrella on-premises deployments do not require an
agent to be installed on the endpoint.
The firewall is good for blocking inbound threats, but consider threats that originate inside the network. These threats
cannot be stopped by perimeter security because they are already behind the firewall.

Cisco Umbrella can block client connections at the application layer, regardless of network connection or perimeter
security, by preventing the client from resolving DNS for the remote destination. If the client cannot resolve DNS for the
remote destination, it cannot establish a connection and download the malware. Cisco Umbrella can be used to prevent
connections to malicious websites or to websites that violate corporate security policy.

Cisco ISE

Cisco Identity Services Engine (ISE) is a network access control and policy enforcement platform that allows you to provide
highly secure network access to users and devices. It helps you gain visibility into what is happening in your network, such
as who is connected and which applications are installed and running. It also shares vital contextual data, such as user and
device identities, threats, and vulnerabilities, with integrated solutions from Cisco technology partners so that you can
identify, contain, and remediate threats.

The following figure illustrates the five primary functions that are provided by Cisco ISE.

The Cisco ISE ERS API allows ISE database management.

Cisco ISE provides five primary functions to help secure your network and enable your business:

 ISE Visibility includes device posturing and profiling of endpoints on the network. An endpoint could be a mobile
device or a server, or it could be a printer, a robotic arm in a manufacturing plant, or an IP-connected door lock.

 Guest Access Management includes Hotspot, Sponsored Guest, and Guest Self-Registration.

 Device Administration provides a centralized authentication, authorization, and accounting (AAA) server for
accessing network devices such as routers, switches, load balancers, firewalls, and other devices that implement
TACACS and RADIUS.

 Access Control provides discretionary access to network resources or enclaves, based on highly customizable
criteria.

 Secure BYOD and Enterprise Mobility includes capabilities of end users to self-register their mobile or BYOD
endpoints.

The Cisco ISE External RESTful Services (ERS) API is designed to allow external clients to perform create, read, update, and
delete (CRUD) operations on ISE resources. ERS is based on the HTTP protocol and REST methodology.
Cisco Security APIs 

Most Cisco security products provide APIs to enable users to enhance the provided functionality or embed the products
into larger automated solutions, reducing the need for manual actions.

Typical security use cases involve various security products:

 Cisco Firepower for firewalling

 Cisco Umbrella for DNS-layer security

 Cisco AMP for endpoint protection

 Cisco Threat Grid for threat intelligence

Cisco Firepower

There are three Cisco Firepower devices that provide an API:

 Cisco FTD: A REST API that automates configuration management and execution of operational tasks on FTD
devices.

 Cisco FMC: Context-rich APIs for exchange of network and endpoint security event data and host information.

 FXOS Firepower Chassis Manager: REST API for the Firepower 9300 Chassis. It includes both configuration and
monitoring APIs for platform and Firepower chassis services.

Cisco FTD API Use Cases

You can use the Cisco FTD API to perform these tasks:

 Configure policy and settings:

o Manage policy objects

o Manage firewall policy

o Manage device settings

 Configure logging and cloud integrations:

o Configure syslog logging (IPS, file/malware, connection)

o Configure connections to the cloud


o Configure smart licensing

 Automate your device configuration:

o Ansible automation

o Python or programming language of your choice

o Industry-standard OAuth authentication

Cisco FTD API

The following figure illustrates the structure of a REST call toward Cisco FTD.

Cisco FTD has a REST explorer to help you identify the proper REST syntax.
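The following Python sketch shows the general structure of such a call against the on-box (FDM-managed) FTD REST API:
request an OAuth token, then query a resource. The device address, credentials, and the /api/fdm/latest version path are
assumptions that depend on your FTD release.

    import requests

    FTD = "https://10.0.0.1"          # FTD device managed through FDM (hypothetical address)
    BASE = f"{FTD}/api/fdm/latest"    # API version path depends on the FTD release

    # Obtain an OAuth token (password grant), then call a resource endpoint.
    token_reply = requests.post(f"{BASE}/fdm/token",
                                json={"grant_type": "password",
                                      "username": "admin",
                                      "password": "Admin123"},
                                verify=False)
    token = token_reply.json()["access_token"]

    networks = requests.get(f"{BASE}/object/networks",
                            headers={"Authorization": f"Bearer {token}"},
                            verify=False)
    for obj in networks.json().get("items", []):
        print(obj["name"], obj.get("value"))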

Cisco Umbrella REST API

The Cisco Umbrella Enforcement API is designed to give technology partners the ability to send security events from their
platform, service, or appliance within a mutual customer environment to the Umbrella cloud for enforcement. You may
also list the domains and delete individual domains from the list. All received events will be segmented by the mutual
customer and used for future enforcement.

To successfully integrate, you must format your events to meet the public format. This API is a REST API and follows
RESTful principles in implementation.

The API is restricted to HTTPS and is hosted and available to make requests at https://s-platform.api.opendns.com.

All responses are served as JSON and authentication is required for all requests. The API makes extensive use of query
strings to retrieve and filter resources.

Umbrella REST API Example

To gather the list of domains already added to the shared customer domain list, run a GET request against the domains
endpoint of the API.
The reply is provided using JSON.
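A minimal Python sketch of this request is shown below; the customer key is a placeholder, and the exact fields in the JSON
reply may vary by API version.

    import requests

    # Customer key issued for the Umbrella Enforcement API (placeholder value).
    CUSTOMER_KEY = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"

    response = requests.get("https://s-platform.api.opendns.com/1.0/domains",
                            params={"customerKey": CUSTOMER_KEY})
    response.raise_for_status()

    # The reply is JSON; field names may vary by API version.
    for entry in response.json().get("data", []):
        print(entry)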

Cisco ISE ERS API

This online SDK is an application programming reference that explains the operations, parameters, and responses of the
Cisco ISE ERS API. ERS is designed to allow external clients to perform CRUD operations on ISE resources. ERS is based on
the HTTP protocol and REST methodology. Here, you will learn how to easily enable ERS and start building your own
applications. Note: ERS is limited to Active Directory operations only for ISE-PIC nodes.

First, you need to enable ERS so that it starts listening for API requests over Secure Sockets Layer (SSL) on port 9060. Clients trying
to access this port without enabling ERS first will face a timeout from the server. Therefore, the first requirement is to
enable ERS from the ISE admin user interface. Go to Administration > Settings > ERS Settings and select the Enable ERS for
Read/Write option.

The figure illustrates the use of the REST API to access a resource.
Refer to the documentation for the list of available resources given the version of Cisco ISE.

Filtering is done in the query string by joining the attribute name, an operator, and the value with the dot character (for
example, filter=name.CONTAINS.guest); multiple filters can be combined in a single request. The supported operators are:

 EQ: Equals

 NEQ: Not Equals

 GT: Greater Than

 LT: Less Than

 STARTSW: Starts With

 NSTARTSW: Not Starts With

 ENDSW: Ends With

 NENDSW: Not Ends With

 CONTAINS: Contains

 NCONTAINS: Not Contains

Example:
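A minimal Python sketch of a filtered ERS request is shown below; the ISE hostname and credentials are placeholders, and the
internaluser resource is used purely as an illustration.

    import requests

    ISE = "https://ise.example.com:9060"   # ISE node with ERS enabled (hypothetical)
    AUTH = ("ers-admin", "password")       # ERS admin credentials (placeholder)

    headers = {"Accept": "application/json"}

    # Retrieve internal users whose name starts with "guest"
    # (filter syntax: <attribute>.<OPERATOR>.<value>).
    response = requests.get(f"{ISE}/ers/config/internaluser",
                            params={"filter": "name.STARTSW.guest"},
                            headers=headers, auth=AUTH, verify=False)
    response.raise_for_status()

    for resource in response.json()["SearchResult"]["resources"]:
        print(resource["id"], resource["name"])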

Cisco AMP Use Cases

Cisco AMP for Endpoints provides comprehensive protection against the most advanced attacks. It prevents breaches and
blocks malware at the point of entry, then detects, contains, and remediates advanced threats that evade front-line
defenses and get inside your network.

Cisco AMP results can be one of the following:

 Prevent: Strengthen defenses using global threat intelligence, and block both fileless and file-based malware in
real time.

 Detect: Continuously monitor and record all file activity to detect malware.

 Respond: Accelerate investigations and automatically remediate malware across PCs, Macs, Linux, servers, and
mobile devices (Android and iOS).

The following lists identify the main Cisco AMP API use cases.

Ingest Events:

 Store events in third-party tools


 Archive extended event history

 Correlate against other logs

Search Environment:

 Find where a file has been

 Determine if a file was executed

 Capture command-line arguments

Basic Management:

 Create groups

 Move computers

 Manage file lists

Cisco AMP API

The Cisco AMP for Endpoints API uses REST to provide programmable access.

There are many different actions that you can perform.

Cisco AMP API Example: Events

The figure illustrates how to retrieve a limited number of events.

The reply is provided using JSON.
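A minimal Python sketch of such a request is shown below; the API client ID, API key, and the North America cloud hostname
are assumptions to adjust for your deployment.

    import requests

    # AMP for Endpoints API credentials (third-party API client ID and key; placeholders).
    CLIENT_ID = "YOUR_CLIENT_ID"
    API_KEY = "YOUR_API_KEY"

    # North America cloud; other regions use different hostnames.
    response = requests.get("https://api.amp.cisco.com/v1/events",
                            params={"limit": 5},
                            auth=(CLIENT_ID, API_KEY))
    response.raise_for_status()

    for event in response.json()["data"]:
        print(event["date"], event["event_type"])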


Cisco Threat Grid API Use Cases

Threat intelligence is evidence-based information about a threat that can help protect against it in some way—block it
entirely, mitigate the risk, identify its perpetrator, or even just to detect it. In the context of cyberthreat intelligence, it is
usually things like lists of indicators of compromise, or known bad actors, and so on.

Common use cases for threat intelligence include the following:

 Prevention (for example: Block known bad guy)

 Detection (for example: Wait, why is someone on the network talking to known bad guy?)

 Response (for example: Block known bad guy, and see what you need to clean up)

 Identify adversary tools, techniques, and procedures:

1. Tools (What does bad guy use?)

2. Techniques (How do they use it?)

3. Procedures (What does their higher-level operation look like?)

 Attribution (for example: Who is this bad guy?)

The following lists identify the main Threat Grid API use cases.

Sample Analysis:

 Submit files for analysis

 Parse results for indicators

 Take action in the environment

Context and Enrichment:

 Associate indicators with a malware family

 Link a payload delivery to a Word document

 Correlate host and network indicators

Threat Hunting:

 Find naming patterns in files or domains

 Map out infrastructure used in a campaign

 Collect command-line arguments used by malware

Cisco Threat Grid API

Cisco Threat Grid uses REST to provide programmable access.


Threat Grid API call construction:

 Root: https://panacea.threatgrid.com/api/

 API version: "v2" or "v3"

 API endpoint (rest of path)

 Parameters (query string):

1. API key: from account details page

2. Other parameters as available and required

Threat Grid API responses use the JSON format, with additional output options for curated feeds (CSV, Snort, STIX).

Cisco Threat Grid Example: Domain Feeds

Indicator of compromise (IOC) feeds are lists of paths, artifact checksums, URLs, IPs, registry keys, and so on that have
been associated with an IOC.

The figure illustrates how to retrieve the IOC feeds for domains providing the time range of interest.
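The following Python sketch illustrates the idea; the /api/v2/iocs/feeds/domains path, the API key, and the date range are
assumptions that should be confirmed against the Threat Grid API documentation for your version.

    import requests

    API_KEY = "your-threatgrid-api-key"   # from the account details page (placeholder)

    # IOC feed of domains observed during a time range; the exact feed path
    # (/api/v2/iocs/feeds/domains here) should be confirmed in the API documentation.
    response = requests.get("https://panacea.threatgrid.com/api/v2/iocs/feeds/domains",
                            params={"api_key": API_KEY,
                                    "after": "2021-01-01",
                                    "before": "2021-01-02"})
    response.raise_for_status()
    print(response.json())
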
Cisco Network Management Platforms in Cloud 

Another way of managing your infrastructure is to make use of cloud-based management tools, which further simplify the
solution by not even requiring you to maintain the management product. There are two Cisco products that use cloud-
based management: Cisco Meraki and Cisco Software-Defined WAN (SD-WAN).

Cisco Meraki advantages:

 Provides speed and agility

 Simplified infrastructure management

 Well suited when using or planning to introduce WAN circuits such as cable, DSL, broadband, or 4G/LTE

 Fits any SD-WAN project with low to medium routing complexity

 Mobile device management

Cisco SD-WAN advantages:

 More powerful options

 Traffic segmentation

 Leverages existing Cisco infrastructure

 Suited for deployments with high routing complexity

Cisco Meraki

Cisco Meraki is a complete cloud-managed networking solution:

 Wireless, switching, security, and mobile device management (MDM), centrally managed over the web

 Built from the ground up for cloud management

 Integrated hardware, software, and cloud services


The Cisco Meraki Dashboard API is an interface for software to interact directly with the Meraki cloud platform and Meraki
managed devices. The API contains a set of tools that are known as endpoints for building software and applications that
communicate with the Meraki Dashboard for use cases such as provisioning, bulk configuration changes, monitoring, and
role-based access controls. The Dashboard API is a modern, RESTful API using HTTPS requests to a URL and JSON as a
human-readable format. The Dashboard API is an open-ended tool that can be used for many purposes. Here are some
examples of how it is used today by Meraki customers:

 Add new organizations, administrators, networks, devices, VLANs, and Service Set Identifiers (SSIDs).

 Provision thousands of new sites in minutes with an automation script.

 Automatically onboard and off-board teleworker devices for new employees.

 Build your own dashboard for store managers, field technicians, or unique use cases.
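As a simple illustration, the following Python sketch lists organizations and their networks using the v1 Dashboard API; the
API key is a placeholder, and the Bearer authorization header assumes API v1.

    import requests

    API_KEY = "YOUR_MERAKI_API_KEY"   # Dashboard API key (placeholder)
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # List the organizations the API key has access to.
    orgs = requests.get("https://api.meraki.com/api/v1/organizations", headers=headers)
    orgs.raise_for_status()
    for org in orgs.json():
        print(org["id"], org["name"])

    # List the networks in the first organization.
    org_id = orgs.json()[0]["id"]
    networks = requests.get(
        f"https://api.meraki.com/api/v1/organizations/{org_id}/networks",
        headers=headers)
    for net in networks.json():
        print(net["id"], net["name"])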

Cisco SD-WAN

Through the Cisco SD-WAN vManage console, you can quickly establish an SD-WAN overlay fabric to connect data centers,
branches, campuses, and colocation facilities to improve network speed, security, and efficiency. After setting templates
and policies, Cisco SD-WAN analytics identifies connectivity and contextual issues to determine optimal paths for users to
get to their destination, regardless of their connectivity.

Whether hosted in the cloud or on premises, Cisco vBond and vSmart orchestration and controller platforms authenticate
and provision network infrastructure, verifying that the devices connecting to your SD-WAN are authorized. Once
connected, SD-WAN platforms find the best path to bring users closer to the applications they need, managing overlay
routing efficiency, adjusting in real time to reflect policy updates, and handling key exchanges in Cisco full-mesh, encrypted
delivery.

Cisco SD-WAN supports third-party API integration, allowing for even greater simplicity, customization, and automation in
day-to-day operations. In addition, Cisco SD-WAN includes the common routing protocols that are critical for all enterprise
SD-WAN deployments, such as Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), Virtual Router
Redundancy Protocol (VRRP), and IP version 6 (IPv6).
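As an illustration of such an integration, the following Python sketch authenticates to vManage and retrieves the device
inventory; the vManage address and credentials are placeholders, and newer releases additionally require an XSRF token for
write operations.

    import requests

    VMANAGE = "https://vmanage.example.com"   # vManage address (hypothetical)

    session = requests.Session()
    session.verify = False

    # Authenticate; vManage sets a session cookie on success.
    session.post(f"{VMANAGE}/j_security_check",
                 data={"j_username": "admin", "j_password": "admin"})

    # Retrieve the device inventory of the SD-WAN fabric.
    devices = session.get(f"{VMANAGE}/dataservice/device")
    for device in devices.json().get("data", []):
        print(device.get("host-name"), device.get("system-ip"))
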
Through a single dashboard called vManage, Cisco SD-WAN provides:

 Transport independence: Guaranteeing zero network downtime, Cisco SD-WAN automates application flexibility
over multiple connections, such as the Internet, Multiprotocol Label Switching (MPLS), and wireless 4G Long-Term
Evolution (LTE).

 Network services: Rich networking and security services are delivered with a few simple clicks. WAN optimization,
cloud security, firewalling, IPS, and URL filtering can be deployed wherever needed across the SD-WAN fabric from
a single location.

 Endpoint flexibility: Cisco SD-WAN can simplify connectivity across branches, campuses, data centers, or cloud
environments, extending the SD-WAN fabric wherever you need it to go. Whether physical or virtual, various Cisco
SD-WAN platforms give you unparalleled choice, ensuring that your specific business needs are met.

Section 6: Summary Challenge


Section 7: Employing Programmability on Cisco Platforms

Introduction 
Web-based user interfaces and CLIs are traditional management mechanisms that are designed for use by human
operators. Some tasks are more suited to one interface, while other tasks are achieved more easily with the other
interface. In the same way, understanding different application programming interfaces (APIs) of Cisco devices and
platforms will allow you to choose the optimal API for your use case.

Automating Cisco Network Operations


Since the very beginning of computer networking, network configuration practices have centered on a device-by-device
manual configuration methodology. In the early years, this approach did not pose much of a problem, but more recently,
this method for configuring the hundreds if not thousands of devices on a network has been a stumbling block for efficient
and speedy application service delivery. As the scale increases, it becomes more likely that any changes that are
implemented by humans are going to have a higher chance of misconfigurations, whether simple typos, applying a new
change to the wrong device, or even missing a device altogether. Performing repetitive tasks that demand a high degree of
consistency unfortunately introduces a risk for error. And the number of changes humans are making is increasing,
because there are more demands from the business to deploy more applications at a faster rate than ever before.

The solution lies in automation. The economic forces of automation are manifested in the network domain via network
programmability and software-defined networking (SDN) concepts. Network programmability helps reduce operating
expenditures (OPEX), which presents a very significant portion of the overall network costs and speeds up service delivery
by automating tasks that are typically done via the CLI. The CLI is simply not the optimal approach in large-scale
automation.

Network automation plays a very crucial part in simplifying day-to-day operations and maintenance. With automating
everyday network tasks and functions, and with managing repetitive processes, human errors are reduced and network
service availability is improved. The operations teams can respond and handle trouble tickets faster and can even act
proactively. Network automation also lowers costs by giving operations teams the ability to migrate mundane and
repetitive tasks to automated processes.

A pivotal part of automating any network operations task is the ability to manage the network programmatically by using
APIs.

The value of network programmability is best illustrated by its use cases, which suggest where network programmability
solutions can be applied. Several of the most common tasks are:

 Device provisioning: Device provisioning is likely one of the first things that comes to the minds of engineers when
they think about network automation. Device provisioning is simply configuring network devices more efficiently,
faster, and with fewer errors because human interaction with each network device is decreased.

 Device software management: Controlling the download and deployment of software updates is a relatively
simple task, but it can be time-consuming and may fall prey to human error. Many automated tools have been
created to address this issue, but they can lag behind customer requirements. A simple network programmability
solution for device software management is beneficial in many environments.

 Compliance checks: Network automation methods allow the unique ability to quickly audit large groups of
network devices for configuration errors and automatically make the appropriate corrections with built-in
regression tests.

 Reporting: Automation decreases the manual effort that is needed to extract information and coordinate data
from disparate information sources to create meaningful and human-readable reports.

 Troubleshooting: Network automation makes troubleshooting easier by making configuration analysis and real-
time error checking very fast and simple even with many network devices.

 Data collection and telemetry: A common part of effectively maintaining a network is collecting data from
network devices and telemetry on network behavior. Even the way data is collected is changing, because many
devices now can push data (and stream) off box in real time in contrast to being polled in regular time intervals.

A very common real-world scenario, which can greatly benefit from network automation, is collecting data from the
network for describing the network devices themselves, or providing information about the hosts and clients connected to
the network. Cisco offers comprehensive APIs on its management platforms. Utilizing these APIs for collecting information
about the network simplifies operational workflows and enables integration with business and operational support
systems.

Automating Device Operational Life Cycle


Network engineers are tasked with deploying, operating, and monitoring the network as efficiently, securely, and reliably
as possible.

 The first challenge is getting a device onto the network. This part is commonly referred to as "Day 0" device
onboarding. The key requirement is to get the device connected with as little effort as possible. Depending on the
operational mode and security requirements, either a small subset of the configuration or a "full" initial
configuration will be deployed.

 Once the device is provisioned, the configuration of the device needs to be maintained and upgraded. "Day 1"
device configuration management is responsible for the ongoing configuration. Changes to the configuration need
to be reliable, efficient, and auditable.

 The next challenge is monitoring the network and measuring performance. "Day 2" device monitoring provides a
view of how the network is operating. Based on this view, some changes may be required.

 Lastly, optimizations to the device are made, such as extensions to the device capabilities, enhancements to
operational robustness, or patches and software upgrades. Referred to as "Day n," it implies an iterative approach,
looping back through Days 0, 1, and 2.

Network Programmability Options

There are different network programmability options available today, as presented in the figure.
On the left side of the figure is how network management applications and monitoring tools access the device today, using
CLI and Simple Network Management Protocol (SNMP), NetFlow, and so on. This approach has evolved in different
directions.

When SDN started to evolve, Cisco and other vendors began offering vendor-specific APIs to program and control the
existing network devices. As you can see in the figure (option 1), the control and data planes are still in the same box, the
same as in a traditional approach. An example would be a Nexus API (NX-API) interface that is used in Cisco Nexus Data
Center Switches. Later, open APIs (Network Configuration Protocol [NETCONF], Representational State Transfer
Configuration Protocol [RESTCONF], and so on.) were added to vendor-specific APIs.

Option 2a shows a pure SDN environment where a control plane has been separated to a controller. OpenFlow was the
first protocol for communication between the controller and the data plane, but it required a hardware upgrade to
understand OpenFlow commands. Today, there are various APIs that can be used. NETCONF, for example, is one of the
most popular network configuration protocols, but others can be used (for example, Path Computation Element Protocol
[PCEP] and Interface to the Routing System [I2RS]).

The limitations of a pure SDN approach have led to a hybrid approach (option 2b), which today is used by most vendors,
including Cisco. A control plane is still needed on the network devices so that it can independently run some network
protocols (routing, for example). Also, the controller uses an abstraction layer between the applications and the network
devices. Applications can communicate with a controller in a programmable way and achieve automation through it.

Option 3 represents an overlay approach, which is commonly using the Virtual Extensible LAN (VXLAN) protocol. The main
idea is that the existing devices are kept intact and that a virtual network using overlays is created. Automation (and
programmability) is achieved on top of the virtual network. Examples of such an approach are Cisco SD-Access and Cisco
SD-WAN solutions.

Cisco IOS XE Device-Level APIs 

Network automation is the process of automating the configuring, managing, testing, deploying, and operating of physical
and virtual devices within a network. With everyday network tasks and functions automated and repetitive processes
controlled and managed automatically, network service availability improves. Any type of network can use network
automation. Hardware- and software-based solutions enable data centers, service providers, and enterprises to implement
network automation to improve efficiency, reduce human error, and lower operating expenses. The basis for network
automation lies in network operating systems, such as Cisco IOS XE Software for enterprise environments and Cisco Nexus
Operating System (NX-OS) for data center environments, and in their support for different automation tools.

Cisco IOS XE is an intent-based network operating system:

 Optimized for enterprise networks

 Wired and wireless access, aggregation, core, and WAN


 Open and flexible

 Standards-based APIs

Cisco IOS XE Software addresses these needs as the single operating system for enterprise switching, routing, wired, and
wireless access. It delivers a transformational level of automation and programmability, reducing business and network
complexity.

A trend in the networking industry is to focus on business goals and applications. Intent-based networking (IBN)
transforms a hardware-centric, manual network into a controller-led network that captures business intent and translates
it into policies that can be automated and applied consistently across the network. The goal is for the network to
continuously monitor and adjust network performance to help ensure desired business outcomes. IBN builds on SDN
principles, transforming from a hardware-centric and manual approach to designing and operating networks that are
software-centric and fully automated and that add context, learning, and assurance capabilities.

IBN captures business intent and uses analytics, machine learning, and automation to align the network continuously and
dynamically to changing business needs. That means continuously applying and assuring application performance
requirements and automating user, security, compliance, and IT operations policies across the whole network. Cisco IOS XE
Software represents a basis for intent-based networking. Its standards-based programmable interfaces automate network
operations, which provide a way for intent to be translated into configuration. It also provides deep visibility into user,
application, and device behaviors, which gives contextual data that is needed for assurance.

Cisco IOS XE Software is the single operating system for enterprise switching, routing, wired, and wireless access. It
provides open, standards-based programmable interfaces to automate network operations and brings deep visibility into
users, applications, and device behaviors. Automating device life-cycle management through Cisco IOS XE
programmability, as shown in the following diagram, assists network engineers to reduce business and network
complexity.
Some examples of the benefits that Cisco IOS XE programmability brings to different stages of the network device life cycle
are as follows:

 Network device onboarding is a manual and time-consuming operation, requiring highly skilled engineering
personnel. Onboarding is tedious and repetitive in nature. Automated device onboarding with programmable
workflows lowers the cost and time required to provision a network device, eliminating errors and allowing the
use of lower-level engineering personnel and associated resources. With automated device onboarding,
individuals with no networking knowledge or expertise can install the physical device. Initial configurations are
deployed programmatically, providing a simple, secure, and integrated option to ease new branch or campus
rollouts. Cisco IOS XE Software supports the following capabilities to automate device onboarding: Zero Touch
Provisioning (ZTP), Cisco Network Plug and Play (PnP), and Preboot Execution Environment (PXE).

 Network devices located in dynamic environments, such as the cloud, need to avoid being the bottleneck in service
provisioning. Rapid and repeatable service delivery is crucial in this environment. The Cisco IOS XE device APIs,
which include NETCONF, RESTCONF, and the gRPC Network Management Interface (gNMI), enable automated
configuration changes. Network devices running on Cisco IOS XE Software support the automation of configuration
for multiple devices across the network using data models. Data models are developed in a standard, industry-
defined language called "Yet Another Next Generation" (YANG), which can define configuration and state
information of a network. Model-based interfaces coexist and interoperate with existing device CLI, syslog, and
SNMP interfaces. Also, Python provides a means to programmatically interact with a device. Python scripting of
Cisco IOS XE devices is available on-box on some platforms and releases, but often, it is more efficient or desirable
to run scripts off the box.

 Events such as link flaps, power supply failure, and configuration drift are difficult to act on in a consistent manner.
The Cisco IOS XE Guest Shell feature assists IT organizations to develop an entirely new suite of applications that
help with existing operational challenges. Python scripts running on Guest Shell can trigger alerts to IT network
operation centers (NOCs) when new critical events are detected, automatically create service tickets, and mitigate
the issues by dynamically applying configurations. For example, oil and gas producers have many remote locations
where Internet or WAN access bandwidth is very limited and expensive. Data collected from drilling operations
needs to be centralized in a corporate data center. Cisco IOS XE Application Hosting provides a solution to host the
compression applications at the edge of the network so that the data can be compressed to consume less
bandwidth. Model-driven telemetry, a new approach for network monitoring in which operational, configuration,
event, and flow data are streamed from network devices continuously using a push model, provides near real-time
access to operational statistics. Cisco IOS XE streaming telemetry can push data off the device to an external collector at a
much higher frequency and more efficiently than polling, and it also supports on-change streaming.

Cisco IOS XE Operational Approaches

There are three operational approaches to programmatically integrate a network element:


 Via a controller such as Cisco Digital Network Architecture (DNA) Center

 Via a configuration management tool (that is, DevOps)

 Directly to the device

Each comes with various benefits and trade-offs. However, Cisco IOS XE Software has been designed to enable all three
integration options.

Through controller integration, programmatic control of the underlying network elements is abstracted via an
intermediary to simplify automation efforts. Controllers are purpose-built, exposing only a subset of the element features
and functionality of the underlying network, as the underlying capabilities are abstracted through the controller.
Controllers usually expose REST APIs for northbound integration. Their southbound interfaces may not be based on open
protocols, potentially limiting integration with the Cisco IOS XE standards-based network configuration interfaces. Some
controllers, such as Cisco DNA Center, are designed to provide closed-loop feedback, allowing the controller to dynamically
adjust network configurations based on changing network context.

Configuration management tools enable DevOps workflows and access to the full feature set of the device. In a DevOps
workflow configuration, changes are "modeled" and run through comprehensive validation in a simulation environment
prior to deployment. Configuration management tools are used not only to manage network devices but also to manage
compute and application resources. Their input is in the form of a simplified data model to provide human readability.
Their southbound interfaces may not use standard-based network configuration interfaces. Configuration management
tools do not provide closed-loop feedback. Instead, configuration changes are extensively tested and validated in a
simulation environment before being pushed into production. Validation testing and configuration pushes are
orchestrated through continuous integration tool chains.

Do it yourself (DIY) or direct integration, as the name implies, involves a direct programmatic control of each network
element. While manageable with a few devices, this approach is more challenging in networks with more devices. Cisco
IOS XE devices support RFC 6241 NETCONF and RFC 8040 RESTCONF network configuration protocols, providing the option
of XML-based or JavaScript Object Notation (JSON)-based integration. With direct integration, configuration changes are
made on a single set of devices, then replicated across the entire network. Direct integration is helpful for monitoring
network devices and ensuring the changes made did not create undesired behaviors.

Cisco IOS XE APIs

Model-driven programmability of Cisco devices allows you to automate the configuration and control of those devices or
even use orchestrators to provide end-to-end service delivery (for example, in cloud computing). Data modeling provides a
programmatic and standards-based method of writing configurations to network devices, replacing the process of manual
configuration. Although configuration using a CLI may be more human-friendly, automating the configuration using data
models results in better scalability.
To manipulate and automate on the data models supported on a network device, a network management protocol needs
to be used between the application client (such as an SDN controller) and the network devices. Cisco IOS XE devices
support multiple protocols such as NETCONF, RESTCONF, and gNMI via a corresponding programmable interface agent for
these protocols.

When a request from client is received via NETCONF, RESTCONF, or gNMI protocol, the corresponding programmable
interface agent converts the request into an abstract message object that is distributed to the underlying model
infrastructure. The appropriate model is selected, and the request is passed to it for processing. The model infrastructure
executes the request (read or write) on the device data store returning the results to the originating agent for response
transmission back to the requesting client.

NETCONF on Cisco IOS XE Software

There has been a NETCONF interface on the Cisco IOS XE platform for quite some time, but prior versions of software
supported sending only CLI commands over NETCONF. In most recent versions of NETCONF, the configuration is modeled
using YANG, and as such, the NETCONF interface is now extremely robust in that native XML objects can be sent and
received from the device.

NETCONF on IOS XE Software is based on an XML representation of YANG models.

NETCONF on IOS XE Software supports NETCONF 1.1 and common operations such as <get-config>, <get>, <copy-config>,
<commit>, <validate>, <lock>, <unlock>, <edit-config>, and <delete-config>.
Cisco IOS XE Software supports two data stores—running and candidate. It also supports locking of the data stores as well
as configuration rollback.

To get started with NETCONF on IOS XE devices, you must prepare the client and the server. The client must support
NETCONF over SSH because IOS XE Software supports NETCONF as an SSH subsystem. On a Linux system, the default SSH
configuration works just fine.

NETCONF over SSH is initiated by the exchange of a hello message between the client and the server. After the initial
exchange, the client sends XML requests to which the server responds with XML responses.

After the NETCONF server sends their capabilities, you (the XML client) must send a hello message with the capabilities
that the client supports.

Once you send your capabilities, you can start sending XML documents to perform the equivalent of show and
configuration commands, as seen in the following examples. By connecting directly to the NETCONF server, you can test
XML documents for collecting data and for configuration changes, ensuring that they are properly formatted and adhere
to the YANG models supported by the IOS XE device before you start using them in network applications.
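The following is a minimal sketch using the Python ncclient library to retrieve interface configuration from the running data
store; the device address and credentials are lab assumptions.

    from ncclient import manager

    # Connection details for a Cisco IOS XE device (hypothetical lab values).
    DEVICE = {
        "host": "10.0.0.1",
        "port": 830,
        "username": "admin",
        "password": "admin",
        "hostkey_verify": False,
    }

    # Subtree filter based on the Cisco IOS XE native YANG model.
    INTERFACE_FILTER = """
    <filter>
      <native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
        <interface/>
      </native>
    </filter>
    """

    with manager.connect(**DEVICE) as m:
        # <get-config> against the running data store, constrained by the filter.
        reply = m.get_config(source="running", filter=INTERFACE_FILTER)
        print(reply.xml)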

RESTCONF on Cisco IOS XE Software

RESTCONF exposes a web-based interface in a consistent fashion in that it is just like any other REST API. The only
differences are that you need to use specific headers and that the URL and data is driven by YANG models.

Two of the common headers are Content-Type and Accept when working with RESTful APIs. They are often set to
application/json or application/xml. While there are a few variations supported for Cisco IOS XE Software, the common
ones are:

 application/vnd.yang.data+json for JSON

 application/vnd.yang.data+xml for XML

The Cisco IOS XE implementation supports both running and candidate data stores. Similar to NETCONF, RESTCONF
supports editing the candidate data store and committing the changes.

Constructing a URL is no different for RESTCONF as it is for common REST APIs. You need to understand the methods, entry
points, resources, queries, and so on that are supported. For RESTCONF, they are determined by YANG models. The full
running configuration of the IOS XE device is modeled.

When using Python, working with RESTCONF is no different from working with any other REST API.
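For example, the following sketch reads and then modifies the device hostname through RESTCONF. The device address,
credentials, and the RFC 8040-style media type are assumptions; some earlier releases use the vendor-specific media types
noted above instead.

    import requests

    DEVICE = "https://10.0.0.1"        # IOS XE device with RESTCONF enabled (hypothetical)
    AUTH = ("admin", "admin")

    # RFC 8040-style media type; earlier releases may use application/vnd.yang.data+json.
    headers = {"Accept": "application/yang-data+json",
               "Content-Type": "application/yang-data+json"}

    url = f"{DEVICE}/restconf/data/Cisco-IOS-XE-native:native/hostname"

    # Read the current hostname leaf.
    response = requests.get(url, headers=headers, auth=AUTH, verify=False)
    print(response.json())

    # Change the hostname; the same URL structure is used with a different method.
    requests.patch(url, headers=headers, auth=AUTH, verify=False,
                   json={"Cisco-IOS-XE-native:hostname": "csr-demo"})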

gNMI on Cisco IOS XE Software

The gRPC Network Management Interface (gNMI), developed by Google, provides the mechanism to install, manipulate,
and delete the configuration of network devices and also to view operational data. The content provided through gNMI
can be modeled using YANG. gNMI uses JSON for encoding data in its content layer.

Also developed by Google, gRPC is a remote procedure call for low-latency, scalable distributions with mobile clients
communicating to a cloud server. gRPC carries gNMI and provides the means to formulate and transmit data and
operation requests.

Python on Cisco IOS XE Software

Cisco IOS XE Software supports both on-box and off-box Python. On-box refers to the location of the Python interpreter
that is running within the IOS XE Guest Shell. Off-box represents an externally hosted Python interpreter. Using an off-box
Python interpreter requires authentication to access the device, while on-box Python removes this burden as Guest Shell is
preauthenticated.
Scripts can be executed externally from the switch:

 Configuration management automation

 Telemetry and operational data

 Controller use cases including Cisco DNA Center and Cisco Network PnP

Scripts can be executed locally on the switch within the Guest Shell:

 Provisioning automation

 Automating Embedded Event Manager

 Application development

 IoT

Executing Python code directly on the device is referred to as on-box Python. On-box Python can be executed interactively,
or scripts can be run within the Guest Shell. A Guest Shell container is a built-in Linux Container (LXC) running on Cisco IOS
XE systems with Python version 2 preinstalled. The Python interpreter within Guest Shell can be run in interactive mode,
which takes commands and runs them as they are entered. Additional Python libraries such as "requests" and "ncclient"
can be installed.
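The following is a minimal on-box sketch, assuming the cli module that Guest Shell provides on Cisco IOS XE; the interface
and description used are purely illustrative.

    # Run inside the IOS XE Guest Shell ("guestshell run python script.py").
    # The cli module is provided by Guest Shell for on-box CLI interaction.
    from cli import cli, configure

    # Read operational data with a show command.
    version = cli("show version")
    print(version.splitlines()[0])

    # Apply a small configuration change (illustrative only).
    results = configure(["interface Loopback100",
                         "description Configured by on-box Python"])
    for result in results:
        print(result)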

Note

Guest Shell is an execution space running within an LXC, designed to run Linux applications including Python. It also
supports Day 0 device onboarding. The Guest Shell environment is intended for tools, Linux utilities, and manageability
rather than network routing and switching functions. Guest Shell shares the kernel with the host Cisco IOS XE system.
While users can access the Linux shell and update scripts and software packages in the container root file system, users
within the Guest Shell cannot modify the host file system or processes. Decoupling the execution space from the native
host system allows customization of the Linux environment to suit the needs of the applications without affecting the host
system.

Note

Each Cisco IOS XE device has hardware resources available for Guest Shell. These specifications depend on the available
hardware. Cisco IOS XE Software running on a Cisco Catalyst 9000 switch reserves dedicated memory and CPU resources
for Guest Shell, while it does not reserve any storage, which is shared with all the other Cisco IOS XE processes. Guest Shell
uses internal storage by default, but it will use external Solid State Drive (SSD) storage if available.

The following are some use cases for on-box Python:

 Provisioning automation: During the provisioning automation process, a Python script is downloaded to the Cisco
IOS XE devices and executed within the Guest Shell. This script completes the initial provisioning of the device.

 Embedded Event Manager (EEM): A Python script can be executed in response to an event detected by EEM. For
example, if a critical interface goes down, then a Python script can be executed to alert the administrator or return
the interface to an operational state.

 Application Development: Ability to develop and execute on-box Python scripts.


 Internet of Things (IoT): Cisco IOS XE devices power Cisco Catalyst industrial switches and routers that are
purpose-built for IoT environments, and on-box Python can be used to build and manage IoT applications at the
edge.

Often, it is more efficient or desirable to run scripts off the box. Off-box scripts might be run on a server or even on your
laptop. Cisco IOS XE Software provides several interfaces for Python scripters.

The following are some use cases for off-box Python:

 Configuration management automation: Configuration management is the practice of defining performance,
functional, and physical attributes of a product and then ensuring the consistency of the configuration of a system
throughout its life. With these tools, you can define and enforce configuration related to system-level operations
(that is, authentication, logging, image), interface-level configuration (such as VLAN, QoS, and security), routing
configurations (such as Open Shortest Path First [OSPF] or Border Gateway Protocol [BGP] specifications), and
much more.

 Telemetry and operational data: Model-driven telemetry is a new approach for network monitoring in which data
is streamed from network devices continuously, using a push model, and provides near real-time access to
operational statistics. Applications can subscribe to specific data items they need by using standard-based YANG
data models over NETCONF-YANG. Cisco IOS XE streaming telemetry can push data off the device to an external collector
at a much higher frequency and more efficiently than polling, and it also allows on-change streaming.

 Controller use cases including Cisco DNA and Cisco Network PnP: The function of controllers is to abstract and
centralize administration of individual network devices, reducing or eliminating the need for device-by-device
configuration and management.

 IoT: Cisco IOS XE powers Cisco Catalyst industrial switches and routers that are purpose-built for IoT environments,
and Python can be used to build and manage IoT applications at the edge.

Cisco NX-OS Device-Level APIs 

Cisco NX-OS Software running on the Cisco Nexus switches is a data center-class operating system built with modularity,
resiliency, and serviceability. Focused on the requirements of the data center, Cisco NX-OS provides a feature set that
fulfills the switching and storage networking needs of the data center. Cisco NX-OS simplifies the data center operating environment and provides a
unified operating system that is designed to run all areas of the data center network, including storage, virtualization, and
Layer 3 network protocols. Cisco NX-OS contains open source software (OSS) and commercial technologies that provide
automation, orchestration, programmability, monitoring, and compliance support.
Some examples of the benefits in the different stages of the network device life cycle that uses Cisco NX-OS
programmability are as follows:

 Automated device onboarding automates the process of installing and upgrading software images and installing
configuration files on Cisco Nexus devices that are being deployed in the network for the first time. It reduces the
manual tasks required to scale the network capacity. Network automation that started at Day 0 with PowerOn
Auto Provisioning (POAP) and PXE can be extended by tools like Puppet, Chef, and Ansible.

 Day 1 provisioning covers incremental and ongoing configuration changes. During this phase, flexible configuration
management and automation allow changes to be accomplished in an efficient way. Management of endpoints
and segmentation are examples. The division between Day 0 and Day 1 configuration can be very fluid, because
the initial configuration can span from simple management access to an extensive configuration to enable a
network device to participate in a data center network fabric. Using APIs and configuration agents, operators can
affect configuration changes in a more programmatic way. Cisco provides various tools and frameworks to enable
developers to automate and program Cisco Nexus devices, including NX-API REST and CLI interfaces, and also open
interfaces like NETCONF, RESTCONF, and gRPC. Also, Python provides a means to programmatically interact with a
device. Python scripting of Cisco NX-OS devices is available on-box on some platforms and releases, but
oftentimes, it is more efficient to run scripts off the box. Using tools for day-to-day management, monitoring and
configuration changes, and IT automation with dynamic configuration management can optimize the work of
infrastructure operations teams and at the same time mitigate the risk of error-prone keyboard input.

 At Day 2, visibility and monitoring become extremely important. In most environments, Day 1 and Day 2
operations run in parallel and extend through the entire life cycle of the network device, and appropriate tooling is
necessary to achieve these tasks efficiently. Open Cisco NX-OS supports a wide range of third-party telemetry
applications and can support a pull or push model for telemetry data to be extracted from the devices.

 Configuration management tools also are part of the larger process of device life-cycle management—from
planning and implementing to operations, upgrades, and eventual device decommissioning. Also, Guest Shell, a
specialized container that is prebuilt and installed within the system, allows customers and third-party application
developers to add custom functionality directly on the device in a secure, isolated environment.

With Cisco NX-OS, the device functions in the Unified Fabric mode to provide network connectivity with programmatic
automation functions. Cisco NX-OS contains OSS and commercial technologies that provide automation, orchestration,
programmability, monitoring, and compliance support. Cisco NX-OS supports several capabilities to aid programmability.
The Cisco Open NX-OS Model-Driven Programmability (MDP) architecture is an object-oriented software framework aimed
at development of management systems. The MDP object model is an abstract representation of the capabilities,
configuration, and operational state of elements and features on a Cisco Nexus switch.

The open Cisco NX-OS Software is designed to allow administrators to manage a switch in the same way as a Linux device.

The open Cisco NX-OS Software stack addresses several functional areas to meet the needs of a DevOps-driven automation
and programmability framework:

 NX-API REST interacts with network elements through RESTful API calls. It allows for a data model-driven approach
to network configuration and management. Both the NX-API CLI and NX-API REST back-end use the NGINX HTTP
server.

 NX-API CLI provides the ability to embed Cisco NX-OS CLI commands in a structured data format (JSON or XML) for
execution on the switch via an HTTP or HTTPS transport. The data returned from the calls will also be formatted in
JSON or XML, making it easy to parse the data with modern programming languages.

 Traditional tools like CLI and protocols such as SNMP.

 Modern transport protocols like NETCONF, RESTCONF, and gRPC that use respective agents, which provide secure
transport and southbound interface to the Data Management Engine (DME).

 Cisco NX-OS SDK (NX-SDK) is a C++ abstraction and plug-in library layer that streamlines access to infrastructure
for automation and custom application creation, such as generating custom CLIs, syslogs, event and error
managers, interapplication communication, and route manager. You can use C++, Python, or Go for application
development with NX-SDK.

Cisco NX-OS has a comprehensive number of both native and open YANG models that allow you to manage the rich
feature set that Cisco NX-OS provides. Data models provide a structured and well-defined base that facilitates
programmatic interaction with Cisco NX-OS devices. The list of supported models includes native, OpenConfig, and IETF
models.

The Cisco Open NX-OS MDP architecture is an object-oriented software framework aimed at development of management
systems. The MDP object model is an abstract representation of the capabilities, configuration, and operational state of
elements and features on a Cisco Nexus switch. The object model consists of various classes that represent different
functions and their attributes on the switch. The data management framework consists of the DME, clients of the DME
("northbound" interface), and back-end processes and applications.
The DME holds the repository for the state of the managed system in the Management Information Tree (MIT). The MIT
manages and maintains the whole hierarchical tree of objects on the switch, with each object representing the
configuration, operational status, accompanying statistics and associated faults for a switch function. The MIT is the single
source of truth for the configuration and operational status of Cisco NX-OS features and elements. Object instances, also
referred to as managed objects, are stored in the MIT in a hierarchical tree:

Cisco Open NX-OS MDP is object-oriented, and everything within the model is represented as an object. Objects store the
configuration or operational state for open Cisco NX-OS features associated with the data model. Within the model,
objects can be created with references to other objects. References may exist among various networking constructs, such as
interfaces and VLANs, and capture the relationships between these components. Trunked VLAN interfaces are an example of
related, hierarchical objects.

Programmability Options on Cisco Nexus Devices

Cisco Nexus devices offer several programmability options:

 Onboard Python

 EEM

 NX-API CLI and REST

 NX-SDK: Currently Nexus 9000 Series only

 NX-Toolkit: Currently Nexus 9000 and 3000 Series only

 Guest Shell and Bash

 Configuration Management Tools (Puppet, Chef, and Ansible)

Onboard Python

Cisco Nexus switches have various APIs to enhance off-box scripting capabilities, but you can also run Python scripts directly on each switch. A native Python execution engine lets you use the interactive Python interpreter directly from the switch command line, and you can run standalone scripts from the command line as well.

Onboard Python characteristics include:

 Use it for event-based activity, where polling may not be possible.

 Python on Cisco Nexus is useful for automating tasks:

1. Running CLI commands

2. Generating syslogs

3. Processing information and acting on it quickly

 Integrate with EEM and the Scheduler to get data from the device and act on it.
Note

The Cisco Nexus switches support all features available in Python v2.7.5.

To enter the interactive Python interpreter, simply type in the word python and press Enter.

The Python environment on each Cisco Nexus Series switch comes with a preinstalled Python module called cisco. You can
use standard helper functions on this module to see a list of its available methods and attributes and how to use them.

There are three core APIs, or methods, that are available to use within the Cisco Python module: cli(), clip(), and clid(), as
shown in the following table:
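As a brief illustration, the following is a minimal sketch of an on-box script using these methods. The import path and the output key name are assumptions that may vary by platform and NX-OS release.

# Minimal sketch of onboard Python on a Cisco Nexus switch.
# The import location of the helpers and the key names in the parsed
# output are assumptions that vary by platform and NX-OS release.
import json
from cli import cli, clid

# cli() returns the raw text output of a show command
print(cli('show version'))

# clid() returns the same data as a JSON string for programmatic use
version = json.loads(clid('show version'))
print(version.get('kickstart_ver_str'))  # key name may vary by release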
Embedded Event Manager

EEM monitors events that occur on your device and takes action to recover or troubleshoot these events, based on your
configuration. EEM has the following characteristics:

 A subsystem to automate tasks and customize the device behavior

1. Event > Notification > Action

 Many built-in system policies

 Useful for collecting more data and debugging issues, especially when unpredictable

 Can be scheduled at a specific time or intervals

EEM consists of three major components:

 Event statements: Events to monitor from another Cisco NX-OS component that might require some action,
workaround, or notification.

 Action statements: An action that EEM can take, such as sending an email, or disabling an interface, to recover
from an event.

 Policies: An event paired with one or more actions to troubleshoot or recover from the event.
The EEM feature can be used to repeatedly execute scripts on a given schedule, possibly to examine interface counters or
cyclic redundancy check (CRC) errors, or you can even use EEM to dynamically execute a script when a given CLI command
is executed, as shown in the figure.

NX-API CLI and REST

On Cisco Nexus devices, CLI commands traditionally run only on the device itself, yet the CLI remains the most widely used way to manage data center networks.

NX-API improves the accessibility of these CLIs by making them available outside of the switch by using HTTP and HTTPS. You can use NX-API as an extension to the existing Cisco Nexus CLI. NX-API CLI is a good starting point for network engineers because it still makes use of familiar commands: it sends commands to the device, wrapped in HTTP or HTTPS, and receives structured data back.

You have the ability to send show, configuration, and Linux commands directly to the switches using NX-API CLI.
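For example, the following minimal sketch sends a show command to the NX-API CLI endpoint with the Python requests library. The switch address, credentials, and the /ins endpoint path are placeholders for this illustration, and NX-API must already be enabled on the switch.

# Minimal sketch: send a CLI show command to a Nexus switch through NX-API CLI.
# The hostname, credentials, and endpoint path are placeholders.
import requests

url = 'https://10.0.0.1/ins'
payload = {
    'ins_api': {
        'version': '1.0',
        'type': 'cli_show',        # structured output for show commands
        'chunk': '0',
        'sid': '1',
        'input': 'show version',
        'output_format': 'json',
    }
}

response = requests.post(url, json=payload,
                         headers={'Content-Type': 'application/json'},
                         auth=('admin', 'password'), verify=False)
response.raise_for_status()
body = response.json()['ins_api']['outputs']['output']['body']
print(body)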

NX-SDK

Cisco NX-SDK is a simple, flexible, and powerful tool for developing custom, off-the-box, third-party applications that access Cisco Nexus infrastructure functionality. Such applications can run inside or outside the Cisco Nexus switches and behave natively, just like any other Cisco Nexus application. NX-SDK is well suited for do-it-yourself automation, letting you develop custom applications that fit your needs while decoupling application development from Cisco Nexus releases. It offers functionality such as generating custom CLIs, syslogs, event management, high availability, route management, streaming telemetry, and more.

NX-SDK provides an abstraction and plug-in library layer that decouples the application from the underlying infrastructure being used. This makes it easy to change the infrastructure without affecting the applications, which is why NX-SDK is also used for developing native Cisco applications.

NX-SDK is built using C++. Bindings for other languages (Python, Go, Ruby, and so on) are also provided, so custom applications can be developed and built in the language of your choice. Starting with NX-SDK v2.0.0, NX-SDK applications can run anywhere (inside or outside of Cisco NX-OS).

NX-Toolkit

The NX-Toolkit is a set of Python libraries that allow basic configuration of the Cisco Nexus switch. It is intended to allow users to quickly begin using the REST API and accelerate the learning curve necessary to begin using the switch.

Guest Shell and Bash

Cisco Nexus devices support direct Bourne Again Shell (Bash) access. With Bash, you can access the underlying Linux
system on the device and manage the system.

In addition to the NX-OS CLI and Bash access on the underlying Linux environment, the Cisco Nexus devices support access
to the Guest Shell, a decoupled execution space running within LXC. With the Guest Shell, you can add software packages
and update libraries as needed without affecting the host system software.

Cisco NX-OS supports direct Linux shell access and LXCs. With Linux shell access, you can access the underlying Linux
system on a Cisco NX-OS switch and manage the underlying system. You can also use LXCs to securely install your own
software and to enhance the capabilities of the Cisco NX-OS switch. For example, you can install bare-metal provisioning
tools like Cobbler on an NX-OS device to enable automatic provisioning of bare-metal servers from the top-of-rack switch.
Cisco NX-OS devices support Docker functionality within the Bash shell and container orchestration with Kubernetes.
Cisco Controller APIs 

As networks grow, traditional management becomes more challenging. Advanced systems are too complex to be easily managed by traditional tools. To meet these challenges, systems can utilize APIs for configuration and monitoring within the system and with external management utilities.

The SDN architecture differs slightly from the architecture of traditional networks. It is composed of three stacked layers: the infrastructure, control, and application layers.

 Infrastructure layer: Contains network elements (physical and virtual devices that deal with customer traffic).

 Control layer: Represents the core layer of the SDN architecture. It contains SDN controllers, which provide
centralized control of the devices in the infrastructure layer.

 Application layer: Contains SDN applications that communicate their network requirements toward the controller.

The SDN controller uses APIs to communicate with the application and infrastructure layers. Communication with the
infrastructure layer is defined with the southbound interfaces, while services are offered to the application layer using the
northbound interfaces.
Solutions that contain an SDN controller:

 Cisco Meraki

 Cisco DNA Center

 Cisco Application Centric Infrastructure (ACI)

 Cisco Software-Defined WAN (SD-WAN)

 Cisco Network Services Orchestrator (NSO)

Cisco Meraki

Cisco Meraki supports a large portfolio of networking devices, from routing and switching to smart security cameras and industry-leading wireless infrastructure. All the devices connect over a secure connection to a cloud-based platform, from which they can be managed. The Cisco Meraki cloud-based management platform has a robust set of APIs that can be used to monitor and manage the infrastructure.

The Cisco Meraki cloud-based platform offers customers a single user interface to manage all their Meraki-supported devices. When a new device from the Meraki product family is installed on the network, it requires minimal to no effort to connect it to the management platform. The web user interface has far fewer configurable options than the Cisco Catalyst product family, making it simpler to deploy and manage. After the device comes online, it initiates a Secure Sockets Layer (SSL) tunnel to the management platform, letting it know that it is available to be claimed and configured. From there, the device is managed via the cloud-based management platform.

Cisco Meraki offers a wide portfolio of API capabilities. It offers a REST API with which you can:

 Retrieve data about the Meraki infrastructure

 Configure Meraki devices

Cisco Meraki offers five primary types of APIs:

 Dashboard API

 Scanning API

 MV Sense API

 External Captive Portal API

 Wireless Health API

Dashboard API provides methods to interact directly with the Meraki cloud platform and Meraki-managed devices. Some of its use cases are as follows (a sample API call is shown after the list):

 Add new organizations, administrators, networks, devices, VLANs, and more

 Onboard and off-board employees

 Build your dashboard for store managers or field technicians
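A minimal sketch of one such Dashboard API call is shown below, listing the networks in an organization with the Python requests library. The API key, organization ID, and the v0-style base URL and request header are placeholder assumptions for this example.

# Minimal sketch: list the networks in a Meraki organization.
# The API key, organization ID, and base URL are placeholders; newer
# Dashboard API versions authenticate with a Bearer token instead.
import requests

BASE_URL = 'https://api.meraki.com/api/v0'
API_KEY = 'your-api-key-here'
ORG_ID = '123456'

response = requests.get(BASE_URL + '/organizations/' + ORG_ID + '/networks',
                        headers={'X-Cisco-Meraki-API-Key': API_KEY})
response.raise_for_status()

for network in response.json():
    print(network['id'], network['name'])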

Scanning API enables Cisco Meraki users to detect and aggregate real-time data for custom applications. The Scanning API delivers data in real time from the Meraki cloud and can be used to detect Wi-Fi and Bluetooth Low Energy (BLE) devices. The elements are exported via an HTTP POST of JSON data to a specified destination server.

MV Sense API provides a collection of endpoints to interact with Meraki cameras, zones, and analytics.

External Captive Portal API extends the power of the built-in Meraki splash page functionality by providing complete control of the content and authentication process. It can redirect login and authentication to client-provided servers, so you can use your own authentication, authorization, and accounting (AAA) servers.

Wireless Health API allows you to retrieve wireless health information such as connection health, connection failures, and
network latency.
Cisco Meraki API Example

Thanks to widely available smart devices equipped with Wi-Fi and BLE, Cisco Meraki wireless access points can detect clients and provide location analytics to report on user foot traffic behavior. This capability can be especially useful in multisite retail or enterprise deployments where administrators or departments beyond IT wish to learn more about trends and user engagement. Coupled with traditional reporting from the Wi-Fi network on client devices, applications, and websites, Cisco Meraki provides a holistic view of online and offline user traffic. In addition to the built-in location analytics view, the Scanning API enables Cisco Meraki customers to detect and aggregate real-time data for custom applications.

Use case: Meraki cloud estimates the location of the client.

The Scanning API delivers data in real time from the Meraki cloud and can be used to detect Wi-Fi (associated and
nonassociated) and BLE devices in real time. The elements are exported via an HTTP POST of JSON data to a specified
destination server. The raw data is aggregated from all access points within a network on the Meraki cloud and sent
directly from the cloud to an organization data warehouse or business intelligence center. The JSON posts occur
frequently, typically batched every minute for each access point.

Using the physical placement of the access points from the Map & Floorplan on the Dashboard, the Meraki cloud
estimates the location of the client. The geolocation coordinates (latitude, longitude) and X,Y location data accuracy can
vary based on several factors and should be considered a best-effort estimate. Access point placement, environmental
conditions, and client device orientation can influence X,Y estimation; experimentation can help improve the accuracy of
results or determine a maximum acceptable uncertainty for data points.

Cisco DNA Center

Cisco DNA Center is the network management and command center for Cisco DNA, an intent-based network for the
enterprise. It supports the expression of business intent for network use cases, such as base automation capabilities in the
enterprise network.

Cisco DNA Center provides open programmability APIs for policy-based management and security through a single
controller. It provides an abstraction of the network, which leads to simplification of the management of network services.
This approach automates what has typically been a tedious manual configuration.

The Analytics and Assurance features of Cisco DNA Center provide end-to-end visibility into the network with full context
through data and insights. Cisco customers and partners can use the Cisco DNA Center platform to create applications that
use the native capabilities of Cisco DNA Center. You can use Cisco DNA Center Intent APIs, Integration Flows, Events and
Notification Services, and the optional Cisco DNA Center Multivendor SDK to enhance the overall network experience by
optimizing end-to-end IT processes, reducing total cost of ownership (TCO), and developing new value-added networks.

Cisco DNA Center offers the following REST APIs:

 Intent API

 Software Image Management (SWIM) API

 PnP API

 Operational tools

 Authentication API

 Integration API

Intent API is a northbound REST API that exposes specific capabilities of the Cisco DNA Center platform. It provides policy-
based abstraction of business intent, allowing you to focus on an outcome to achieve instead of struggling with the
mechanisms that implement that outcome. The RESTful Cisco DNA Center Intent API lets you use HTTPS verbs (GET, POST,
PUT, and DELETE) and JSON syntax to discover and control your network. Intent API can be divided into multiple groups:

 Site Hierarchy Intent API: Retrieves site hierarchy with network health information.

 Network Health Intent API: Retrieves network devices by category, with health information on each of the devices
returned. Additional request paths retrieve physical and virtual topologies.

 Network Device Detail Intent API: Retrieves detailed information about devices retrieved by time stamp, MAC
address, universally unique identifier (UUID), name, or nwDeviceName. Additional REST request paths allow you to
retrieve additional information, such as functional capabilities, interfaces, device configuration, certificate
validation status, values of specified fields, modules, and VLAN data associated with specified interfaces. You can
also add, delete, update, or synchronize specified devices.

 Client Health Intent API: Returns overall client health organized as wired and wireless categories. It returns
detailed information about a single client.

SWIM API enables you to retrieve information about available software images, import images into Cisco DNA Center,
distribute images to network devices, and activate images that have been installed on devices.

PnP API enables you to manage PnP projects, settings, workflows, virtual accounts, and PnP-managed devices.

Operational tools enable you to configure and manage CLI templates, discover network devices, configure network
settings, and trace paths through the network. Operational tools can be divided into these groups:

 Command Runner API: Enables you to retrieve the keywords of all CLIs that Command Runner accepts, and it lets
you run read-only commands on a network device to retrieve its real-time configuration.

 Network Discovery API: Provides programmatic access to the Discovery functionality of Cisco DNA Center. You can
use this API to create, update, delete, and manage discoveries and their associated credentials. You can also use
this API to retrieve the network devices that a particular discovery job acquired.

 Template Programmer API: Enables you to perform create, read, update, and delete (CRUD) operations on
templates and projects that the template programmer uses to facilitate design and provisioning workflows in Cisco
DNA Center. You can use this API to create, view, edit, delete, and version templates. You can also add interactive
commands to templates, check the contents of templates for syntactical errors or blacklisted commands, deploy
templates, and check the status of template deployments.

 Path Trace API: Simplifies resolution of network performance issues by tracing application paths through the
network and providing statistics for each hop along the path. You can use this API to analyze the flow between two
endpoints on the network, retrieve the results of a previous flow analysis, summarize all stored flow analyses, or
delete a saved flow analysis.

 Task API: Queries Cisco DNA Center for more information about a specific task that your RESTful request initiated.
Often, a network action may take several seconds or minutes to complete, so Cisco DNA Center completes most
tasks asynchronously. You can use the Task API to determine whether a task completed successfully; if so, you can
then retrieve more information from the task itself, such as a list of devices provisioned.

 File API: Enables you to retrieve files from Cisco DNA Center; for example, you might use this API to get a software
image or a digital certificate from Cisco DNA Center.
All Cisco DNA Center platform REST requests require proof of identity. The Authentication API generates a security token
that encapsulates the privileges of an authenticated REST caller. Cisco DNA Center authorizes each requested operation
according to the access privileges associated with the security token that accompanies the request.

The role of the Integration API is to allow Cisco DNA Center to connect to other systems. Integration capabilities are part
of westbound interfaces. To meet the need to scale and accelerate operations in modern data centers, IT operators
require intelligent, end-to-end workflows built with open APIs. The Cisco DNA Center platform provides mechanisms for
integrating Cisco DNA Assurance workflows and data with third-party IT Service Management (ITSM) solutions.

Cisco DNA Center: List Devices

The Cisco DNA Center Intent API is a northbound REST API that provides a consistently structured way to access Cisco DNA
Center workflows for automation and assurance. The Intent API is hierarchically structured into functional domains and
subdomains. To retrieve a list of devices on the network, you need to examine the Devices subdomain of the Know Your
Network domain.

The Devices subdomain enables API clients to perform CRUD operations on the devices in the network. A wide range of
parameters for filtering the response are supported, such as hostname, management IP address, MAC address, and
software version.

The complete Intent API reference is available at https://developer.cisco.com/docs/dna-center/api/1-3-1-x/. The documentation provides example requests with detailed descriptions, available parameters, response codes, and examples of response data.

One of the most common scenarios is to gather information about all devices connected to the network that are, in this case, managed by Cisco DNA Center. You might need that information to get a general overview of the operational status of the devices, or you might require a detailed view of the performance of any device or client over time and from any application context. Another example could be a software image upgrade. In all these cases, you first need a list of all devices.

Use case: Operational status of every network device connected to Cisco DNA Center

The following figure shows a response model of the network-device resource.


The following code is an example of a GET request to the Intent API. The API call is made to the /network-device resource,
and JSON-formatted data with a list of devices and their properties is returned. Properties include detailed information
about devices by time stamp, MAC address, name, software type and version, or management IP address.
Additional REST request paths would allow you to retrieve additional information, such as functional capabilities,
interfaces, device configuration, certificate validation status, values of specified fields, modules, and VLAN data associated
with specified interfaces. You can also add, delete, update, or synchronize specified devices.
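A minimal sketch of such a request is shown below. It first obtains a token from the Authentication API and then queries the network-device resource; the hostname, credentials, and exact URL paths are assumptions that may differ between Cisco DNA Center releases.

# Minimal sketch: authenticate to Cisco DNA Center and list network devices.
# Hostname, credentials, and URL paths are placeholders/assumptions.
import requests

DNAC = 'https://dnac.example.com'

# 1. Obtain a security token from the Authentication API (basic auth)
token = requests.post(DNAC + '/dna/system/api/v1/auth/token',
                      auth=('admin', 'password'), verify=False).json()['Token']

# 2. Call the Intent API network-device resource with the token
headers = {'X-Auth-Token': token}
devices = requests.get(DNAC + '/dna/intent/api/v1/network-device',
                       headers=headers, verify=False).json()['response']

for device in devices:
    print(device['hostname'], device['managementIpAddress'],
          device['softwareVersion'])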

Cisco ACI

Cisco has taken a foundational approach to building a programmable network infrastructure with Cisco ACI. The ACI
infrastructure operates as a single system at the fabric level, controlled by the centralized Cisco Application Policy
Infrastructure Controller (APIC). With this approach, the data center network as a whole is tied together cohesively and
treated as an intelligent transport system for the applications that support business. On network devices that are part of
this fabric, the operating systems have been written to support this system view and provide an architecture for
programmability at the foundation.

Cisco ACI provides programmability for the data center fabric as a whole, including hardware and software devices, by
using integrated protocol and device packages with scripts for third-party devices:

 Built-in programmability in both software and hardware

 Entire data center switching infrastructure can be programmed as a single fabric

 Declarative model enforces desired state

Instead of opening up a subset of the network functionality through programmatic interfaces, like previous SDN solutions,
the entire ACI infrastructure is opened up for programmatic access. It is achieved by providing access to the Cisco ACI
object model. The ACI object model represents the complete configuration and runtime state of every software and
hardware component in the entire infrastructure. The object model is made available through standard REST API
interfaces, making it easy to access and manipulate the configuration and runtime state of the system.

The API accepts and returns HTTP or HTTPS messages that contain JSON or XML documents. You can use any programming
language to generate messages and JSON or XML documents that contain the API methods or managed object
descriptions. In addition to the standard REST interface, Cisco provides several open source tools or frameworks such as
ACI toolkit, Cobra (Python), ACIrb (Ruby), Puppet, and Ansible to automate and program the APIC. On top of the REST API
are a CLI and GUI for day-to-day administration.
At the top level, the Cisco ACI object model is based on promise theory, which provides a scalable control architecture with
autonomous objects responsible for implementing the desired state changes provided by the controller cluster. This
approach is more scalable than traditional top-down management systems, which require detailed knowledge of low-level
configurations and the current state. With promise theory, desired state changes are pushed down, and objects implement
the changes, returning faults when required.

Beneath this high-level concept is the core of Cisco ACI programmability: the object model. The model can be divided into
two major parts: logical and physical. Model-based frameworks provide an elegant way to represent data. The Cisco ACI
model provides comprehensive access to the underlying information model, providing policy abstraction, physical models,
and debugging and implementation data.

The logical model itself consists of the objects—configuration, policies, and runtime state—that can be manipulated and
the attributes of those objects. In the Cisco ACI framework, this model is known as the MIT. Each node in the MIT
represents a managed object or group of objects. These objects are organized in a hierarchical way, creating logical object
containers.

Cisco ACI offers different SDKs:

 Cobra—Cisco ACI Python SDK:

1. Python implementation of the API

2. Provides native bindings for all the REST functions

3. Objects in Cobra are one-to-one representations of the MIT

4. Provides methods for performing lookups and queries and for object creation, modification, and deletion

5. Offers full functionality, better suited for more complex queries and incorporating Layer 4-to-Layer 7
devices, initial fabric builds, and so on

 Cisco ACI toolkit:

1. Python libraries for basic configuration of Cisco APIC

2. Exposes a small subset of the Cisco APIC object model

3. Does not offer the full functionality of the Cobra SDK

 Cisco APIC REST to Python adapter:

1. Converter for XML and JSON code to Python

2. Often used with API Inspector

 ACIrb:

1. Ruby implementation of the Cisco APIC REST API

2. Enables direct manipulation of the MIT through the REST API, using standard Ruby language options

Besides REST API, ACI also offers a Cisco NX-OS style CLI to configure and manage ACI in a traditional CLI way. Moquery is a
CLI object model query tool, while Visore is an object store browser (GUI).

Note

When you perform a task in the Cisco APIC GUI, the GUI creates and sends internal API messages to the operating system
to execute the task. By using the API Inspector, which is a built-in tool of the Cisco APIC, you can view and copy these API
messages. An administrator can replicate these messages to automate key operations, or you can use the messages as
examples to develop external applications that will use the API.

Cisco ACI REST API Example

The Cisco ACI REST API is the interface into the MIT and allows manipulation of the object model state. The same REST
interface is used by the APIC CLI, GUI, and SDK, so that whenever information is displayed, it is read through the REST API,
and when configuration changes are made, they are written through the REST API. The REST API also provides an interface
through which other information can be retrieved, including statistics, faults, and audit events. It even provides a means of
subscribing to push-based event notification so that when a change occurs in the MIT, an event can be sent through a web
socket.

Standard REST methods are supported on the API, which includes POST, GET, and DELETE operations through HTTP. The
POST and DELETE methods are idempotent, meaning that there is no additional effect if they are called more than once
with the same input parameters. The GET method is nullipotent, meaning that it can be called zero or more times without
making any changes (or that it is a read-only operation).

Payloads to and from the REST interface can be encapsulated through either XML or JSON encoding. In the case of XML,
the encoding operation is simple: The element tag is the name of the package and class, and any properties of that object
are specified as attributes of that element. Containment is defined by creating child elements.

The object-based information model of Cisco ACI makes it a very good fit for REST interfaces: URLs and URIs map directly
to distinguished names identifying objects on the tree, and any data on the MIT can be described as a self-contained
structured text tree document encoded in XML or JSON. The objects have parent-child relationships that are identified
using distinguished names and properties, which are read and modified by a set of CRUD operations.

Objects can be accessed at their well-defined address, their REST URLs, using standard HTTP commands for retrieval and
manipulation of Cisco APIC object data.

Object instances are referred to as managed objects. Every managed object in the system can be identified by a unique
distinguished name. This approach allows the object to be referred to globally. In addition to its distinguished name, each
object can be referred to by its relative name. The relative name identifies an object relative to its parent object. The
distinguished name of any given object is derived from its own relative name that is appended to the distinguished name
of its parent object.

The distinguished name enables you to unambiguously identify a specific target object. The relative name identifies an
object from its siblings within the context of its parent object. The distinguished name contains a sequence of relative
names. Distinguished names are directly mapped to URLs. Either the relative name or the distinguished name can be used
to access an object, depending on the current location in the MIT. Because of the hierarchical nature of the tree and the
attribute system used to identify object classes, the tree can be queried in several ways for obtaining managed object
information. Queries can be performed on an object itself through its distinguished name, on a class of objects such as a
switch chassis, or on a tree level to discover all members of an object.

The URL format used can be represented as follows:

 http:// | https://: By default, only HTTPS is enabled.

 host: This component is the hostname or IP address of the APIC controller—for example, "APIC."

 :port: This component is the port number for communicating with the APIC controller if a nonstandard port is
configured.

 /api/: This component specifies that the message is directed to the API.

 mo | class: This component specifies the target of the operation as a managed object or an object class.

 DN: This component is the distinguished name of the targeted managed object—for example,
topology/pod-1/node-201.
 className: This component is the name of the targeted class, concatenated from the package and the class in the
context of the package; for example, dhcp:Client is dhcpClient. The className can be defined in the content of a
distinguished name—for example, topology/pod-1/node-1.

 json | xml: This component specifies the encoding format of the command or response body as JSON or XML.

 ?options: This component includes optional filters, selectors, or modifiers to a query. Multiple option statements
are joined by an ampersand ("&").

A Uniform Resource Identifier (URI) provides access to a target resource. The first two sections of the request URI specify
the protocol and access details of the APIC. The literal string /api indicates that the API is to be invoked. The next specifies
whether the operation is for a managed object or a class. Next, either the fully qualified distinguished name for object-
based queries or the package and class name for class-based queries is specified. The final mandatory part of the request
URI is the encoding format: either .xml or .json. The REST API supports a wide range of flexible filters, useful for narrowing
the scope of your search to allow information to be located more quickly. The filters themselves are appended as query
URI options, starting with a question mark ("?") and concatenated with an ampersand ("&"). Multiple conditions can be
joined to form complex filters.

With the capability to address and access an individual object or a class of objects with the REST URL, you can achieve
complete programmatic access to the entire object tree and the entire system.

One of the most common use cases for the API is monitoring the Cisco ACI fabric. Proactive monitoring is a very important part of the network administrator's job, but it is often neglected because resolving immediate problems in the network usually takes priority. However, because the APIC makes it incredibly easy to gather statistics and perform analyses, automated monitoring saves network administrators both time and frustration.

For example, if you want to learn about the details of all available fabric nodes (Cisco ACI leaf and spine switches),
including the state, IP address, and so on, you can use the following Cisco ACI REST API call using the class query:
Example: Find all the switch nodes in the fabric and their details
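A minimal sketch of this class query with the Python requests library might look as follows. The APIC address and credentials are placeholders; the aaaLogin call and the fabricNode class follow the standard APIC REST API conventions described above.

# Minimal sketch: log in to the APIC and query the fabricNode class to list
# all leaf and spine switches. Host and credentials are placeholders.
import requests

APIC = 'https://apic.example.com'
session = requests.Session()

# Authenticate; the APIC returns a session cookie used for subsequent calls
login = {'aaaUser': {'attributes': {'name': 'admin', 'pwd': 'password'}}}
session.post(APIC + '/api/aaaLogin.json', json=login, verify=False).raise_for_status()

# Class-level query: all objects of class fabricNode
response = session.get(APIC + '/api/class/fabricNode.json', verify=False)
for item in response.json()['imdata']:
    attributes = item['fabricNode']['attributes']
    print(attributes['name'], attributes['role'], attributes['address'])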
Besides manual querying, information can also be gathered automatically, and policies can be defined once and reused elsewhere, which minimizes human error and effort. As the following figure shows, when you start using automation to monitor the ACI fabric, you can build applications that execute different tasks when a specific change occurs in the network.

Use case: Proactive monitoring of the ACI fabric

As you have seen, the Cisco ACI platform has a robust REST API. Anything that you can do via the GUI, you can do via the
API. However, using the raw API can be tedious and cumbersome, because you need to know and configure low-level
details such as which HTTP verb is being used, the URI, headers, and encoding supported. In addition, you need to take
care of any error handling within any custom code you write when using a native REST API—for example, with the Python
requests module. To simplify application development with ACI, Cisco has developed Cobra, a robust Python library for the
APIC REST API. Objects in the Cobra library (SDK) are a one-to-one mapping to the objects within the Cisco ACI MIT.

If you are planning to dive deeper into Cobra, the best place to start is the official Cobra documentation that Cisco has
hosted on readthedocs. These documents review everything from the installation to showing examples, and they even
include a Frequently Asked Questions section to give a quick start to individuals looking to test Cobra.

To access Cisco APIC using Cobra, you must log in with valid user credentials. Cobra currently supports
username/password-based authentication in addition to certificate-based authentication. To make configuration changes,
you must have administrator privileges in the domain in which you will be working. A successful login returns a reference
to a directory object that you will use for further operations.

You can use the Cobra SDK to manipulate the MIT, generally through this workflow:

 Identify the object to be manipulated.

 Build a request to read, change attributes, or add or remove children.

 Commit the changes made to the object.


A common workflow for retrieving data using Cobra is as follows:

 Create a session object.

 Log in to Cisco APIC.

 Perform lookup by class or distinguished name.

In the same fashion, a common workflow for configuring the Cisco ACI Fabric is as follows:

 Create a session object.

 Log in to Cisco APIC.

 Create a configuration object by first looking for an object by distinguished name or class and then creating the
new object. As you can see in the figure, you also need to reference the parent object when building a
configuration object.

 Create a configuration request object.

 Add your configuration object to the request.

 Commit.
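The following is a minimal sketch of the retrieval workflow using the Cobra SDK. The APIC URL, credentials, and the fvTenant class used for the lookup are placeholder assumptions for this illustration.

# Minimal sketch of the Cobra retrieval workflow: create a session, log in,
# and perform a lookup by class. Host and credentials are placeholders.
import cobra.mit.access
import cobra.mit.session

# 1. Create a session object and log in to the APIC
session = cobra.mit.session.LoginSession('https://apic.example.com',
                                          'admin', 'password')
mo_dir = cobra.mit.access.MoDirectory(session)
mo_dir.login()

# 2. Perform a lookup by class (for example, all tenants)
for tenant in mo_dir.lookupByClass('fvTenant'):
    print(tenant.name, tenant.dn)

mo_dir.logout()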

Cisco SD-WAN

Cisco SD-WAN software provides a REST API, which is a programmatic interface for controlling, configuring, and monitoring
the SD-WAN devices in an overlay network. You access the REST API through the vManage web server.
The Cisco SD-WAN vManage GUI itself uses the same REST API that is exposed northbound. This means that you can always find an API call that achieves the same goal as a given sequence of GUI clicks simply by inspecting the HTTP requests that your browser makes. The same API calls can be performed from an outside client application.

The vManage API uses the JSON data model to represent the data associated with a resource. Request and response bodies always contain JSON-formatted strings.

The API documentation is provided as part of the vManage controller and is accessible at a URL of the form https://vmanage-ip-address:8443/apidocs, where the address depends on each individual setup.

When you use a program or script to transfer data from a vManage web server or perform operations on the server, you
must first establish an HTTPS session to the server. To do so, you send a call to log in to the server with the following
parameters:

1. URL to send the request to: Use https://{vmanage-ip-address}/

2. Request method: Specify a type of request

3. API call input: For example, for the Content-Type header, specify application/x-www-form-urlencoded

4. API call payload

A REST API URL consists of three parts:

 Server (hostname or IP address)

 Resource (location of the data or object of interest)

 Parameters (details of scope, filter, often optional)

All REST API calls to vManage contain the root /dataservice.


In the vManage REST API, resources are grouped into collections, which are present at the top level of the API. There are
multiple categories:

 Device actions: Manage device actions like reboot, upgrade, and lxcinstall

1. Example URI: /device/action/

 Device inventory: Retrieve device inventory information, including serial numbers

1. Example URI: /system/device/

 Device configuration: Create feature and device configuration templates, create and configure vManage clusters

1. Example URI: /template/

 Certificate managements: Manage certificates and security keys

1. Example URI: /certificate

 Monitoring: View status, statistics, and other information about operational services in the overlay network

1. Example URIs: /alarms, /statistics, /event

 Real-time monitoring: Retrieve, view, and manage real-time statistics and traffic information

1. Example URIs: /device/app-route/statistics, /device/bfd/status

 Troubleshooting: Troubleshoot devices, determine the effect of the policy, update software, and retrieve software
version information

1. Example URIs: /device/action/software, /device/tools/ping

 Cross-domain integration: APIs to integrate with Cisco Software-Defined Access (SD-Access) and ACI

1. Example URI: /partner

Here is an example of the API documentation:

The following figure shows the response part of the /device resource documentation.
The documentation examples include the model schema of the response class, response status codes, and error messages.
You also can run the requests directly from the documentation page.

A common REST principle is that APIs should be self-descriptive and self-documenting. The resource collections and resources in the Viptela REST API are self-documenting, describing the data you need to make a call and the response from each call. However, the collections and resources do assume that you are familiar with the Viptela overlay network concepts, software and hardware features, and capabilities.

Cisco SD-WAN Webhooks

The Cisco SD-WAN platform provides webhooks that allow third-party applications to receive data when a specified event
occurs.

Webhooks enable a push-model mechanism to send notifications in real time. Another way of getting information is to
frequently poll the data from vManage by using its REST API. However, by using webhooks, vManage can send an HTTP
POST request to the external system in real time once a certain event occurs.

Cisco SD-WAN REST API Example: List Devices

Part of network automation is to first gather information about all available devices under Cisco SD-WAN management.
Listing all the devices is therefore an important step in the whole process of using network automation, regardless of the
use case.

To list all the devices that are managed by vManage, request data from the /dataservice/device resource by making an HTTP GET request to the vManage server.
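A minimal sketch of this workflow with the Python requests library is shown below. The vManage address and credentials are placeholders, and the j_security_check login step reflects the commonly documented vManage authentication flow; newer vManage releases may additionally require an X-XSRF-TOKEN header.

# Minimal sketch: authenticate to vManage and list all managed devices.
# Host and credentials are placeholders.
import requests

VMANAGE = 'https://vmanage.example.com:8443'
session = requests.Session()

# 1. Log in; a successful login sets a session cookie
login_data = {'j_username': 'admin', 'j_password': 'password'}
session.post(VMANAGE + '/j_security_check', data=login_data, verify=False)

# 2. Retrieve the device inventory from the /dataservice/device resource
response = session.get(VMANAGE + '/dataservice/device', verify=False)
for device in response.json()['data']:
    print(device['host-name'], device['device-type'], device['system-ip'])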
Automating complex network configuration processes is also a great way to propagate errors at extremely high speed to all corners of your data center. Orchestration platforms can be great tools in the right hands, but small errors can do damage in profound ways, much as a chain saw can do far more harm than a hand saw with the slightest miscalculation. What is needed is to couple orchestration platforms with rapidly emerging network verification technology. Network verification can now be completely automated, so you are not introducing additional manual processes that slow down your orchestration, yet you can verify that everything is accurate and deployed correctly at light speed.

But what is network verification? You define the policy checks that you need to have in place, and the platform verifies, in minutes or less, whether the current network configurations deviate from any of the policies. The whole procedure is done by using APIs; in the case of SD-WAN in this example, the REST API.

Use case: Deployment verification


Cisco NSO

Cisco NSO exposes a northbound REST API, which can be used by developers to perform operations. It supports the
following methods:

 GET: Retrieve information from Cisco NSO

 POST: Add or modify the configuration on Cisco NSO

 PUT: Modify the configuration on Cisco NSO (replace)

 PATCH: Modify the configuration on Cisco NSO (merge)

 DELETE: Remove the configuration from Cisco NSO

The Cisco NSO REST API allows you to manage service and device configurations.
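To illustrate, the following minimal sketch retrieves the list of managed devices over the northbound interface. The NSO address, credentials, resource path, and Accept header follow the legacy NSO REST API conventions and are assumptions; newer releases expose the same data through RESTCONF.

# Minimal sketch: list the devices managed by Cisco NSO over its REST API.
# Host, port, credentials, and the resource path are placeholders that
# depend on the NSO version and configuration.
import requests

NSO = 'http://nso.example.com:8080'
headers = {'Accept': 'application/vnd.yang.data+json'}

response = requests.get(NSO + '/api/running/devices/device',
                        auth=('admin', 'admin'), headers=headers)
response.raise_for_status()
print(response.json())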

Although the REST API is easy to use, you sometimes need a more programmatic approach to access and modify data in the Cisco NSO configuration database. For this purpose, three more SDKs are available in Cisco NSO:

 Python SDK

 Java SDK

 Erlang SDK

These SDKs can be used to access data in Cisco NSO, modify the state in the configuration database, and implement the mapping logic between the service and device configuration. While configuration templates in Cisco NSO offer only static binding of variables from a service instance, with one of the SDKs you can also perform calculations or connect to external systems.

The NSO Python API supports the following Python versions:

 Python 2.7.5 or later

 Python 3.4 or later
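As an illustration of the Python SDK, the following minimal sketch reads the managed devices from the NSO configuration database. It assumes the script runs on the NSO host with the ncs Python package available; the user and context names are placeholders.

# Minimal sketch: read data from NSO with the Python API (maapi/maagic).
# Assumes the ncs package is available on the NSO host; user and context
# names are placeholders.
import ncs

with ncs.maapi.single_read_trans('admin', 'python') as trans:
    root = ncs.maagic.get_root(trans)
    for device in root.devices.device:
        print(device.name, device.address)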

Cisco NSO API Use Case Example

Among the main benefits of deploying Cisco NSO are faster service delivery and rapid deployment of configuration management systems. Cisco NSO also makes networks more scalable, because new devices can be added and configured with minimal effort. The same is true for device replacement; devices can be replaced quickly with little to no additional configuration.

Cisco NSO API Use Case example:


Cisco NSO provides rapid deployment of provisioning and configuration management systems (for example, networkwide
end-to-end configuration of VPN service).

The figure illustrates a traditional Layer 2 or Layer 3 Multiprotocol Label Switching (MPLS) VPN where Cisco NSO can be
deployed to manage provider edge routers and, optionally, also customer edge routers. The orchestrated solution typically
would involve integration with a front-end portal to simplify service management.

Use the Cisco Controller APIs


In this activity, you will use APIs to perform some useful tasks on network controllers that could be integrated in any
automated workflow. You will use the Cisco Meraki Dashboard API to list all the clients on a specific network. You will use
the Cisco NSO REST API to list the devices that NSO manages. A Postman collection has been preprovisioned in your
Student VM. You will examine the API documentation and expand the collection with proper requests to perform the
described actions.
Note:

The documentation structure in the figure may vary from the structure in the current version of the API documentation.

List Network Clients Seen by Cisco Meraki

In this procedure, you will retrieve a list of clients from the Meraki Dashboard API. As you saw in the documentation, to
retrieve information about clients, you need to define a network ID for which you would like to retrieve data. Networks are
associated with specific organizations. Each organization has its own API key, and only networks belonging to that
organization can be managed when authenticating with that API key. Organizations also have an organization ID. In the
preprovisioned Postman collection, authorization is already set up and the appropriate organization ID is configured in the
existing request. First, you will run this request to get a list of networks that are associated with this organization. Then,
you will create a new request to retrieve a JSON-formatted list of clients on a network.
Authorization is already configured with an API key added to a custom request header. The API key value is set in a
collection variable.
The Meraki API server responded with the status code 200 OK, which means that the request was successfully
authenticated and correctly formed.
The response body shows a JSON-formatted list of clients on the network.
The example demonstrates a sample request to fetch the running configuration of a service instance. An appropriate
Accept header is sent with the request, which instructs the NSO API to return JSON-formatted data.
The example provides a sample request to return a part of the running configuration for a device managed by NSO. Note
that an Accept request header was not set, so the API defaults to responding with XML content. Note that the shallow
parameter was sent with the request.

List Network Devices Known to Cisco NSO

In this procedure, you will create a request to fetch a list of devices managed by Cisco NSO. You will learn how to control
the depth of information the API returns and how to request different response formats. First, you will authenticate to the
NSO API.
Automating Cisco Webex Teams Operations 

Developers, operators, and system and network administrators all use some form of digital communication to convey information among themselves. Having a conversation is a real-time experience, regardless of the media used, and most messaging applications today offer text, audio, and video support out of the box. IM and real-time chat applications are not newly invented technologies. What is different about these applications now is how you use them.

The rise of API availability throughout the industry allows countless possibilities of integrating products and services into
the existing infrastructure. From network devices to web applications, online search services and storage providers, many
of them offer exposed and well-documented programmable interfaces that are essential for providing automated
operations of these services.

The key to utilizing a collaboration platform like Cisco Webex Teams for performing automated operational workflows is
using chatbots.

A chatbot is a software service that imitates human conversations for solving various tasks—for example, providing
customer support or educational assistance. Chatbots can also be integrated into existing operational workflows, so they
can, for example, provide administrators and operators with notifications and status messages directly in the messaging
workspace. Integration with the infrastructure can be further expanded so that the operator can instruct the bot to
perform some action or gather some data from the network, servers, or applications. In this way, the operator can execute
tasks and react to events without leaving the central collaborative workspace environment. Workflows can be further
automated with notifications being sent to the bot whenever some interesting event occurs. The bot can then react in real
time and perform some action—for example, reconfigure a resource or create a trouble ticket.

Chatbot service providers can automatically supply chatbots with access to some of the following information:

 Member activity

 Channel information

 Chatbot information

 Metadata

 Statistics

Benefits of using chatbots in operational workflows include:

 Centralized infrastructure management

 Conversational approach to operations

 Tighter collaboration

 Transparent workflow

A collaboration platform can make bot-based interactions possible by providing a northbound API and implementing event notifications. Cisco Webex Teams does this by providing a REST API and enabling webhooks.

Cisco Webex Teams REST API

To use the Cisco Webex Teams API, you must first generate an access token. A personal access token is retrieved from the
Cisco Webex Teams developers website at https://developer.webex.com/docs/api/getting-started. The personal token is
temporary, so if you require permanent access to the Webex Teams API, you should create a bot and use its token, which
does not expire. The access token has to be sent with all API calls and is set in the value of the authorization HTTP request
header. The token also serves as an identifier of the participant; for example, when you send a request to list rooms where
a person is present, you do not need to send the person ID. The API matches the value of the access token to the person
and responds with the appropriate data.
The Webex Teams API supports paginating responses, managing file attachments, and markdown formatting of messages.

The developer website also features full API documentation and references with examples and the capability to execute
API calls from the web page.

Webex Teams SDK

The webexteamssdk is a community-developed Python library for working with Webex Teams APIs. The library abstracts
the Webex Teams REST API and wraps all the API requests and returned JSON objects within native Python objects and
methods, and as such makes working with the API a native Python experience.

The Webex Teams SDK has these functions:

 Simplifies authentication

 Provides default arguments

 Supports automatic pagination and rate limiting

 Manages file attachments

 Provides comprehensive error reporting

The Webex Teams SDK project is maintained at https://github.com/CiscoDevNet/webexteamssdk.

WebexTeamsAPI Class

The WebexTeamsAPI class hierarchically organizes Webex Teams APIs, following the same structure and naming as
described in the Webex Teams API documentation (people, rooms, messages, and so on). Each subclass implements
several methods, like list and get, which you can use to perform desired tasks.
The following example is the most basic Python code that you need to perform an API call with the webexteamssdk library:
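A minimal version of that code might look like the following sketch (the access token is assumed to be available in the environment, as described below):

# Minimal sketch: call the Webex Teams API with the webexteamssdk library.
from webexteamssdk import WebexTeamsAPI

api = WebexTeamsAPI()   # reads WEBEX_TEAMS_ACCESS_TOKEN if no token is passed
me = api.people.me()    # GET request to the people API
print(me.id, me.displayName, me.emails)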

You import the WebexTeamsAPI class, create an instance of that class (a connection object), and call the me() method of
the people subclass. If the access token is not specified when initializing the connection object, the WebexTeamsAPI class
tries to read an environment variable WEBEX_TEAMS_ACCESS_TOKEN. Calling the me() method creates a GET request to
the API and returns an object, containing information about the person authorized by the access token, like the ID, display
name, and email address. The following example shows the output from the previous code.

Spaces (Rooms)

Spaces or rooms, as they are referred to in the API, are virtual meeting places where people and bots collaborate and post
messages. This API is used to manage the rooms themselves. You can create, delete, or update a room—for example,
change the room title.

There are two types of spaces:

 Direct: A private space used by two participants.

 Group: Space shared by a team. To create a team room, you need to specify a teamId parameter key and value in
the POST payload.

The following code is a request to list the rooms to which the authenticated user belongs, and the received response:
The following code is the same example but uses the webexteamssdk:
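Minimal sketches of both approaches are shown below. The base URL, access token handling, and printed fields are assumptions for this illustration.

# Minimal sketches: list the rooms the authenticated user belongs to,
# first with a raw REST call, then with the webexteamssdk library.
import os
import requests
from webexteamssdk import WebexTeamsAPI

token = os.environ['WEBEX_TEAMS_ACCESS_TOKEN']

# Raw REST call
response = requests.get('https://webexapis.com/v1/rooms',
                        headers={'Authorization': 'Bearer ' + token})
for room in response.json()['items']:
    print(room['id'], room['type'], room['title'])

# Same operation with the SDK
api = WebexTeamsAPI(access_token=token)
for room in api.rooms.list():
    print(room.id, room.type, room.title)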

You can limit the response to just direct or group spaces by specifying the type parameter.

Memberships

To list participants (members) of any room that you are in, or to invite someone to a room, you use the memberships API. Memberships can also be updated, to make a user a moderator, or deleted, so that a user leaves (or is kicked out of) a room.

The following code is a request to list all the members in a room, with a specified room ID:
The following code is the same example but uses the webexteamssdk:

You would invite a user to a space by making a POST request:
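Minimal sketches of these membership operations with the webexteamssdk library are shown below; the room ID and email address are placeholders.

# Minimal sketches: list the members of a room and invite a user to it.
# The room ID and email address are placeholders.
from webexteamssdk import WebexTeamsAPI

api = WebexTeamsAPI()                           # token from the environment
room_id = 'Y2lzY29zcGFyazovL3VzL1JPT00vLi4u'    # placeholder room ID

# List memberships of the room
for membership in api.memberships.list(roomId=room_id):
    print(membership.personEmail, membership.isModerator)

# Invite a user to the space (POST request to the memberships API)
api.memberships.create(roomId=room_id, personEmail='user@example.com')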

Messages

You can post, read, or delete messages in a room in which you are a member by using the messages API. Messages can be
also sent directly to a user by providing the toPersonId or toPersonEmail parameters.

The following code creates a new message in a room, specified by the room ID. You can also attach files or format the
message with markdown:
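A minimal sketch of such a request with the Python requests library is shown below; the access token and room ID are placeholders.

# Minimal sketch: post a markdown-formatted message to a room.
# The access token and room ID are placeholders.
import os
import requests

token = os.environ['WEBEX_TEAMS_ACCESS_TOKEN']
payload = {
    'roomId': 'Y2lzY29zcGFyazovL3VzL1JPT00vLi4u',   # placeholder room ID
    'markdown': 'Build **finished** successfully.',
}
response = requests.post('https://webexapis.com/v1/messages',
                         headers={'Authorization': 'Bearer ' + token},
                         json=payload)
print(response.json()['id'])   # message ID for later retrieval or deletion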
The response contains the ID of the message, which can be referenced for future handling, retrieving, or deleting.

The following code is the same example but uses the webexteamssdk:
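A minimal sketch (token read from the environment, room ID a placeholder):

# Minimal sketch: the same message posted with the webexteamssdk library.
from webexteamssdk import WebexTeamsAPI

api = WebexTeamsAPI()
message = api.messages.create(roomId='Y2lzY29zcGFyazovL3VzL1JPT00vLi4u',
                              markdown='Build **finished** successfully.')
print(message.id)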

DevNet Developer Resources 

Cisco DevNet is a developer program that provides tools to help you produce applications built on top of Cisco products.
DevNet is much more than a simple website; it is a fully integrated developer program consisting of a website, an
interactive developer community, coordinated developer tools, integrated discussion forums, and sandboxes.

Cisco DevNet is a developer program consisting of:

 DevNet Sandbox

 DevNet Learning Labs


 Cisco DevNet GitHub organization

 DevNet API Information

 DevNet Support forums, chat with Cisco Webex Teams, and case-based support

The DevNet developer program includes support for much more than networking. The program includes content on
automation that pertains to Cisco networking, collaboration, compatibility testing, IoT, cloud, security, and data center.
There is a plethora of content that is geared toward hands-on labs.

DevNet Sandbox

DevNet Sandbox provides sandbox environments:

 Reserve a lab or use an always-on environment

 https://developer.cisco.com

The Cisco DevNet team offers sandbox environments that enable you to get quick hands-on access to equipment for testing new platforms and APIs. Various sandbox types exist across many technologies. Certain sandboxes are always on, while others need to be reserved.

DevNet Learning Labs

DevNet Learning Labs offer you:

 Step-by-step directions on using newer technologies


 Labs grouped into modules

 Labs grouped by technology

 Challenges

 https://developer.cisco.com/learning

In addition to sandbox environments, Cisco DevNet offers Learning Labs. These labs provide step-by-step directions on how to use newer technologies.

CiscoDevNet GitHub Organization

Cisco DevNet also has a GitHub community. You can find code samples (including samples that are used in Learning Labs),
scripts, libraries, YANG models, and various open source projects hosted on the DevNet GitHub site. You should get
familiar with the current projects and be sure to follow this community to be aware of when new GitHub repositories are
created.
DevNet API Information

One of the most widely used Cisco DevNet resources is the DevNet API information page. Here, you can find all the APIs
that are exposed on Cisco devices.
APIs are grouped into multiple sections:

 IoT

 Cloud

 Networking

 Data Center

 Security

 Mobility

 Open Source

 Collaboration Services

DevNet Support
DevNet Support options offer multiple types of support for developers who are creating solutions using Cisco APIs:

 Knowledge Base: Free articles that cover a wide range of topics. It is a good place to check if your question is
already answered.

 Chat Room Support: This resource is always available. You can enter your email and ask the community a question
regarding APIs.

 Forum Support: Free resource to any DevNet member. You need to be logged in to post your questions.

 Case Support: Developers can open support tickets for a specific technology. These tickets can be purchased and
are also provided as a part of the Solution Partner program.

The DevNet Support forum can be found on https://developer.cisco.com/site/support.


Section 7: Summary Challenge
Section 8: Describing IP Networks

Introduction 
At the most basic level, a "network" is defined as a group of systems interconnected to share resources. You can find
examples of such systems and resources in a social network to share work experience or personal events or a computer
network to share file storage, printer access, or Internet connectivity.

A network connects computers, mobile phones, peripherals, and even IoT (Internet of Things) devices. Switches, routers,
and wireless access points (APs) are the essential networking basics. Through them, devices connected to your network
can communicate with one another and with other networks, such as the Internet, which is a global system of
interconnected computer networks.

Networks carry data in many types of environments, including homes, small businesses, and large enterprises. Large
enterprise networks may have several locations that need to communicate with each other. You can use a network in your
home office to communicate via the Internet to locate information, place orders for merchandise, and send messages to
friends. You can also have a small office that is set up with a network that connects other computers and printers in the
office. Similarly, you can work in a large enterprise with many computers, printers, storage devices, and servers running
applications that are used to communicate, store, and process information from many departments over large geographic
areas.

A network of computers and other components that are located relatively close together in a limited area is often referred
to as a LAN. Every LAN has specific components, including hardware, interconnections, and software. WAN communication
occurs between geographically separated areas and is typically provided by different telecommunication providers using
various technologies that use different media, such as fiber, copper, cable, asymmetric DSL (ADSL), or wireless links. In
enterprise internetworks, WANs connect the main office, branches, small office home office (SOHO), and mobile users.

As you explore the functions of networking, you will build on some important skills:

 Explain the functions, characteristics, and common components of a network.

 Read a network diagram, including comparing and contrasting the logical and physical topologies.

 Describe the impact of user applications on the network.

Basic Networking Concepts 

The term "network" is used in many different arenas. Examples of networks are social networks, phone networks,
television networks, neural networks, and, of course, computer networks. A network is a system of connected elements
that operate together. A computer network connects PCs, printers, servers, phones, cameras, and other types of devices.
A computer network connects devices, allowing them to exchange data, which facilitates information and resource
sharing. At home, computers allow family members to share files (such as photos) and print documents on a network
printer; televisions can play movies or other media stored on your computers; and Internet-enabled devices can connect
to web pages, applications, and services anywhere in the world.

In the business environment, you have a lot of business operations—marketing, sales, and IT. You need to develop apps
that allow information to be collected and processed. Computer systems that collect and process the information need to
communicate with each other in order to share resources. You also need an infrastructure that supports employees, who
need to access these resources and interact with each other. A network allows multiple devices such as laptops, mobile
devices, servers, and shared storage to exchange information. There are various components connected to each other that
are necessary for this communication to take place. This infrastructure allows a business to run, allows customers to
connect to the business (either through salespeople or through an online store) and allows a business to sell its products
or services. To run normally, a business and its applications rely on networking technology.

A computer network can exist on its own, independent of other computer networks. It can also connect to other networks.
The Internet is an example of many interconnected networks; it is global in span and scope. To operate
successfully, interconnected networks follow standardized rules to communicate. Each participating network accepts and
adheres to these rules.

The early Internet connected only a few mainframe computers with computer terminals. The mainframe computers
were large, and their computing power was considered enormous (although it was comparable to today's mobile phones).
Terminals were simple and inexpensive devices, which were used only to input data and display the results. Teletype is an
example of such a device. The range of devices that connect to the Internet has expanded in the last decade. The Internet
now connects not only laptops, smartphones, and tablets but also game consoles, television sets, home systems, medical
monitors, home appliances, thunder detectors, environment sensors, and many more devices. The earlier concept of
centralized computing resources is revived today in the form of computing clouds.

The following figure shows a mainframe computer (figure by Pargon, licensed under CC BY 2.0).

Computer network engineers design, develop, and maintain highly available network infrastructure to support the IT
activities of the business. Network engineers interact with users of the network and provide support or consultancy
services about design or network optimization, or both. Network engineers typically have more knowledge and experience
than network technicians, operators, and administrators. A network engineer should update their knowledge of
networking constantly to keep up with new trends and practices.
Users who want to connect their networks to the Internet acquire access through a service provider access network.
Service provider networks use a range of technologies, from dial-up and broadband telephony (such as ADSL) to cable,
mobile, radio, and fiber-optic networks. A service provider network can cover large geographical
areas. Service provider networks also maintain connections between themselves and other service providers to enable
global coverage.

Computer networks can be classified in several ways, which can be combined to find the most appropriate one for the
implementation. Local and remote networks are distinguished by the distance between the user and the computer
networks that the user is accessing. Examples of networks categorized by their purpose would be the data center network
and SAN. Focusing on the technology used, you can distinguish wireless or wired networks. Looking at the size of the
network in terms of the number of devices it has, there are small networks, usually with fewer than 10 devices, medium-to-
large networks consisting of tens to hundreds of devices, and very large global networks such as the Internet, which
connects billions of devices across the world.

One of the most common categorizations looks at the geographical scope of the network. Within it, there are LANs that
connect devices located relatively close together in a limited area. Contrasting LANs, there are WANs, which cover broad
geographic areas and are managed by service providers. An example of a LAN network is a university campus network that
can span several collocated buildings. An example of a WAN would be a telecommunication provider network that
interconnects multiple cities and states. This categorization also includes metropolitan-area networks (MANs), which span
a physical area larger than a LAN but smaller than a WAN—for instance, a city.

Medium-to-large enterprise networks can span multiple locations. Usually, they have a main office or enterprise campus,
which holds most of the corporate resources, and remote sites, such as branch offices or home offices of remote workers.
A home office usually has a small number of devices and is called a SOHO. SOHO networks mostly use the Internet to connect
to the main office. A main office network, which is a LAN in terms of its geographical span, may consist of several networks
that occupy many floors, or it may cover a campus that contains several buildings. Many corporate environments require
deployment of wireless networks on a large scale, and they use wireless LAN controllers (WLCs) for centralizing
management of wireless deployments.

Enterprise campuses also typically include a separate data center that is home to the computational power, storage, and
applications necessary to support an enterprise business. Enterprises also are connected to the Internet, and Internet
connectivity is protected by a firewall. Branch offices have their own LANs with their own resources, such as printers and
servers, and they may store corporate information, but their operations largely depend on the main office, hence the
network connection with it. They connect to the main office by a WAN or Internet using routers as gateways.

Cisco Enterprise Architecture Model

Networks support the activities of many businesses and organizations and are required to be secure, resilient, and allow
growth. The design of a network requires considerable technical knowledge. Network engineers commonly use validated
network architecture models to assist in the design and the implementation of the network. Examples of validated models
are the Cisco three-tier hierarchical network architecture model, spine-leaf model, and Cisco Enterprise Architecture
model. These models provide hierarchical structure to enterprise networks, which is used to design the network
architecture in a form of layers (for example, LAN access and LAN core), with each layer providing different functionalities.

Note

The words "Internet" and "web" are very often used interchangeably, but they do not share the same meaning. The
Internet is a global network that interconnects many networks and therefore provides a worldwide communication
infrastructure. The World Wide Web describes one way to provide and access information over the Internet using a web
browser. It is a service that relies on connections provided by the Internet for its function.

All the exchange of data within the Internet follows the same well-defined rules, called protocols, which are designed
specifically for Internet communication. These protocols specify, among other things, the usage of hyperlinks and Uniform
Resource Identifiers (URIs). The Internet is a base for many other data exchange services, such as email or file transfer. It is
a common global infrastructure composed of many computer networks connected together that follow communication
rules standardized for the Internet. The protocols and processes of the Internet are defined by a set of specifications called
RFCs.

Components of a Network

A network can be as simple as two PCs that are connected by a wire or as complex as several thousand devices that are
connected through different types of media. The elements that form a network can be roughly divided into three
categories: devices, media, and services. Devices are interconnected by media. Media provides the channel over which the
data travels from source to destination. Services are software and processes that support common networking
applications in use today.

Network Devices

Devices can be further divided into endpoints and intermediary devices:

 Endpoints: End devices, which are most common to people, fall into the category of endpoints. In the context of a
network, end devices are called end-user devices and include PCs, laptops, tablets, mobile phones, game consoles,
and television sets. Endpoints also are file servers, printers, sensors, cameras, manufacturing robots, smart home
components, and so on. At the beginning of computer networking, all end devices were physical hardware units.
Today, many end devices are virtualized, meaning that they do not exist as separate hardware units any more. In
virtualization, one physical device is used to emulate multiple end devices—for example, all the hardware
components that one end device would require. The emulated computer system operates as if it were a separate
physical unit and has its own operating system and other required software. In a way, it behaves like a tenant
living inside a host physical device, using its resources (processor power, memory, and network interface
capabilities) to perform its functions. Virtualization is commonly applied for servers to optimize resource
utilization, because server resources are often underutilized when they are implemented as separate physical
units.
 Intermediary devices: These devices interconnect end devices or interconnect networks. In doing so, they perform
different functions, which include regenerating and retransmitting signals, choosing the best paths between
networks, classifying and forwarding data according to priorities, filtering traffic to allow or deny it based on
security settings, and so on. As endpoints can be virtualized, so can intermediary devices or even entire networks.
The concept is the same as in the endpoint virtualization; the virtualized element uses a subset of resources
available at the physical host system. Intermediary devices that are commonly found in enterprise networks are:

1. Switches: These devices enable multiple endpoints such as PCs, file servers, printers, sensors, cameras,
and manufacturing robots to connect to the network. Switches are used to allow devices to communicate
on the same network. In general, a switch or group of interconnected switches attempts to forward
messages from the sender so that they are received only by the destination device. Usually, all the devices that
connect to a single switch or a group of interconnected switches belong to a common network and can
therefore communicate directly with each other. If an end device wants to communicate with a device
that is on a different network, then it requires the "services" of a device that is known as a router, which
connects different networks together.

2. Routers: These devices connect networks and intelligently choose the best paths between networks. Their
main function is to route traffic from one network to another. For example, you need a router to connect
your office network to the Internet. An analogy for the basic function of switches and routers is to imagine
a network as a neighborhood. A switch is the street that connects the houses, and routers are the
crossroads of those streets. The crossroads contain helpful information, such as road signs, to help you in
finding a destination address. Sometimes, you might need the destination after just one crossroad, but
other times, you might need to cross several. The same is true in networking. Data sometimes "stops" at
several routers before it is delivered to the final recipient. Certain switches combine functionalities of
routers and switches; they are called Layer 3 switches.

3. APs: These devices allow wireless devices to connect to a wired network. An AP usually connects to a
router as a standalone device, but it also can be an integral component of the router itself.

4. WLCs: These devices are used by network administrators or network operations centers to facilitate
management of many APs. The WLC automatically manages the configuration of wireless APs.

5. Next-generation firewalls (NGFWs): Firewalls are network security systems that monitor and control the
incoming and outgoing network traffic based on predetermined security rules. A firewall typically
establishes a barrier between a trusted, secure internal network, and another outside network, such as
the Internet, that is assumed not to be secure or trusted. The term "next-generation firewall" indicates a
firewall that provides additional features to accommodate the newest security requirements. An example
of such a feature is the ability to recognize user applications—for instance, a game running inside an
application, such as a browser, that is connected to Facebook.

6. Intrusion prevention system (IPS): An IPS is a system that performs deep analysis of network traffic,
searching for signs that behavior is suspicious or malicious. If the IPS detects such behavior, it can take
protective action immediately. An IPS and a firewall can work in conjunction to defend a network.

7. Load balancers: Load balancing is a computer networking methodology to distribute the workload across
multiple computers or a computer cluster, network links, CPUs, disk drives, or other resources to achieve
optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using
multiple components with load balancing, instead of a single component, may increase reliability through
redundancy. The load-balancing service is usually provided by dedicated software or hardware. Server load
balancing is the process of deciding to which server a load-balancing device should send a client request
for service. The job of the load balancer is to choose the server that can successfully fulfill the client
request and do so in the shortest amount of time without overloading either the server or the server farm
as a whole. Depending on the load-balancing algorithm or predictor that you configure, a load balancer
performs a series of checks and calculations to determine the server that can best service each client
request. A load balancer bases server choice on several factors, including the server with the fewest
connections with respect to load, source or destination address, cookies, or header data. A minimal sketch of
the least-connections approach is shown after this list.

8. Management services: A modern management service offers centralized management that facilitates
designing, provisioning, and applying policies across a network. It includes features for discovery and
management of network inventory, management of software images, device configuration automation,
network diagnostics, and policy configuration. It provides end-to-end network visibility and uses network
insights to optimize the network. An example of such centralized management service is Cisco Digital
Network Architecture (DNA) Center.
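
As referenced under load balancers above, the following is a minimal Python sketch of a least-connections predictor. The
server names and connection counts are hypothetical; real load balancers also weigh health checks, response times, and
configured weights.

# Minimal sketch of a "least connections" server-selection predictor.
# Server names and connection counts are hypothetical placeholders.
active_connections = {
    "web-01": 42,
    "web-02": 17,
    "web-03": 30,
}

def pick_server(connections: dict) -> str:
    """Return the server currently handling the fewest connections."""
    return min(connections, key=connections.get)

server = pick_server(active_connections)
active_connections[server] += 1   # the chosen server takes the new request
print(f"Forwarding request to {server}")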

In user homes, you can often find one device that provides connectivity for wired and wireless devices and access to the
Internet. It has characteristics of a switch in that it provides physical ports to plug in local devices, a router in that it enables
users to access other networks and the Internet, and a wireless LAN (WLAN) AP in that it allows wireless devices to
connect to it. It is actually all three of these devices in a single package. This device is often called a wireless router.

Another example of a network device is a file server, which is an end device. A file server runs software that implements
protocols that are standardized to support file transfer from one device to another over a network. This service can be
implemented by either FTP or TFTP. Having an FTP or TFTP  server in a network allows uploads and downloads of files over
the network. An FTP or TFTP server is often used to store backup copies of files that are important to network operation,
such as operating system images and configuration files. Having those files in one place makes file management and
maintenance easier.
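
As a sketch of this use case, the standard library's ftplib can retrieve a configuration backup from an FTP server. The host
name, credentials, and file names below are placeholders; a TFTP transfer would require a third-party library instead.

# Hypothetical example: downloading a backup copy of a device
# configuration from an FTP server. Host, credentials, and file
# names are placeholders, not real values.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:            # placeholder server
    ftp.login(user="backup", passwd="secret")  # placeholder credentials
    with open("router1-config.bak", "wb") as local_file:
        ftp.retrbinary("RETR router1-config.txt", local_file.write)

print("Configuration backup downloaded")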

Media

Media are the physical elements that connect network devices. Media carry electromagnetic signals that represent data.
Depending on the medium, electromagnetic signals can be guided, like in wires and fiber optic cables, or can be
propagated, like in wireless transmissions such as Wi-Fi, mobile, and satellite. Different media have different
characteristics, and the selection of the most appropriate medium would depend on the circumstances, such as the
environment in which the media is used, distances that need to be covered, availability of financial resources, and so on.
For instance, for a filming crew working in a desert, a satellite connection (air medium) might be the only available option.

Connecting wired media to network devices is greatly eased by the use of connectors. A connector is a plug that is
attached to each end of the cable. The most common type of connector on a LAN is a Registered Jack-45 (RJ-45), which
looks like an analog phone connector.

To connect to the media that links a device to a network, devices use network interface cards (NICs). The
media "plugs" directly into the NIC. NICs translate the data that is created by the device into a format that can be
transmitted over the media. NICs used on LANs are also called LAN adapters. End devices used in LANs usually come with
several types of NICs installed, such as wireless NICs and Ethernet NICs. NICs on a LAN are uniquely identified by a MAC
address. The MAC address is hardcoded or "burned in" by the NIC manufacturer. NICs that are used to interface with
WANs are called WAN interface cards (WICs), and they use serial links to connect to a WAN.

Network Services
Services in a network comprise software and processes that implement common network applications, such as email and
web, and also include the less obvious processes implemented across the network, all of which generate data and
determine how data is moved through the network.

Companies typically centralize business-critical data and applications into central locations called data centers. These data
centers can include routers, switches, firewalls, storage systems, servers, and application delivery controllers. Similar to
data center centralization, computing resources can also be centralized off premises in the form of a cloud. Clouds can be
private, public, or hybrid and aggregate the computing, storage, network, and application resources in central locations.
Cloud computing resources are configurable and shared among many end users. The resources are transparently available,
regardless of the user point of entry (a personal computer at home, an office computer at work, a smartphone or tablet, or
a computer on a school campus). Data stored by the user is available whenever the user is connected to the cloud.

Characteristics of a Network

When you purchase a mobile phone or a PC, the specifications list tells you the important characteristics of the device, just
as specific characteristics of a network help describe its performance and structure. When you understand what each
characteristic of a network means, you can better understand how the network is designed, how it performs, and which
aspects you may need to adjust to meet user expectations.

You can describe the qualities and features of a network by considering these characteristics:

 Topology: A network topology is the arrangement of its elements. Topologies give insight into physical
connections and data flows among devices. In a carefully designed network, data flows are optimized, and the
network performs as desired.

 Bit rate or bandwidth: Bit rate is a measure of the data rate in bits per second of a given link in the network. The
unit of bit rate is bits per second (bps). This measure is often referred to as bandwidth or speed in device
configurations. However, it is not about how fast 1 bit is transmitted over a link—which is determined by the
physical properties of the medium that propagates the signal—it is about the number of bits transmitted in a
second. Link bit rates commonly encountered today are 1 and 10 gigabits per second (1 or 10 billion bits per
second). Links that are 100 Gbps are not uncommon either.

 Availability: Availability indicates how much time a network is accessible and operational. Availability is expressed
in terms of the percentage of time that the network is operational. This percentage is calculated as a ratio of the
time in minutes that the network is actually available and the total number of minutes over an agreed period,
multiplied by 100. In other words, availability is the ratio of uptime and total time, expressed in percentage. To
ensure high availability, networks should be designed to limit the impact of failures and to allow quick recovery
when a failure does occur. High-availability design usually incorporates redundancy. Redundant design includes
extra elements, which serve as backups to the primary elements and take over the functionality if the primary
element fails. Examples include redundant links, components, and devices.

 Reliability: Reliability indicates how well the network operates. It considers the ability of a network to operate
without failures and with the intended performance for a specified time period. In other words, it tells you how
much you can count on the network to operate as you expect it to. For a network to be reliable, the reliability of all
its components should be considered. Highly reliable networks are highly available, but a highly available network
might not be highly reliable; its components might operate, but at lower performance levels. A common measure
of reliability is the mean time between failures (MTBF), which is calculated as the ratio between the total time in
service and the number of failures, where not meeting the required performance level is considered a failure.
Choosing highly reliable redundant components in the network design increases both availability and reliability.

For instance, consider a networking device that reboots every hour. The reboot takes 5 minutes, after which the device
works as expected. The figure shows the calculations of availability and reliability.

The availability percentage for a period of one day can be calculated as follows:
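
The arithmetic can be sketched in a few lines of Python, assuming one 5-minute reboot (counted as one failure) per hour
over a 24-hour day, as in the example above:

# Worked calculation for the example above: a device that reboots
# every hour and is unavailable for 5 minutes during each reboot.
total_minutes = 24 * 60            # one day = 1440 minutes
failures_per_day = 24              # one reboot (failure) per hour
downtime = failures_per_day * 5    # 5 minutes of downtime per reboot
uptime = total_minutes - downtime  # 1320 minutes of uptime

availability = uptime / total_minutes * 100
mtbf = uptime / failures_per_day   # mean time between failures

print(f"Availability: {availability:.2f}%")   # about 91.67%
print(f"MTBF: {mtbf:.0f} minutes")            # 55 minutes in service between failures
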
 Scalability: Scalability indicates how easily the network can accommodate more users and data transmission
requirements without affecting current network performance. If you design and optimize a network only for the
current requirements, it can be very expensive and difficult to meet new needs when the network grows.

 Security: Security tells you how well the network is defended from potential threats. Both network infrastructure
and the information that is transmitted over the network should be secured. The subject of security is important,
and defense techniques and practices are constantly evolving. You should consider security whenever you take
actions that affect the network.

 Quality of service (QoS): QoS includes tools, mechanisms, and architectures that allow you to control how and
when network resources are used by applications. QoS is especially important for prioritizing traffic when the
network is congested.

 Cost: Cost indicates the general expense for the initial purchase of the network components and any costs
associated with the installation and ongoing maintenance of these components.

 Virtualization: Traditionally, network services and functions have only been provided via hardware. Network
virtualization creates a software solution that emulates network services and functions. Virtualization solves many of
the challenges in networks today, helping organizations automate and provision the network from a central
management point.

These characteristics and attributes provide a means to compare various networking solutions.

Interpreting a Network Diagram

Network diagrams are visual aids in understanding how a network is designed and how it operates. In essence, they are
maps of the network. They illustrate physical and logical devices and their interconnections. Depending on the amount of
information you wish to present, you can have multiple diagrams for a network. The most common diagrams are physical and
logical diagrams. Other diagrams used in networking are sequence diagrams, which illustrate the chronological exchange
of messages between two or more devices.

Both physical and logical diagrams use icons to represent devices and media. Usually, there is additional information about
devices, such as device names and models.

Physical diagrams focus on how physical interconnections are laid out and include device interface labels (to indicate the
physical ports to which media is connected) and location identifiers (to indicate where devices can be found physically).
Logical network diagrams also include encircling symbols (ovals, circles, and rectangles), which indicate how devices or
cables are grouped. Logical diagrams further include device and network logical identifiers, such as addresses. They
also indicate which networking processes are configured, such as routing protocols, and provide their basic parameters.

In the example, you can see interface labels S0/0/0, Fa0/5, and Gi0/1. The label is composed of letters followed by
numbers. Letters indicate the type of an interface. In the example, "S" stands for Serial, "Fa" stands for Fast Ethernet, and
"Gi" for Gigabit Ethernet.

Devices can have multiple interfaces of the same type. The exact position of the interface is indicated by the numbers that
follow, which are subject to conventions. For instance, the label S0/0/0 indicates serial port 0 (the last zero in the label) in
the interface card slot 0 (the second zero) in the module slot 0 (the first zero).
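
As an illustration, a short script can split such labels into the interface type and its position numbers. The abbreviation
mapping and labels below are taken from the example above; real platforms use additional interface types and numbering
schemes.

# Illustrative parser for interface labels such as S0/0/0, Fa0/5, Gi0/1.
import re

TYPES = {"S": "Serial", "Fa": "FastEthernet", "Gi": "GigabitEthernet"}

def parse_interface(label: str):
    match = re.fullmatch(r"([A-Za-z]+)([\d/]+)", label)
    if not match:
        raise ValueError(f"Unrecognized interface label: {label}")
    abbrev, numbers = match.groups()
    return TYPES.get(abbrev, abbrev), [int(n) for n in numbers.split("/")]

print(parse_interface("S0/0/0"))   # ('Serial', [0, 0, 0]) -> module 0, slot 0, port 0
print(parse_interface("Gi0/1"))    # ('GigabitEthernet', [0, 1])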

Note

The name "Fast Ethernet" indicates an Ethernet link with a speed of 100 Mbps.

The diagram also includes the IP version 4 (IPv4) address of the entire network given by 192.168.1.0/24. This number
format indicates not only the network address, which is 192.168.1.0, but also the network prefix, a representation of its
subnet mask, which is /24. IPv4 addresses of individual devices are shown as ".1" and ".2." These numbers are only parts of
the complete address, which is constructed by combining the address of the entire network with the number shown. The
resulting address of the device in the diagram would be 192.168.1.1.
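
You can verify this interpretation with Python's standard ipaddress module; the addresses below are the ones from the
diagram.

# Combining the /24 network from the diagram with the ".1" host portion.
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
print(network.network_address)   # 192.168.1.0
print(network.netmask)           # 255.255.255.0 (the /24 prefix)
print(network.num_addresses)     # 256 addresses in the network

device = ipaddress.ip_address("192.168.1.1")   # the ".1" device in the diagram
print(device in network)         # True: the host belongs to 192.168.1.0/24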

Impact of User Applications on the Network

The data traffic that is flowing in a network can be generated by end users or can be control traffic. Users generate traffic
by using applications. Control traffic can be generated by intermediary devices or by activities related to operation,
administration, and management of the network. Today, users utilize many applications. The traffic created by these
applications differs in its characteristics. Usage of applications can affect network performance and, in the same way,
network performance can affect applications. Usage translates to the user perception of the quality of the provided service
—in other words, a user experience that is good or bad. Recall that QoS is implemented to prioritize network traffic and
maximize the user experience.

User applications can be classified to better describe their traffic characteristics and performance requirements. It is
important to know which traffic is flowing in your network and describe the traffic in technical terms. An example of traffic
types found in networks today is given in the figure. This knowledge is used to optimize network design.

To classify applications, their traffic and performance requirements are described in terms of these characteristics:

 Interactivity: Applications can be interactive or noninteractive. Interactivity presumes that for a given request, a
response is expected for the normal functioning of the application. For interactive applications, it is important to
evaluate how sensitive they are to delays; some might tolerate larger delays up to practical limits, but some might
not.

 Real-time responsiveness: Real-time applications expect timely serving of data. They are not necessarily
interactive. An example of a real-time application is live football match video streaming (live streaming) or video
conferencing. Real-time applications are sensitive to delay. Delay is sometimes used interchangeably with the term
latency. Latency refers to the total amount of time from the source sending data to the destination receiving it.
Latency accounts for propagation delay of signals through media, the time required for data processing on devices
it crosses along the path, and so on. Because of changing network conditions, latency might vary during data
exchange; some data might arrive with less latency than other data. The variation in latency is called jitter.

 Amount of data generated: There are applications that produce low quantity of data, such as voice applications.
These applications do not require much bandwidth. Usually, they are referred to as bandwidth benign
applications. On the other hand, video streaming applications produce a significant amount of traffic. This kind of
application is also termed bandwidth greedy.

 Burstiness: Applications that always generate a consistent amount of data are referred to as smooth or nonbursty
applications. On the other hand, bursty applications create a small amount of data most of the time but can generate
much larger amounts for shorter periods. An example is web browsing. If you open a page in a browser that contains a lot of
text, a small amount of data is transferred. But if you start downloading a huge file, the amount of data will
increase during the download.

 Drop sensitivity: Packet loss is the loss of packets along the data path, which can severely degrade application
performance. Some real-time applications, such as VoD, are sensitive to packet loss, which users perceive as degraded
quality. You can say that such applications are drop-sensitive.

 Criticality to business: This aspect of an application is subjective in that it depends on an estimate of how valuable
and important the application is to a business. For instance, an enterprise that relies on video surveillance to
secure its premises might consider video traffic as a top priority, while another enterprise might consider it totally
irrelevant.
Host-To-Host Communications Overview

Communication can be described as successful sharing or exchanging of information. It involves a source and a destination
of information. Information is represented in some form of messages. In computer networks, the sources of messages are
end devices, also called endpoints or hosts. The messages are created at the source, transferred over the network, and
delivered at the destination. For communication to be successful, the message has to traverse one or more networks. A
network interconnects a large number of devices, produced by different hardware and software manufacturers, over many
different transmission media, each one having its specifics. All these parameters make the network very complex.

Communication models were created to organize internetworking complexity. Two commonly used models today are ISO
Open Systems Interconnection (OSI) and TCP/IP. Both provide a model of networking that describes internetworking
functions and a set of rules called protocols that set out requirements for internetworking functions.

Both models present a network in terms of layers. Layers group networking tasks by the functions that they perform in
implementing a network. Each layer has a particular role. In performing its functions, a layer deals with the layer above it
and the layer below it, which is called "vertical" communication. A layer at the source creates data that is intended for the
same layer on the destination device. This communication of two corresponding layers is also termed "horizontal."

The second aspect of communication models is protocols. In the same way that communication functions are grouped in
layers, so are the protocols. People usually talk about the protocols of certain layers, protocol architectures, or protocol
suites. In fact, TCP/IP is a protocol suite.

A networking protocol is a set of rules that describe one type of communication. All devices participating in
internetworking agree on these rules, and it is this agreement that makes communication successful. Protocols define
rules used to implement communication functions.

Note

As defined by the ISO/International Electrotechnical Commission (IEC) 7498-1:1994 ISO standard, the word "Open" in the
OSI acronym indicates systems that are open for the exchange of information using applicable standards. Open does not
imply any particular systems implementation, technology, or means of interconnection, but it refers to the mutual
recognition and support of the applicable standards.

While both ISO OSI and TCP/IP models define protocols, the protocols that are included in TCP/IP are widely implemented
in networking today. Nonetheless, as a general model, ISO OSI aims at providing guidance for any type of computer
system, and it is used in comparing and contrasting different systems. Therefore, ISO OSI is called the reference model.

Standards-based, layered models provide several benefits:

 Make complexity manageable by breaking communication tasks into smaller, simpler functional groups.

 Define and specify communication tasks to provide the same basis for everyone to develop their own solutions.
 Facilitate modular engineering, allowing different types of network hardware and software to communicate with
one another.

 Prevent changes in one layer from affecting the other layers.

 Accelerate evolution, providing for effective updates and improvements to individual components without
affecting other components or having to rewrite the entire protocol.

 Simplify teaching and learning.

Note

Knowledge of layers and the networking functions that they describe assists in troubleshooting network issues, making it
possible to narrow the problem to a layer or a set of layers.

Computer networks were initially concerned only with transfer of data, and the term "data" referred to information in an
electronic form that could be stored and processed by a computer. Additionally, different data transfer protocols required
completely different network topologies, equipment, and interconnections. IP, AppleTalk, Token Ring, and FDDI are
examples of data transfer communications protocols that required different hardware, topologies, and equipment to
properly operate.

In addition to data transfer, other communication networks existed in parallel. For example, telephone networks were
built using separate equipment and implemented a different set of protocols and standards. Over the years, computer
networking evolved such that IP became a common data communications standard. The technology has been extended to
also include other types of communication, such as voice conversations and video. Because a single set of computer
networking protocols and standards is now used for voice, video, and "pure" computer data, this type of networking is
termed "converged networking."

The need to interconnect devices is not exclusive to computer networks. Industrial manufacturing companies used
standards and protocols specifically designed to provide automation and control over the production process. The
management and monitoring of the manufacturing plant were traditionally the task of the operational technology
departments. IT departments, which manage business applications, and operational technology departments functioned
independently. Today, thanks to the industrial IoT, manufacturers are collecting more data from the plant floor than ever
before. However, that data is only as valuable as the decisions it can support. Operational technology and IT departments
collaborate to make the data meaningful and accessible for use across the organization.
The result is another example of a converged network, called Factory Network, which connects factory automation and
control systems with IT systems using standards-based networking. The Factory Network provides real-time access to
mission-critical data at the plant level while sharing knowledge throughout the enterprise, helping operations leaders
make decisions that can contribute to safety and operational effectiveness.

ISO OSI Reference Model

To address the issues with network interoperability, the ISO researched different communication systems. As a result of
this research, the ISO created the ISO OSI model to serve as a framework on which a suite of protocols can be built. The
vision was that this set of protocols would be used to develop an international network that would not depend on
proprietary systems. In the computer industry, "proprietary" means that one company or a small group of companies uses
their own interpretation of tasks and processes to implement networking. Usually, the interpretation is not shared with
others, so their solutions are not compatible, hence they do not communicate. Meanwhile, the TCP/IP suite was used in
the first network implementations. It quickly became a de facto standard, meaning that it was the protocol suite actually
implemented in practice. Consequently, it was chosen over the OSI protocol suite and remains the standard in network
implementations today.

Note

ISO is an independent, nongovernmental organization. It is the largest developer of voluntary international standards in
the world. Those standards help businesses to increase productivity while minimizing errors and waste.

The OSI reference model describes how data is transferred over a network. The model addresses hardware, software, and
transmission.

The OSI model provides an extensive list of functions and services that can occur at each layer. It also describes the
interaction of each layer with the layers directly above and below it. More importantly, the OSI model facilitates an
understanding of how information travels throughout the network. It provides vendors with a set of standards that ensure
compatibility and interoperability between the various types of network technologies that companies produce around the
world. The OSI model also is used for computer network design, operation specifications, and troubleshooting.

Roughly, the model layers can be grouped into upper and lower layers. Layers 5 to 7, or upper layers, are concerned with
user interaction and the information that is communicated, its presentation, and how the communication proceeds. Layers
1 to 4, the lower layers, are concerned with how this content is transferred over the network.
The OSI reference model separates network tasks into seven layers, which are named and numbered. Here are the OSI
model layers:

 Layer 1: The physical layer defines electrical, mechanical, procedural, and functional specifications for activating,
maintaining, and deactivating the physical link between devices. This layer deals with electromagnetic
representation of bits of data and their transmission. Physical layer specifications define line encoding, voltage
levels, timing of voltage changes, physical data rates, maximum transmission distances, physical connectors, and
other attributes. This layer is the only layer implemented solely in hardware.

 Layer 2: The data link layer defines how data is formatted for transmission and how access to physical media is
controlled. This layer typically includes error detection and correction to ensure reliable data delivery. The data
link layer involves NIC-to-NIC communication within the same network. This layer uses a physical address to
identify hosts on the local network.

 Layer 3: The network layer provides connectivity and path selection beyond the local segment, all the way from
the source to the final destination. The network layer uses logical addressing to manage connectivity. In
networking, the logical address is used to identify the sender and the recipient. The postal system is another
common system that uses addressing to identify the sender and the recipient. Postal addresses follow the format
that includes name, street name and number, city, state, and country. Network logical addresses have a different
format than postal addresses; they are determined by the network layer rules. Logical addressing ensures that a
host has a unique address or that it can be uniquely identified in terms of network communication.

 Layer 4: The transport layer defines segmenting and reassembling of data belonging to multiple individual
communications, defines the flow control, and defines the mechanisms for reliable transport, if required. The
transport layer serves the upper layers, which in turn interface with many user applications. To distinguish
between these application processes, the transport layer uses its own addressing. This addressing is valid locally,
within one host, unlike addressing at the network layer. The transport services can be reliable or unreliable. The
selection of the appropriate service depends on application requirements. For instance, file transfer may be
reliable, to guarantee that the file arrives intact and whole. On the other hand, a missing pixel when watching a
video might go unnoticed. In networking, this is called an unreliable service.

 Layer 5: The session layer establishes, manages, and terminates sessions between two communicating hosts to
allow them to exchange data over a prolonged time period. The session layer is mainly concerned with issues that
application processes may encounter and not with lower layer connectivity issues. The sessions, also called dialogs,
can determine whether to handle data in both directions simultaneously or only handle data flow in one direction
at a time. It also takes care of checkpoints and recovery mechanisms. The session layer is explicitly implemented
with applications that use remote procedure calls.

 Layer 6: The presentation layer ensures that data sent by the application layer of one system is "readable" by the
application layer of another system. It achieves that by translating data into a standard format before transmission
and converting that format into a format known to the receiving application layer. It also provides special data
processing that must be done before transmission. It may compress and decompress data to improve the
throughput, and may encrypt and decrypt data to improve security. Compression/decompression and
encryption/decryption may also be done at lower layers.

 Layer 7: The application layer is the OSI layer that is closest to the user. It provides services to user applications
that want to use the network. Services include email, file transfer, and terminal emulation. An example of a user
application is the web browser. It does not reside at the application layer but is using protocols that operate at the
application layer. Operating systems also use the application layer when performing tasks triggered by actions that
typically do not involve communication over the network. Examples of such actions are opening a remotely
located file with a text editor or importing a remotely located file into a spreadsheet. The application layer differs
from other layers in that it does not provide services to any other OSI layer.

TCP/IP Suite

The TCP/IP model represents a protocol suite. It is similar to the ISO OSI model in that it uses layers to organize protocols
and explain which functions they perform. TCP/IP protocols are actively used in actual networks today.
The TCP/IP model defines and describes requirements for the implementation of host systems. These include standard
protocols that these systems should use. It does not specify how to implement the protocol functions, but rather provides
guidance for vendors, implementors, and users of what should be provided within the system.

The TCP/IP suite has four layers and includes many protocols, although its name stands for only two. The reason is that
layers represented by these two protocols carry out functions crucial to successful network communication.

Note

Although this course refers to the TCP/IP stack or suite, it is common in the industry to shorten this term to "IP stack."

The OSI model and the TCP/IP stack were developed by different organizations at approximately the same time. The
purpose was to organize and communicate the components that guide the transmission of data.

The speed at which the TCP/IP-based Internet was adopted and the rate at which it expanded caused the OSI protocol
suite development and acceptance to lag behind. Although few of the protocols that were developed using the OSI
specifications are in widespread use today, the seven-layer OSI model has made major contributions to the development
of other protocols and products for all types of new networks.

The layers of the TCP/IP stack correspond to the layers of the OSI model:
 The TCP/IP link layer corresponds to the OSI physical and data link layers and is concerned primarily with
interfacing with network hardware and accessing the transmission media. Like Layer 2 of the OSI model, the link
layer of the TCP/IP model is concerned with hardware addresses.

 The TCP/IP Internet layer aligns with the network layer of the OSI model and manages the addressing of and
routing between network devices.

 The TCP/IP transport layer, like the OSI transport layer, provides the means for multiple host applications to access
the network layer in a best-effort mode or through a reliable delivery mode.

 The TCP/IP application layer supports applications that communicate with the lower layers of the TCP/IP model
and corresponds to the separate application, presentation, and session layers of the OSI model.
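
The correspondence in the list above can be restated as a simple mapping; the layer names and numbers follow the two
models as described in this section.

# TCP/IP layers and the OSI layers they cover, as listed above.
TCPIP_TO_OSI = {
    "application": ["application (7)", "presentation (6)", "session (5)"],
    "transport":   ["transport (4)"],
    "internet":    ["network (3)"],
    "link":        ["data link (2)", "physical (1)"],
}

for tcpip_layer, osi_layers in TCPIP_TO_OSI.items():
    print(f"TCP/IP {tcpip_layer} layer -> OSI {', '.join(osi_layers)}")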

Because the functions of each OSI layer are clearly defined, the OSI layers are used even today when referring to devices
and protocols.

Take, for example, a Layer 2 switch, which is a LAN switch. The "Layer 2" in this case refers to the OSI Layer 2, making it
easy for people to know what is meant, as they associate the OSI Layer 2 with a clearly defined set of functions.

Similarly, it is often said that IP is a "network layer protocol" or a "Layer 3 protocol," because the TCP/IP Internet layer can
be matched to the OSI network layer.

As a next example, look at the TCP/IP transport layer, which corresponds to the OSI transport layer. The functions defined
at both layers are the same. However, different specific protocols are involved. Because of this, it is common to refer to
the TCP and UDP as "Layer 4 protocols," again using the OSI layer number.

Another example is the term "Layer 3 switch." A switch was traditionally thought of as a device that works on the link layer
level (Layer 2 of the OSI model). A Layer 3 switch is also capable of providing Internet layer (Layer 3 of the OSI model)
services, which were traditionally provided by routers.

It is very important to remember that the OSI model terminology and layer numbers are often used rather than the TCP/IP
model terminology and layer numbers when referring to devices and protocols.

Peer-To-Peer Communications

The term "peer" means the equal of a person or object. By analogy, peer-to-peer communication means communication
between equals. This concept is at the core of layered modeling of a communication process. Although in performing its
functions a layer deals with layers directly above and below it, the data it creates is intended for the corresponding layer at
the receiving host. The concept is also called horizontal communication.

Except for the physical layer, functions of all layers are typically implemented in software. Therefore, you hear about the
logical communication of layers. Software processes at different hosts are not communicating directly. Most probably, the
hosts are not even connected directly. Nevertheless, processes on one host manage to accomplish logical communication
with the corresponding processes on another host.

Note

The term "peer-to-peer" is often used in computing to indicate an application architecture in which application tasks and
workloads are equally distributed among peers. Contrary to peer-to-peer are client-server architectures, in which tasks
and workload are unequally divided.

Applications create data. The intended recipient of this data is the application at the destination host, which can be
distant. In order for application data to reach the recipient, it first needs to reach the directly connected physical network.
In the process, the data is said to pass down the local protocol stack. First, an application protocol takes user data and
processes it. When processing by the application protocol is done, it passes processed data down to the transport layer,
which does its processing. The logic continues down the rest of the protocol stack until data is ready for the physical
transmission. The data processing that happens as data traverses the protocol stack alters the initial data, which means
that original application data is not the same as the data represented in the electromagnetic signal transmitted.

At the receiving side, the process is reversed. The signals that arrive at the destination host are received from the media by
the link layer, which serves data to the Internet layer. From there, data is passed up the stack all the way to the receiving
application. Here again, the data received as the electromagnetic signal is different from the data that will be delivered to
the application. But the data that the application sees is the same data that the sending application created.

Passing data up and down the stack is also referred to as vertical communication. For the horizontal peer-to-peer
communication of layers to happen, it first requires vertical down-the-stack and up-the-stack communication.

As data passes down or up the stack, the unit of data changes—and so does its name. The generic term used for a data
unit, regardless of where it is found in the stack, is a protocol data unit (PDU). Its name depends on where it exists in the
protocol stack.

Although there is no universal naming convention for PDUs, they are typically named as follows:

 Data: The general term for the PDU that is used at the application layer

 Segment: A transport layer PDU

 Packet: An Internet layer PDU

 Frame: A link layer PDU

To examine the PDUs involved in peer-to-peer communication, you can use a packet analyzer, such as Wireshark, which is a free and
open source packet analyzer. Packet analyzers capture all the PDUs on a selected interface. They then examine their
content, interpret it, and display it in text or by using a graphical interface. Packet analyzers, sometimes also called sniffers,
are used for network troubleshooting, analysis, software and communications protocol development, and education.
The figure shows a screenshot of a Wireshark capture that was started on a LAN Ethernet interface. Wireshark organizes
captured information into three windows. The top window shows a table listing all captured frames. This listing can be
filtered to ease analysis. In the example, the filter is set to show only frames that carry Domain Name System (DNS)
protocol data. The second, middle window—the details pane—shows the details of one frame selected from the list.
Information is given first for the lower layers. For each layer, the information includes data added by the protocol at that
layer. In the third window (not shown in the figure), the bytes pane displays information selected in the details pane, as it
was captured, in bytes.

In the figure, you can also see how Wireshark organizes analyzed information. In the details pane, it displays data that it
finds in headers. It organizes header information by layers, starting with the link layer header and proceeding to the
application layer. If you look closely at the display of each header, you will see that information is organized into
meaningful groups. These groups are recognizable by the names, followed by a colon and a value—for example, "Source:
Cisco_29:ec:52 (04:fe:7f:29:ec:52)" or "Time to live: 127." These groupings correspond to how information is organized in
the header. Headers have fields, and the names Wireshark uses correspond to header field names. For instance, Source
and Destination in Wireshark correspond to Source Address and Destination Address fields of a header.
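
Similar captures can be scripted. The sketch below assumes the third-party Scapy library is installed and that the script runs
with packet capture privileges; it uses a capture filter for DNS traffic on UDP port 53 and then prints a few header fields,
much like the fields shown in the Wireshark details pane described above.

# Minimal programmatic capture of DNS frames (assumes Scapy is installed
# and that the script has capture privileges on the default interface).
from scapy.all import sniff, Ether, IP

packets = sniff(filter="udp port 53", count=3)   # capture three DNS frames

for pkt in packets:
    print(pkt.summary())                          # one-line view of the layered PDU
    if pkt.haslayer(Ether) and pkt.haslayer(IP):
        print("  Source MAC:", pkt[Ether].src)    # link layer header field
        print("  Source IP: ", pkt[IP].src)       # Internet layer header field
        print("  TTL:       ", pkt[IP].ttl)       # time to live field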

Encapsulation and De-Encapsulation

Information that is transmitted over a network must undergo a process of conversion at the sending and receiving ends of
the communication. The conversion process is known as encapsulation and de-encapsulation of data. Both processes
provide the means for implementation of the concept of horizontal communication, where the layer on the transmitting
side is communicating with the corresponding layer on the receiving side.

Have you ever opened a very large present and found a smaller box inside, and then an even smaller box inside that one,
until you got to the smallest box and, finally, to your present? The process of encapsulation operates similarly in the TCP/IP
model. The application layer receives the user data and adds its own information in the form of a header. It then sends it to
the transport layer. This process corresponds to putting a present (user data) into the first box (a header) and adding some
information on the box (application layer data). The transport layer also adds its own header before sending the package
to the Internet layer, placing the first box into the second box and writing some transport-related information on it. This
second box must be larger than the first one to fit the content. This process continues at each layer. The link layer adds a
trailer in addition to the header. The data is then sent across the physical media.
Note

Encapsulation increases the size of the PDU. The added information is required for the handling of the PDU and is called
"overhead" to distinguish it from user data.

The figure represents the encapsulation process. It shows how data passes through the layers down the stack. The data is
encapsulated as follows:

1. The user data is sent from a user application to the application layer, where the application layer protocol adds its
header. The PDU is now called data.

2. The transport layer adds the transport layer header to the data. This header includes its own information,
indicating which application layer protocol has sent the data. The new data unit is now called a segment. The
segment will be further treated by the Internet layer, which is the next to process it.

3. The Internet layer encapsulates the received segment and adds its own header to the data. The header and the
previous data become a packet. The Internet layer adds the information used to send the encapsulated data from
the source of the message across one or more networks to the final destination. The packet is then passed down
to the link layer.

4. The link layer adds its own header and also a trailer to form a frame. The trailer is usually a data-dependent
sequence, which is used for checking for transmission errors. An example of such a sequence is a frame check
sequence (FCS). The receiver will use it to detect errors. This layer also converts the frame to a physical signal and
sends it across the network, using physical media.
At the destination, each layer looks at the information in the header added by its counterpart layer at the source. Based on
this information, each layer performs its functions and removes the header before passing it up the stack. This process is
equivalent to unpacking a box. In networking, this process is called de-encapsulation.

The de-encapsulation process is like reading the address on a package to see if it is addressed to you and then, if you are
the recipient, opening the package and removing the contents of the package.

The following is an example of how the destination device de-encapsulates a sequence of bits:

1. The link layer reads the whole frame and looks at both the frame header and the trailer to check if the data has
any errors. Typically, if an error is detected, the frame is discarded, and other layers may ask for the data to be
retransmitted. If the data has no errors, the link layer reads and interprets the information in the frame header.
The frame header contains information relevant for further processing, such as the type of encapsulated protocol.
If the frame header information indicates that the frame should be passed to upper layers, the link layer strips the
frame header and trailer and then passes the remaining data up to the Internet layer to the appropriate protocol.

2. The Internet layer examines the Internet header in the packet that it received from the link layer. Based on the
information that it finds in the header, it decides either to process the packet at the same layer or to pass it up to
the transport layer. Before the Internet layer passes the message to the appropriate protocol on the transport
layer, it first removes the packet header.

3. The transport layer examines the segment header of the received segment. The information included in the
segment header indicates which application layer protocol should receive the data. The transport layer strips the
segment header from the segment and hands over data to the appropriate application layer protocol.

4. The application layer protocol strips the data header. It uses the information in the header to process the data
before passing it to the user application.
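To make the layering concrete, here is a minimal Python sketch that models encapsulation as prepending a placeholder header at each layer and de-encapsulation as stripping those headers in reverse order. The layer names and header strings are simplified illustrations, not real protocol formats.

LAYERS = ["application", "transport", "internet", "link"]

def encapsulate(user_data: str) -> str:
    pdu = user_data
    for layer in LAYERS:                      # going down the stack at the sender
        pdu = f"[{layer}-header]" + pdu       # each layer prepends its own header
    return pdu + "[link-trailer]"             # the link layer also appends a trailer

def de_encapsulate(frame: str) -> str:
    pdu = frame[: -len("[link-trailer]")]     # the link layer checks and strips the trailer
    for layer in reversed(LAYERS):            # going up the stack at the receiver
        header = f"[{layer}-header]"
        assert pdu.startswith(header)         # each layer reads and removes its peer's header
        pdu = pdu[len(header):]
    return pdu

frame = encapsulate("Hello")
print(frame)                 # [link-header][internet-header][transport-header][application-header]Hello[link-trailer]
print(de_encapsulate(frame)) # Hello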

Not all devices process PDUs at all layers. For instance, a switch might only process a PDU at the link layer, meaning that it
will "read" only frame information that is contained in the frame header and trailer. Based on the information found in the
frame header and trailer, the switch will forward the frame unchanged out a specific port, forward it out all ports except
for the incoming port, or discard the frame if it detects errors. Routers might look deeper into the PDU. A router de-
encapsulates the frame header and trailer and relies on the information contained in the packet header to make its forwarding decisions. If a router is filtering packets, it may also look even deeper into the information contained in the segment header before it decides what to do with the packet.

A host performs encapsulation as it sends data and performs de-encapsulation as it receives it, and it can perform both
functions simultaneously as part of the multiple communications that it maintains.

Note

In networking, you will often encounter usage of both the OSI and TCP/IP models, sometimes even interchangeably. You
should be familiar with both so that you can competently communicate with network engineers.
MAC Addresses and VLANs 

When users want to communicate in the enterprise environment, at home, or anywhere else in the world, they need a way to connect to the network over some sort of physical media. This is where the TCP/IP link layer becomes vital. A device can connect to the network using either wired or wireless connectivity. In most enterprise environments, and also at home, wireless connectivity is becoming more and more common for end users, but for the majority of other network devices, Ethernet remains the basis of enterprise communication.

The TCP/IP link layer contains Ethernet and other protocols that computers use to deliver data to the other computers and
devices that are attached to the network. Unlike higher-level protocols, link layer protocols must understand the details of the underlying physical network, such as the structure of its protocol data units and the physical addressing scheme that is used. Understanding the details and constraints of the physical network ensures that these protocols can format the
data correctly so that it can be transmitted across the network. You should keep in mind how important the physical
characteristics of the transmission medium are. They include different cables, connectors, use of pins, electrical currents,
encoding, light modulation, and the rules for how to activate and deactivate the use of the physical medium. These
characteristics are essential in building any kind of enterprise, data center, or home network environment.

LAN Overview

The LAN emerged to serve the needs of high-speed interconnections between computer systems. While there have been
many types of LAN transports, Ethernet became the favorite of businesses starting in the early 1990s. Since its
introduction, Ethernet bandwidth has scaled from the original shared-media 10 Mbps to 400 Gbps in Cisco Nexus 9000
Series Switches for the data center.

A LAN is a network of endpoints and other components that are located relatively close together in a limited area.

LANs can vary widely in size. A LAN may consist of only two computers in a home office or small business, or it may include
hundreds of computers in a large corporate office or multiple buildings. A LAN typically is a network completely within
your own premises (your organizational campus, or building, or office suite, or even your home). Organizations or
individuals typically build and own the whole infrastructure, all the way down to the physical cabling.

The defining characteristics of LANs, in contrast to WANs, include their typically higher data transfer rates, smaller
geographic area, and the lack of need for leased telecommunication lines.

To connect a switch to a LAN, you must use some sort of media. The most common LAN media is Ethernet. Ethernet is not
just a type of cable or protocol; it is a network standard published by the IEEE. You may hear various Ethernet terms, such
as Ethernet protocols, Ethernet cables, Ethernet ports, and Ethernet switches. IEEE 802.3 and Ethernet often are used
synonymously, although they have some differences. The term "Ethernet" is more common; IEEE 802.3 usually is used
when referring to a specific part of the standard such as a particular frame format. Ethernet basically is a set of guidelines
that enable various network components to work together. These guidelines specify cabling and signaling at the physical
and data link layers of the OSI model. For example, Ethernet standards recommend different types of cabling and specify
maximum segment lengths for each type.
A WAN is a data communications network that provides access to other networks over a large geographical area. WANs
use facilities that an ISP or carrier, such as a telephone or cable company, provides. The provider connects locations of an
organization to each other, to locations of other organizations, to external services, and to remote users. WANs carry
various traffic types such as voice, data, and video.

LAN Components

On the first LANs, devices with Ethernet connectivity were mostly limited to PCs, file servers, print servers, and legacy
devices such as hubs and bridges. Hubs and bridges were replaced by switches and are no longer used.

Today, a typical small office will include routers, switches, APs, servers, IP phones, mobile phones, PCs, and laptops.

Regardless of its size, a LAN requires these fundamental components for its operation:

 Hosts: Hosts include any device that can send or receive data on the LAN. Sometimes, hosts are also called
endpoints. Those two terms are used interchangeably throughout this course.

 Interconnections: Interconnections allow data to travel from one point to another in the network.
Interconnections include these components:

1. NICs: NICs translate the data that is produced by the device into a frame format that can be transmitted
over the LAN. NICs connect a device to the LAN over copper cable, fiber-optic cable, or wireless
communication.

2. Network media: In traditional LANs, data was transmitted mostly over copper and fiber-optic cables.
Modern LANs (even small home LANs) generally include a WLAN.

 Network devices: Network devices, like switches and routers, are responsible for data delivery between hosts.

1. Ethernet switches: Ethernet switches form the aggregation point for LANs. Ethernet switches operate at
Layer 2 of the OSI model and provide intelligent distribution of frames within the LAN.

2. Routers: Routers, sometimes called gateways, provide a means to connect LAN segments and provide
connectivity to the Internet. Routers operate at Layer 3 of the OSI model.

3. APs: APs provide wireless connectivity to LAN devices. APs operate at Layer 2 of the OSI model.

 Protocols: Protocols are rules that govern how data is transmitted between components of a network. Here are
some commonly used LAN protocols:

1. Ethernet protocols (IEEE 802.2 and IEEE 802.3)

2. IP

3. TCP

4. UDP
5. Address Resolution Protocol (ARP) for IPv4 and Neighbor Discovery Protocol (NDP) for IP version 6 (IPv6)

6. Common Internet File System (CIFS)

7. Dynamic Host Configuration Protocol (DHCP)

Functions of a LAN

LANs provide network users with communication and resource-sharing functions:

 Data and applications: When users are connected through a network, they can share files and even software
applications. This capability makes data more easily available and promotes more efficient collaboration on work
projects.

 Resources: The resources that can be shared include input devices, such as cameras, and output devices, such as
printers.

 Communication path to other networks: If a resource is not available locally, the LAN can provide connectivity via
a gateway to remote resources, such as the Internet.

Characteristics and Features of Switches

Switches have become a fundamental part of most networks. LAN switches have special characteristics that make them
effective in alleviating network congestion by increasing effective network bandwidth.

Switches provide the following important functions to eliminate network congestion:

 Dedicated communication between devices: This function increases frame throughput. Switches with one user
device per port have microsegmented the network. In this type of configuration, each user receives access to the
full bandwidth and does not have to contend with other users for available bandwidth. As a result, collisions do
not occur.

 Multiple simultaneous conversations: Multiple simultaneous conversations can occur by forwarding or switching several frames at the same time, increasing network capacity by the number of conversations that are supported.
For example, when frames are being forwarded between ports 1 and 2, another conversation can be happening
between ports 5 and 6. This multiplication is possible because of I/O buffers and fast internal transfer speeds
between ports. A switch that can support all possible combinations of frame transfers between all ports
simultaneously is said to offer wire-speed and nonblocking performance. Because of high performance, this class
of switch is relatively expensive.

 Full-duplex communication: After a connection is microsegmented, it has only two devices—the switch and the
host. It is now possible to configure the ports so that they can both receive and send data at the same time, which
is called full-duplex communication. For example, point-to-point 100-Mbps connections have 100 Mbps of
transmission capacity and 100 Mbps of receiving capacity, for an effective 200-Mbps capacity on a single
connection. The configuration between half duplex and full duplex is automatically negotiated at the initial
establishment of the link connection. "Half duplex" means that there is transmission of data in just one direction at
a time.

 Media-rate adaptation: A LAN switch that has ports with different media rates can adapt between rates—for
example, between 10, 100, and 1000 Mbps; 1 and 10 Gbps; 1, 10 and 25 Gbps; and 40 Gbps and 100 Gbps. This
adaptability allows bandwidth to be matched as needed. Without this ability, it is not possible to have different
media-rate ports that are operating at the same time.

Today, switches operating at the link layer divide a network into segments and reduce the number of devices that share
the total bandwidth. Each segment forms a separate collision domain.

However, switches have additional functionality and can also be a solution for the typical causes of network congestion.

The most common causes of network congestion are as follows:

 Increasingly powerful computer and network technologies: CPUs, buses, and peripherals are consistently
becoming faster and more powerful. Therefore, they can send more data at higher rates through the network.

 Increasing volume of network traffic: Network traffic is now more common as remote resources are used and are
even necessary to carry out basic work.

 High-bandwidth applications: Software applications are becoming richer in their functionality and are requiring
more bandwidth to process. Applications such as desktop publishing, engineering design, VoD, e-learning, and
streaming video all require considerable processing power and speed. This richer functionality puts a large burden
on networks to manage the transmission of their files and requires sharing of the applications among users.

Switches have these functions:

 Operate at the link layer of the TCP/IP suite

 Selectively forward individual frames

 Have many ports to segment a large LAN into many smaller segments

 Have high speed and support various port speeds

The main purpose of a switch is to forward frames as fast and as efficiently as possible. When a switch receives a frame on
an input interface, it buffers that frame until the switch performs the required processing and is ready to transmit the
frame out an exit interface. If switches did not have frame buffers, then frames would be dropped when congestion occurs or the link becomes saturated.

Ethernet switches selectively forward individual frames from the source port to the destination port.
Switches connect LAN segments, determine the segment to send the data, and reduce network traffic. Here are some
important characteristics of switches:

 High port density: Switches have high port densities; 24-, 32- and 48-port switches operate at speeds of 100 Mbps
and 1, 10, 25, 40, and 100 Gbps. Large enterprise switches may support hundreds of ports.

 Large frame buffers: The ability to store more received frames before having to start dropping them is useful,
particularly when there may be congested ports connected to servers or other heavily used parts of the network.

 Port speed: Depending on the switch, it may be possible to support a range of bandwidths. Ports of 100 Mbps and
1 and 10 Gbps are expected, but 40- or 100-Gbps ports allow even more flexibility.

 Fast internal switching: Having fast internal switching using dedicated hardware allows for higher bandwidths—
100 Mbps and 1, 10, 25, 40, and 100 Gbps.

 Low per-port cost: Switches provide high port density at a lower cost. For this reason, LAN switches can
accommodate network designs that feature fewer users per segment. Therefore, this feature increases the
average available bandwidth per user.

Switches use ASICs, which are fundamental to how an Ethernet switch works. An ASIC is a silicon microchip designed for a
specific task, such as switching or routing packets, rather than being used for general-purpose processing, such as a CPU. A
generic CPU is too slow for forwarding traffic in a switch. While a general-purpose CPU may be fast at running a random
application on a laptop or server, manipulating and forwarding network traffic is a different matter. Traffic handling
requires constant lookups against large memory tables.

Ethernet Frame Structure

In Ethernet terminology, the container into which data is placed for transmission is called a frame. The frame contains
header information, trailer information, and the actual data that is being transmitted. Bits that are transmitted over an
Ethernet LAN are organized into frames.

There are several types of Ethernet frames; the Ethernet II frame is the most common type and is shown in the figure. This
frame type is often used to send IP packets.

The table shows the fields of an Ethernet II frame, which are:

 Preamble: This field consists of 8 bytes of alternating ones and zeros that are used to synchronize the signals of
the communicating computers.

 Destination address: This field contains the MAC address of the NIC on the local network to which the frame is
being sent.

 Source address: This field contains the MAC address of the NIC of the sending computer.

 Type: This field contains a code that identifies the network layer protocol.

 Payload: This field contains the network layer data. If the data is shorter than the minimum length of 46 bytes, a
string of extraneous bits is used to pad the field. This field is also known as "data and padding."

 FCS: The FCS field includes a checking mechanism to ensure that the frame of data has been transmitted without
corruption. The checking mechanism that is being used is the cyclic redundancy check (CRC).
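As a rough illustration of the Ethernet II layout described above, the following Python sketch packs a destination address, a source address, a type value, and a padded payload into a byte string, and appends a CRC-32 value as a stand-in for the FCS. Real NICs generate the preamble and compute the FCS in hardware, so this is only a conceptual model; the function name and sample addresses are illustrative.

import binascii
import struct

def build_ethernet_ii(dst_mac: str, src_mac: str, eth_type: int, payload: bytes) -> bytes:
    """Conceptual Ethernet II frame: dst (6 B) + src (6 B) + type (2 B) + payload + FCS (4 B)."""
    dst = bytes.fromhex(dst_mac.replace(":", ""))
    src = bytes.fromhex(src_mac.replace(":", ""))
    if len(payload) < 46:                      # pad the payload up to the 46-byte minimum
        payload = payload + b"\x00" * (46 - len(payload))
    header = dst + src + struct.pack("!H", eth_type)
    fcs = struct.pack("!I", binascii.crc32(header + payload))   # CRC-32 as a stand-in for the FCS
    return header + payload + fcs

frame = build_ethernet_ii("00:00:0c:43:2e:08", "04:fe:7f:29:ec:52", 0x0800, b"IP packet goes here")
print(len(frame), "bytes;", frame[:14].hex())  # 14-byte header followed by padded payload and FCS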

MAC Addresses
A MAC address uniquely identifies a NIC interface of a device. It is used as a link layer address for technologies like
Ethernet, Wi-Fi, and Bluetooth for communication within a network segment. The MAC address provides the means by
which data is directed to the proper destination device. The MAC address of a device is an address that is hardcoded into
the NIC, so the MAC address is also referred to as the physical address, burned-in address, or Ethernet hardware address.
The MAC address is expressed as groups of hexadecimal digits that are organized in pairs or quads.

There are different display formats for MAC addresses, including:

 0000.0c43.2e08

 00:00:0c:43:2e:08

 00-00-0C-43-2E-08

Note

Hexadecimal (often referred to as simply hex) is a numbering system with a base of 16, meaning that it uses 16 unique
symbols as digits. The decimal system that you use on a daily basis has a base of 10, which means that it is composed of 10
unique symbols—0 through 9. The valid symbols in hexadecimal are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. In decimal,
A, B, C, D, E, and F equal 10, 11, 12, 13, 14, and 15, respectively. Each hexadecimal digit is 4 bits long because it requires 4
bits in binary to count to 15. Because a MAC address is composed of 12 hexadecimal digits, it is 48 bits long. The letters A,
B, C, D, E, and F can be either uppercase or lowercase.

A MAC address is composed of 12 hexadecimal digits, which means it is 48 bits long. There are two main components of a
MAC address. The first 24 bits constitute the Organizational Unique Identifier (OUI). The last 24 bits constitute the vendor-
assigned, end-station address.

 24-bit OUI: The OUI identifies the manufacturer of the NIC. The IEEE regulates the assignment of OUI numbers.
Within the OUI, there are 2 bits that have meaning only when used in the destination address field:

1. Broadcast or multicast bit: When the least significant bit in the first octet of the MAC address is 1, it
indicates to the receiving interface that the frame is destined for all (broadcast) or a group of (multicast)
end stations on the LAN segment. This bit is referred to as the Individual/Group (I/G) address bit.

2. Locally administered address bit: The second least significant bit of the first octet of the MAC address is referred to as the Universal/Local (U/L) administered address bit. Normally, the combination of the OUI and a
24-bit station address is universally unique. However, if the address is modified locally, this bit should be
set to 1.

 24-bit, vendor-assigned, end-station address: This portion uniquely identifies the Ethernet hardware.
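The OUI, the I/G bit, and the U/L bit can all be read directly from the first three bytes of a MAC address. The following Python sketch parses a MAC address written in any of the display formats shown earlier and reports these components; the helper name and sample values are illustrative.

def parse_mac(mac: str) -> dict:
    """Parse a MAC address given as 0000.0c43.2e08, 00:00:0c:43:2e:08, or 00-00-0C-43-2E-08."""
    hex_digits = "".join(c for c in mac if c not in ".:-")
    octets = bytes.fromhex(hex_digits)               # 6 bytes / 48 bits
    first = octets[0]
    return {
        "oui": octets[:3].hex(":"),                  # first 24 bits: Organizational Unique Identifier
        "station": octets[3:].hex(":"),              # last 24 bits: vendor-assigned end-station address
        "group_address": bool(first & 0x01),         # I/G bit: 1 = broadcast or multicast destination
        "locally_administered": bool(first & 0x02),  # U/L bit: 1 = locally administered address
    }

print(parse_mac("04:fe:7f:29:ec:52"))
print(parse_mac("ff-ff-ff-ff-ff-ff")["group_address"])   # True: the broadcast address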

Frame Switching

The switch builds and maintains a table, called the MAC address table, which matches the destination MAC address with
the port that is used to connect to a node. The MAC address table is stored in the content-addressable memory (CAM),
which enables very fast lookups. Therefore, you might see the MAC address table of a switch referred to as a CAM table.

For each incoming frame, the destination MAC address in the frame header is compared with the list of addresses in the
MAC address table. Switches then use MAC addresses as they decide whether to filter, forward, or flood frames. When the
destination MAC address of a received unicast frame resides on the same switch port as the source, the switch drops the
frame, which is a behavior known as filtering. Flooding means that the switch sends the incoming frame to all active ports
except the port on which it received the frame.
The switch creates and maintains the MAC address table by using the source MAC addresses of incoming frames and the
port number through which the frame entered the switch. In other words, a switch learns the network topology by
analyzing the source address of incoming frames from all attached networks.

The following procedure describes a specific example when PC A sends a frame to PC B, and the switch starts with an
empty MAC address table.
The switch performs learning and forwarding actions (including in situations that differ from the aforementioned
example), such as:

 Learning: When the switch receives the frame, it examines the source MAC address and incoming port number. It
performs one of the following actions, depending on whether the MAC address is present in the MAC address
table:

1. No: Adds the source MAC address and port number to the MAC address table and starts the default 300-second aging timer for this MAC address

2. Yes: Resets the default 300-second aging timer

Note

When the aging timer expires, the MAC address entry is removed from the MAC address table.

 Unicast frames forwarding: The switch examines the destination MAC address, and if it is unicast (meaning to a
single destination), the switch performs one of the following actions, depending on whether the MAC address is
present in the MAC address table:

1. No: Forwards the frame out all ports except the incoming port (referred to as unknown unicast)

2. Yes: Forwards the frame out of the port from which that MAC address was learned previously

 Broadcast or multicast frames forwarding: The switch examines the destination MAC address, and if it is
broadcast or multicast, the switch forwards the frame out all ports except the incoming port (unless using Internet
Group Management Protocol [IGMP] with multicast, in which case, it will only send the frame to specific ports).
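The learning, filtering, forwarding, and flooding decisions described above can be modeled in a few lines of Python. The sketch below keeps the MAC address table as a dictionary keyed by MAC address; aging timers and VLANs are omitted for brevity, and the class and method names are purely illustrative.

class SimpleSwitch:
    """Toy model of transparent switching: learn source MACs, then filter, forward, or flood."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                    # MAC address -> port it was learned on

    def receive(self, frame_src: str, frame_dst: str, in_port: str) -> list:
        self.mac_table[frame_src] = in_port    # learning: record the source MAC and incoming port
        if frame_dst == "ff:ff:ff:ff:ff:ff" or frame_dst not in self.mac_table:
            # broadcast or unknown unicast: flood out all ports except the incoming one
            return [p for p in self.ports if p != in_port]
        out_port = self.mac_table[frame_dst]
        if out_port == in_port:
            return []                          # filtering: destination is on the same port, drop
        return [out_port]                      # known unicast: forward out the learned port

sw = SimpleSwitch(["Fa0/1", "Fa0/2", "Fa0/3"])
print(sw.receive("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", "Fa0/1"))   # flood (unknown unicast)
print(sw.receive("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", "Fa0/2"))   # ['Fa0/1'] (learned earlier)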

VLAN Introduction
A poorly designed network with a large number of devices in the same LAN segment suffers in performance because of a large broadcast and failure domain, limited security control, and so on. A router could be used to solve the issue because it blocks broadcasts, but routers are typically slower and more expensive, and often do not fit the design of a network.

A commonly used solution is the use of VLANs, which segment a network on a per-port basis and can span multiple switches. This allows you to logically segment a switched network on an organizational basis by functions, project
teams, or applications rather than on a physical or geographical basis. For example, all workstations and servers used by a
particular workgroup team can be connected to the same VLAN, regardless of their physical connections to the network or
the fact that they might be intermingled with other teams. Reconfiguration of the network can be done through software
rather than by physically unplugging and moving devices or wires.

To understand VLANs, you need a solid understanding of LANs. A LAN is a group of devices that share a common broadcast
domain. When a device on the LAN sends broadcast messages, the switch floods the broadcast messages (as well as
unknown unicast) to all ports except the incoming port. Therefore, all other devices on the LAN receive them. You can
think of a LAN and a broadcast domain as being basically the same. Without VLANs, a switch considers all its interfaces to
be in the same broadcast domain. In other words, all connected devices are in the same LAN. With VLANs, a switch can put
some interfaces into one broadcast domain and some into another. The individual broadcast domains that are created by
the switch are called VLANs. A VLAN is a group of devices on one or more LANs that are configured to communicate as if
they were attached to the same wire, when in fact, they are located on a number of different LAN segments.

A VLAN allows a network administrator to create logical groups of network devices. These devices act like they are in their
own independent network, even if they share a common infrastructure with other VLANs. Each VLAN is a separate Layer 2
broadcast domain, which is usually mapped to a unique IP subnet (Layer 3 broadcast domain). A VLAN can exist on a single
switch or span multiple switches. VLANs can include devices in a single building or multiple-building infrastructures, as
shown in the figure.

Within the switched internetwork, VLANs provide segmentation and organizational flexibility. You can design a VLAN
structure that lets you group devices that are segmented logically by functions, project teams, and applications without
regard to the physical location of the users. VLANs allow you to implement access and security policies for particular
groups of users. If a switch port is operating as an access port, it can be assigned to only one VLAN, which adds a layer of
security. Multiple ports can be assigned to each VLAN. Ports in the same VLAN share broadcasts. Ports in different VLANs
do not share broadcasts. Containing broadcasts within a VLAN improves the overall performance of the network.
To carry traffic for multiple VLANs across multiple switches, you need a trunk to connect each pair of switches. VLANs can
also connect across WANs. Traffic cannot pass directly to another VLAN (between broadcast domains) within the switch or
between two switches. To interconnect two different VLANs, you must use routers or Layer 3 switches. The process of
forwarding network traffic from one VLAN to another VLAN using a router is called inter-VLAN routing. Routers perform
inter-VLAN routing by either having a separate router interface for each VLAN, or by using a trunk link to carry traffic for all
VLANs. The devices on the VLANs send traffic through the router to reach other VLANs.

Usually, subnet numbers are chosen to reflect the VLANs with which they are associated. The figure shows that VLAN 2 uses
subnet 10.0.2.0/24, VLAN 3 uses 10.0.3.0/24, and VLAN 4 uses 10.0.4.0/24. In this example, the third octet clearly
identifies the VLAN that the device belongs to. The VLAN design must take into consideration the implementation of a
hierarchical network-addressing scheme.

Cisco switches have a factory default configuration in which various default VLANs are preconfigured to support various
media and protocol types. The default Ethernet VLAN is VLAN 1, which contains all ports by default.

If you want to communicate with the Cisco switch for management purposes from a remote client that is on a different
VLAN, which means it is on a different subnet, then the switch must have an IP address and default gateway configured.
This IP address must be in the management VLAN, which is by default VLAN 1.

Trunking with 802.1Q

Without trunking, running many VLANs between switches would require the same number of interconnecting links.

If every port belongs to one VLAN and you have several VLANs that are configured on switches, then interconnecting them
requires one physical cable per VLAN. When the number of VLANs increases, the number of required interconnecting links
also increases. Ports are then used for interswitch connectivity instead of attaching end devices.

Instead, you can use one connection configured as a trunk:

 Combining many VLANs on the same port is called trunking.

 A trunk allows the transport of frames from different VLANs.

 Each frame has a tag that specifies the VLAN to which it belongs.

 The receiving device forwards the frames to the corresponding VLAN based on the tag information.
A trunk is a point-to-point link between two network devices, such as two switches, a switch and a router, or a switch and a server. Ethernet trunks carry
the traffic of multiple VLANs over a single link and allow you to extend the VLANs across an entire network. A trunk does
not belong to a specific VLAN; rather, it is a conduit for VLANs between devices. By default, on a Cisco Catalyst switch, all
configured VLANs are carried over a trunk interface.

Note

A trunk could also be used between a network device and a server or another device that is equipped with an appropriate
trunk-capable NIC.

If your network includes VLANs that span multiple interconnected switches, the switches must use VLAN trunking on the
connections between them. Switches use a process called VLAN tagging in which the sending switch adds another header
to the frame before sending it over the trunk. This extra header is called a tag and includes a VLAN ID (VID) field so that the
sending switch can list the VLAN ID and the receiving switch can identify the VLAN to which each frame belongs. The
switch does so by using the 802.1Q encapsulation header. IEEE 802.1Q uses an internal tagging mechanism that inserts an
extra 4-byte tag field into the original Ethernet frame between the Source Address and Type or Length fields. As a result,
the frame still has the original source and destination MAC addresses. Also, because the original header has been
expanded, 802.1Q encapsulation forces a recalculation of the original FCS field in the Ethernet trailer, because the FCS is
based on the content of the entire frame. It is the responsibility of the receiving Ethernet switch to look at the 4-byte tag
field and determine where to deliver the frame.
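The 4-byte 802.1Q tag sits between the Source Address and the Type or Length field, as described above. The following Python sketch inserts a tag with the standard TPID value 0x8100 and a given VLAN ID into an untagged frame and reads it back; the priority bits are left at zero, the FCS recalculation is omitted, and the sample frame bytes are fabricated for illustration.

import struct

TPID_8021Q = 0x8100     # Tag Protocol Identifier used by IEEE 802.1Q

def add_dot1q_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert a 4-byte 802.1Q tag after the 12 address bytes of an untagged Ethernet frame."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)    # Tag Control Information: PCP + DEI(0) + VID
    tag = struct.pack("!HH", TPID_8021Q, tci)
    return frame[:12] + tag + frame[12:]           # dst(6) + src(6) + tag(4) + type + payload

def read_vlan_id(frame: bytes):
    """Return the VLAN ID if the frame carries an 802.1Q tag, otherwise None."""
    if struct.unpack("!H", frame[12:14])[0] == TPID_8021Q:
        return struct.unpack("!H", frame[14:16])[0] & 0x0FFF
    return None

untagged = bytes.fromhex("ffffffffffff") + bytes.fromhex("04fe7f29ec52") + b"\x08\x00" + b"payload"
tagged = add_dot1q_tag(untagged, vlan_id=2)
print(read_vlan_id(tagged))      # 2
print(read_vlan_id(untagged))    # None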

Trunking allows switches to pass frames from multiple VLANs over a single physical connection. For example, the figure
shows Switch 1 receiving a broadcast frame on the Fa0/1 interface, which is a member of VLAN 1. In a broadcast, the
frame must be forwarded to all ports in VLAN 1. Because there are ports on Switch 2 that are members of VLAN 1, the frame must be forwarded to Switch 2. Before forwarding the frame, Switch 1 adds a header that identifies the frame as belonging to VLAN 1. This header tells Switch 2 that the frame should be forwarded to the VLAN 1 ports. Switch 2 removes the header and then forwards the frame out all ports that are part of VLAN 1.

As another example, the device on the Switch 1 Fa0/5 interface sends a broadcast. Switch 1 sends the broadcast out of
port Fa0/6 (because this port is in VLAN 2) and out Fa0/23 (because it is a trunk, meaning that it supports multiple VLANs).
Switch 1 adds a trunking header to the frame, listing a VLAN ID of 2. Switch 2 strips off the trunking header, and because
the frame is part of VLAN 2, Switch 2 knows to forward the frame out of only ports Fa0/5 and Fa0/6 and not ports Fa0/1
and Fa0/2.
Network Routes and Routing 

In every LAN, Ethernet is used to exchange data locally. But if you want to communicate between different LANs—for
example, if a user in an enterprise campus wants to communicate with a user at a remote site or globally with a web
server—this exchange will cross many different physical networks and devices. For communication to happen, you need an
addressing system that uniquely identifies every device globally and enables delivery of packets between them. The
delivery function is provided by the TCP/IP Internet layer, which provides services to exchange the data over the network
between identified end devices.

IP Overview

The IP component of the TCP/IP suite determines where packets of data are routed, based on their destination addresses.
IP has certain characteristics that are related to how it manages this function.

IP uses packets to carry information through the network. A packet is a self-contained, independent entity that contains
data and sufficient information to be routed from the source to the destination without reliance on previous packets.

IP has these characteristics:

 IP operates at Layer 3 or the network layer of the OSI reference model and at the Internet layer of the TCP/IP
stack.
 IP is a connectionless protocol, in which a one-way packet is sent to the destination without advance notification
to the destination device. The destination device receives the data and does not return any status information to
the sending device.

 Each packet is treated independently, which means that each packet can travel a different way to the destination.

 IP uses hierarchical addressing, in which the network ID is the equivalent of a street and the host ID is the
equivalent of a house or an office building on that street.

 IP provides service on a best-effort basis and does not guarantee packet delivery. A packet can be misdirected,
duplicated, or lost on the way to its destination.

 IP does not provide any special features that recover corrupted packets. Instead, the end systems of the network
provide these services.

 IP operates independently of the medium that is carrying the data.

 There are two types of IP addresses: IPv4 and IPv6, the latter becoming increasingly important in modern
networks.

IPv4 Address Representation

Every device must be assigned a unique address to communicate on an IP network. This includes hosts or endpoints (such
as PCs, laptops, printers, web servers, smartphones, and tablets), as well as intermediary devices (such as routers and
switches).

Physical street addresses are necessary to identify the locations of specific homes and businesses so that mail can reach
them efficiently. In the same way, logical IP addresses are used to identify the location of specific devices on an IP network
so that data can reach those network locations. Every host that is connected to a network or the Internet has a unique IP
address that identifies it. Structured addressing is crucial to route packets efficiently. Learning how IP addresses are
structured and how they function in the operation of a network provides an understanding of how IP packets are
forwarded over networks using TCP/IP.

An IPv4 address is a 32-bit number, is hierarchical, and consists of two parts:

 The network address portion (network ID): The network ID is the portion of an IPv4 address that uniquely
identifies the network in which the device with this IPv4 address resides. The network ID is important because
most hosts on a network can directly communicate only with devices in the same network. If the hosts need to
communicate with devices that have interfaces assigned to some other network ID, there must be a network
device—a router or a multilayer switch—that can route data between the networks.

 The host address portion (host ID): The host ID is the portion of an IPv4 address that uniquely identifies a device
on a given IPv4 network. Host IDs are assigned to individual devices, both endpoints and intermediary devices.

Note

There are two versions of IP that are in use: IPv4 and IPv6. IPv4 is the most common and currently is used on the Internet.
It has been the mainstay protocol since the 1980s. IPv6 was designed to solve the problem of global IPv4 address
exhaustion. The adoption of IPv6 was initially very slow but is now reaching wider deployment.

Practical Example of an IPv4 Address


Recall that IPv4 addresses are most often written in the dotted-decimal notation, which consists of four sets of 8 bits
(octets) converted from binary to decimal numbers, separated by dots. The following example shows an IPv4 address in
decimal form translated into its binary form, using the method described earlier.
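Because the worked example itself appears in a figure, the following short Python sketch performs the same dotted-decimal-to-binary translation for any IPv4 address; the address used below is only an example.

def dotted_decimal_to_binary(address: str) -> str:
    """Convert an IPv4 address such as 192.168.10.1 to its 32-bit binary representation."""
    return ".".join(f"{int(octet):08b}" for octet in address.split("."))

print(dotted_decimal_to_binary("192.168.10.1"))
# 11000000.10101000.00001010.00000001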

IPv4 Header Fields

Before you can send an IP packet, there needs to be a format that all IP devices agree on to route a packet from the source
to the destination. All that information is contained in the IP header. The IPv4 header is a container for values that are
required to achieve host-to-host communications. Some fields (such as the IP version) are static, and others, such as Time
to Live (TTL), are modified continually in transit.

The IPv4 header has several fields. First, you will learn about these four fields:

 Service type: Provides information on the desired quality of service

 TTL: Limits the lifetime of a packet

Note

The TTL value does not use time measurement units. It is a value between 1 and 255. The packet source sets the value, and
each router that receives the packet decrements the value by 1. If the value remains above 0, the router forwards the
packet. If the value reaches 0, the packet is dropped. This mechanism keeps undeliverable packets from traveling between
networks for an indefinite amount of time.

 Source address: Specifies the 32-bit binary value that represents the IPv4 address of the sending endpoint

 Destination address: Specifies the 32-bit binary value that represents the IPv4 address of the receiving endpoint

Other fields in the header include:

 Version: Describes the version of IP

 IHL: Internet Header Length (IHL) describes the length of the header


 Total length: Describes the length of a packet, including header and data

 Identification: Used for unique fragment identification

 Flag: Sets various control flags regarding fragmentation

 Fragment offset: Indicates where a specific fragment belongs

 Protocol: Indicates the upper-layer protocol that is used in the data portion of an IPv4 packet. For example, a
protocol value of 6 indicates this packet carries a TCP segment.

 Header checksum: Used for header error detection

 Options: Includes optional parameters

 Padding: Used to ensure that the header ends on a 32-bit boundary
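One way to see this field layout is to unpack the first 20 bytes of an IPv4 packet. The following Python sketch uses the struct module to extract some of the fields discussed here; the sample header bytes are fabricated for illustration, and the checksum in them is not meant to be valid.

import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte portion of an IPv4 header (options, if any, are ignored)."""
    fields = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    version_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = fields
    return {
        "version": version_ihl >> 4,
        "ihl_bytes": (version_ihl & 0x0F) * 4,   # IHL is counted in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                       # 6 = TCP, 17 = UDP, 1 = ICMP
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }

# Fabricated header: version 4, IHL 5, TTL 127, protocol 6 (TCP), 10.0.0.1 -> 10.0.0.2
sample = bytes.fromhex("45000028abcd40007f066bcd0a0000010a000002")
print(parse_ipv4_header(sample))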

Subnet Masks

A subnet mask is a 32-bit number that describes which portion of an IPv4 address refers to the network ID and which part
refers to the host ID.

The subnet mask is configured on a device along with the IPv4 address.

If a subnet mask has a binary 1 in a bit position, the corresponding bit in the address is part of the network ID. If a subnet
mask has a binary 0 in a bit position, the corresponding bit in the address is part of the host ID.

The figure represents an IPv4 address separated into a network and a host part. In the example, the network part ends on
the octet boundary, which coincides with what you learned about IPv4 address class boundaries. The address in the figure
belongs to class B, where the first two octets (16 bits) indicate the network part, and the remaining two octets represent
the host part. Therefore, you create the subnet mask by setting the first 16 bits of the subnet mask to binary 1 and the last
16 bits of the subnet mask to 0.

Notice the prefix /16; it is another way of expressing the subnet mask, and it matches the number of network bits that are
set to binary 1 in the subnet mask.
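Python's standard ipaddress module performs exactly this split between the network and host parts, so it is a convenient way to check the reasoning above. The interface value below is just an example of a /16 assignment.

import ipaddress

# A Class B-style address with a /16 mask: first 16 bits network, last 16 bits host
iface = ipaddress.ip_interface("172.16.50.10/16")

print(iface.network)            # 172.16.0.0/16  (network ID)
print(iface.netmask)            # 255.255.0.0    (subnet mask in dotted-decimal form)
print(iface.network.prefixlen)  # 16             (prefix length: number of binary 1s in the mask)
print(bin(int(iface.netmask)))  # 0b11111111111111110000000000000000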

Networks are not always assigned the same prefix. Depending on the number of hosts on the network, the prefix that is
assigned may be different. Having a different prefix number changes the host range and broadcast address for each
network.

In the early days of the Internet, the standard reserved the first 8 bits of an IPv4 address for the network part and the
remaining 24 bits for the host part. With 24 host bits, you can provide 16,777,214 IPv4 host addresses. It soon became
clear that such address allocation is inefficient, because most organizations require several smaller networks rather than one network with thousands of computers. Also, most organizations need several networks of different
sizes.

The first step to address this need was made in 1981 when the IETF released RFC 790, where the IPv4 address classes were
introduced for the first time. Here, the Internet Assigned Numbers Authority (IANA) determined IPv4 Class A, Class B, and
Class C.

A Class A address block is designed to support extremely large networks with more than 16 million host addresses. The
Class A address uses only the first octet (8 bits) of the 32-bit number to indicate the network address. The remaining three
octets of the 32-bit number are used for host addresses. The first bit of a Class A address is always a 0. Because the first bit
is a 0, the lowest number that can be represented is 00000000 (decimal 0), and the highest number that can be
represented is 01111111 (decimal 127). However, these two network numbers, 0 and 127, are reserved and cannot be
used as network addresses. Therefore, any address that has a value in the range of 1 to 126 in the first octet of the 32-bit
number is a Class A address.

The Class B address space is designed to support the needs of moderate to large networks with more than 65,000 hosts.
The Class B address uses two of the four octets (16 bits) to indicate the network address. The remaining two octets specify
host addresses. The first 2 bits of the first octet of a Class B address are always binary 10. Starting the first octet with
binary 10 ensures that the Class B space is separated from the upper levels of the Class A space. The remaining 6 bits in the
first octet may be populated with either ones or zeros. Therefore, the lowest number that can be represented with a Class
B address is 10000000 (decimal 128), and the highest number that can be represented is 10111111 (decimal 191). Any
address that has a value in the range of 128 to 191 in the first octet is a Class B address.

The Class C address space is the most commonly available address class. This address space is intended to provide
addresses for small networks with a maximum of 254 hosts. In a Class C address, the first three octets (24 bits) of the
address identify the network portion, with the remaining octet reserved for the host portion. A Class C address begins with
binary 110. Therefore, the lowest number that can be represented is 11000000 (decimal 192), and the highest number
that can be represented is 11011111 (decimal 223). If an address contains a number in the range of 192 to 223 in the first
octet, it is a Class C address.
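The first-octet ranges described above translate directly into a simple check. The following Python sketch classifies an IPv4 address by its first octet, purely as an illustration of the historical classful rules; the function name is illustrative.

def classful_class(address: str) -> str:
    """Return the historical address class (A, B, or C) based on the first octet."""
    first_octet = int(address.split(".")[0])
    if 1 <= first_octet <= 126:
        return "Class A"        # first bit 0 (network numbers 0 and 127 are reserved)
    if 128 <= first_octet <= 191:
        return "Class B"        # first bits 10
    if 192 <= first_octet <= 223:
        return "Class C"        # first bits 110
    return "Not a Class A, B, or C address"

print(classful_class("10.1.1.1"))      # Class A
print(classful_class("172.16.5.4"))    # Class B
print(classful_class("192.168.1.10"))  # Class C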

Note

IPv4 hosts use only Class A, B, and C IPv4 addresses for unicast (host-to-host) communications. Class D (multicast) and Class E (experimental) ranges also exist; RFC 3330, published in 2002, documented these and other special-use IPv4 address blocks. That RFC was later made obsolete by another RFC defining global and other specialized IPv4 address blocks. Class D and Class E are mentioned here for completeness, but they are outside the scope of this discussion.

Assigning IPv4 addresses to classes is known as classful addressing. Each IPv4 address is broken down into a network ID
and a host ID. In addition, a bit or bit sequence at the start of each address determines the class of the address.
Nowadays, classless addressing is predominantly used, and the two mechanisms to achieve that are subnetting and
variable-length subnet mask (VLSM).

Subnetting allows you to create multiple logical networks that exist within a single larger network. When you are designing
a network addressing scheme, you need to be able to determine how many logical networks you will need and how many
devices you will be able to fit into these smaller networks.

When you are using subnetting, the same subnet mask is applied for all the subnets of a given network. This way, each
subnet has the same number of available host addresses. This approach is sometimes what you need, but most organizations require several networks of various sizes rather than one network with thousands of devices. In that case, using the same subnet mask for all subnets of a given network ends up wasting address space.

VLSM allows you to use more than one subnet mask within a network to achieve more efficient use of IP addresses.
Instead of using the same subnet mask for all subnets, you can use the most efficient subnet mask for each subnet. The
most efficient subnet mask for a subnet is the mask that provides an appropriate number of host addresses for that
individual subnet.
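The ipaddress module can also demonstrate the difference between equal-size subnetting and VLSM. In the sketch below, the same 192.168.1.0/24 block is first split into equal /26 subnets and then carved into differently sized pieces; the chosen prefix lengths are arbitrary examples.

import ipaddress

block = ipaddress.ip_network("192.168.1.0/24")

# Plain subnetting: every subnet gets the same mask (/26), so the same number of hosts (62 each)
equal = list(block.subnets(new_prefix=26))
print([str(n) for n in equal])

# VLSM: choose the most efficient mask per subnet instead of one size for all
vlsm = [
    ipaddress.ip_network("192.168.1.0/25"),     # 126 hosts for a large user segment
    ipaddress.ip_network("192.168.1.128/26"),   # 62 hosts for a smaller segment
    ipaddress.ip_network("192.168.1.192/30"),   # 2 hosts for a point-to-point link
]
for net in vlsm:
    print(net, "->", net.num_addresses - 2, "usable host addresses")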

Private vs. Public IPv4 Addresses

As the Internet began to grow exponentially in the 1990s, it became clear that if the current growth trajectory continued,
eventually there would not be enough IPv4 addresses for everyone who wanted one. Work began on a permanent
solution, which would become IPv6, but in the interim, several other solutions were developed. These solutions included
Network Address Translation (NAT), classless interdomain routing (CIDR), private IPv4 addressing, and VLSM.

Public IPv4 Addresses

Hosts that are publicly accessible over the Internet require public IP addresses. Internet stability depends directly on the
uniqueness of public network addresses. Therefore, a mechanism is needed to ensure that addresses are, in fact, unique.
This mechanism was originally managed by the InterNIC. The IANA succeeded the InterNIC. The IANA carefully manages
the remaining supply of IPv4 addresses to ensure that duplication of public IP addresses does not occur. Duplication would
cause instability in the Internet and would compromise its ability to deliver packets only to the correct network.
With few exceptions, businesses and home Internet users receive their IP address assignment from their local Internet
registry (LIR), which typically is their ISP. These IP addresses are called provider-aggregatable (as opposed to provider-
independent addresses) because they are linked to the ISP. If you change ISPs, you will need to readdress your Internet-
facing hosts.

The following table provides a summary of public IPv4 addresses.

LIRs obtain IP address pools from their regional Internet registry (RIR):

 African Network Information Center (AfriNIC)

 Asia Pacific Network Information Center (APNIC)

 American Registry for Internet Numbers (ARIN)

 Latin American and Caribbean Network Information Center (LACNIC)

 Réseaux IP Européens Network Coordination Centre (RIPE NCC)

With the rapid growth of the Internet, public IPv4 addresses began to run out. New mechanisms such as NAT, CIDR, VLSM,
and IPv6 were developed to help solve the problem.

Private IPv4 Addresses

Internet hosts require a globally routable and unique IPv4 address, but private hosts that are not connected to the Internet
can use any valid address, as long as it is unique within the private network. However, because many private networks
exist alongside public networks, deploying random IPv4 addresses is strongly discouraged.

In February 1996, the IETF published RFC 1918, "Address Allocation for Private Internets," to both ease the accelerating
depletion of globally routable IPv4 addresses and provide companies an alternative to using arbitrary IPv4 addresses.
Three blocks of IPv4 addresses (one Class A network, 16 Class B networks, and 256 Class C networks) are designated for
private, internal use.

Addresses in these ranges are not routed on the Internet backbone. Internet routers are configured to discard private
addresses. In a private intranet, these private addresses can be used instead of globally unique addresses. When a network
that is using private addresses must connect to the Internet, private addresses must be translated to public addresses. This
translation process is called NAT. A router is often the network device that performs NAT.

The following table provides a summary for private IPv4 addresses.
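Because the summary appears in a table, the three RFC 1918 blocks are also listed in the short Python sketch below, which checks whether a given address belongs to any of them; the helper name and sample addresses are illustrative.

import ipaddress

RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),       # one Class A network
    ipaddress.ip_network("172.16.0.0/12"),    # 16 contiguous Class B networks (172.16.0.0 to 172.31.255.255)
    ipaddress.ip_network("192.168.0.0/16"),   # 256 contiguous Class C networks
]

def is_rfc1918(address: str) -> bool:
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in RFC1918_BLOCKS)

for addr in ("10.1.2.3", "172.20.0.9", "192.168.1.10", "8.8.8.8"):
    print(addr, "private" if is_rfc1918(addr) else "public")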

IPv6 Overview

Although VLSM, NAT, and other workarounds (for avoiding the transition to IPv6) are available, networks with Internet
connectivity must begin the transition to IPv6 as soon as possible. For IPv4 networks that provide goods and services to
Internet users, it is especially important because the transition by the Internet community is already under way. New
networks may be unable to acquire IPv4 addresses, and networks that are running IPv6 exclusively will not be able to
communicate with IPv4-only networks unless you configure an intermediary gateway or another transition mechanism.
IPv6 and IPv4 are completely separate protocols, and IPv6 is not backward-compatible with IPv4. As the Internet evolves,
organizations must adopt IPv6 to support future business continuity, growth, and global expansion. Furthermore, some
ISPs and Regional Internet Registries are administratively out of IPv4 addresses, which means that their supply of IPv4
addresses is now limited and organizations have to migrate to and support IPv6 networks.

As the global Internet continues to grow, its overall architecture needs to evolve to accommodate the new technologies
that support the increasing numbers of users, applications, appliances, and services. This evolution also includes Enterprise
networks and communication providers, which provide services to home users. IPv6 was proposed when it became clear
that the 32-bit addressing scheme of IPv4 could not keep up with the demands of Internet growth. IPv6 quadruples the
number of network address bits from 32 bits (in IPv4) to 128 bits. This means that the address pool for IPv6 is around 340
undecillion, or 340 trillion trillion trillion, which is an unimaginably large number.

The larger IPv6 address space allows networks to scale and provide global reachability. The simplified IPv6 packet header
format handles packets more efficiently. The IPv6 network is designed to embrace encryption and favor targeted multicast
over often problematic broadcast communication.

IPv6 includes several features that make it attractive for building global-scale, highly effective networks:

 Larger address space: The expanded address space includes several IP addressing enhancements:

1. It provides improved global reachability and flexibility.

2. A better aggregation of IP prefixes is announced in the routing tables. The aggregation of routing prefixes
limits the number of routing table entries, which creates efficient and scalable routing tables.

3. Multihoming increases the reliability of the Internet connection of an IP network. With IPv6, a host can
have multiple IP addresses over one physical upstream link. For example, a host can connect to several
ISPs.

4. Autoconfiguration is available.

5. There are more plug-and-play options for more devices.

6. Simplified mechanisms are available for address renumbering and modification.

 Simpler header: Streamlined fixed header structures make the processing of IPv6 packets faster and more efficient
for intermediate routers within the network. This fact is especially true when large numbers of packets are routed
in the core of the IPv6 Internet.

 Security and mobility: Features that were not part of the original IPv4 specification, such as security and mobility,
are now built into IPv6. IP Security (IPsec) is available in IPv6, allowing the IPv6 networks to be secure. Mobility
enables mobile network devices to move around in networks without breaks in established network connections.

 Transition richness: IPv6 also includes a rich set of tools to aid in transitioning networks from IPv4 to allow an
easy, nondisruptive transition over time to IPv6-dominant networks. An example is dual stacking, in which devices
run both IPv4 and IPv6.
IPv6 addresses consist of 128 bits and are represented as a series of eight 16-bit hexadecimal fields that are separated by
colons. Although uppercase and lowercase are permitted, it is best practice to use lowercase for IPv6 representation:

Address representation:

 Format is x:x:x:x:x:x:x:x, where x is a 16-bit hexadecimal field:

1. Example: 2001:0db8:010f:0001:0000:0000:0000:0acd

 Leading zeros in a field can be omitted:

1. Example: 2001:db8:10f:1:0:0:0:acd

 Successive fields of 0 are represented as "::" but only once in an address:

1. Example: 2001:db8:10f:1::acd

Note

The a, b, c, d, e, and f in hexadecimal fields can be either uppercase or lowercase, but it is best practice to use lowercase
for IPv6 representation.

Here are two ways to shorten the writing of IPv6 addresses:

 The leading zeros in a field can be omitted, so 010f can be written as 10f. A field that contains all zeros (0000) can
be written as 0.

 Successive fields of zeros can be represented as a double colon (::) but only once in an address. An address parser
can identify the number of missing zeros by separating the two parts and filling in zeros until the 128 bits are
completed. However, if two double colons are placed in the address, there is no way to identify the size of each
block of zeros. Therefore, only one double colon is possible in a valid IPv6 address.

The use of the double-colon technique makes many addresses very small; for example, ff01:0:0:0:0:0:0:1 becomes ff01::1.
The all-zeros address is written as a double colon; this type of address representation is known as the unspecified address.
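Python's ipaddress module applies both shortening rules automatically, which makes it handy for comparing a compressed form with the full form. The addresses below are the same ones used in the examples above.

import ipaddress

addr = ipaddress.ip_address("2001:0db8:010f:0001:0000:0000:0000:0acd")
print(addr.compressed)    # 2001:db8:10f:1::acd  (leading zeros dropped, one :: for the zero run)
print(addr.exploded)      # 2001:0db8:010f:0001:0000:0000:0000:0acd

print(ipaddress.ip_address("ff01:0:0:0:0:0:0:1").compressed)   # ff01::1
print(ipaddress.ip_address("::").compressed)                   # :: (the unspecified address)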

IPv6 Address Types

IPv6 supports three basic types of addresses. Each address type has specific rules regarding its construction and use. These
types of addresses are:

 Unicast: Unicast addresses are used in a one-to-one context.

 Multicast: A multicast address identifies a group of interfaces. Traffic that is sent to a multicast address is sent to
multiple destinations at the same time. An interface may belong to any number of multicast groups.

 Anycast: An IPv6 anycast address is assigned to an interface on more than one node. When a packet is sent to an
anycast address, it is routed to the nearest interface that has this address. The nearest interface is found according
to the measure of metric of the particular routing protocol that is running. All nodes that share the same address
should behave the same way so that the service is offered similarly, regardless of the node that services the
request.
IPv6 does not support broadcast addresses in the way that they are used in IPv4. Instead, specific multicast addresses
(such as the all-nodes multicast address) are used.

Role of a Router

A router is a networking device that forwards packets between different networks.

While switches exchange data frames between segments to enable communication within a single network, routers are
required to reach hosts that are not in the same network. Routers enable internetwork communication by connecting
interfaces in multiple networks. For example, the router in the figure above has one interface connected to the
192.168.1.0/24 network and another interface connected to the 192.168.2.0/24 network. The router uses a routing table
to route traffic between the two networks.

In the following figure, data frames travel between the various endpoints on local area network (LAN) A. The switch
enables the communication to all devices within the same network whose network IPv4 address is 10.18.0.0/16. Likewise,
the LAN B switch enables communication among the hosts on LAN B, whose network IPv4 address is 10.22.0.0/16.
A host in LAN A cannot communicate with a host in LAN B without the router. Routers enable communication between hosts that are not in the same LAN. Routers can perform this function because they can be attached to multiple networks and have the ability to route between them. In the figure, the router is attached to two networks—10.18.0.0/16
and 10.22.0.0/16. Routers are essential components of large IP networks, because they can accommodate growth across
wide geographical areas.

This figure illustrates another important routing concept. Networks to which the router is attached are called local or
directly connected networks. All other networks—networks that a router is not directly attached to—are called remote
networks.

The topology in the figure shows Router X, which is directly attached to three networks—172.16.1.0/24, 172.16.2.0/24,
and 192.168.100.0/24. To Router X, all other networks—in other words, 10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24
—are remote networks. To Router Y, networks 10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24 are directly connected
networks. Router X and Router Y have a common directly connected network—192.168.100.0/24.

Router Functions

Routers have these two important functions:

 Path determination: Routers use their routing tables to determine how to forward packets. Each router must
maintain its own local routing table, which contains a list of all destinations that are known to the router, and
information about how to reach those destinations. When a router receives an incoming packet, it examines the
destination IP address in the packet and searches for the best match between the destination address and the
network addresses in the routing table. A matching entry may indicate that the destination is directly connected to
the router or that it can be reached via another router. This router is called the next-hop router and is on the path
to the final destination. If there is no matching entry, the router sends the packet to the default route. If there is
no default route, the router drops the packet.

 Packet forwarding: After a router determines the appropriate path for a packet, it forwards the packet through a
network interface toward the destination network. Routers can have interfaces of different types. When
forwarding a packet, routers perform encapsulation following the OSI Layer 2 protocol implemented at the exit
interface. The figure shows router A, which has two Fast Ethernet interfaces and one serial interface. When the
router A receives an Ethernet frame, it de-encapsulates it, examines it, and determines the exit interface. If the
router needs to forward the packet out of the serial interface, the router will encapsulate the frame according to
the Layer 2 protocol used on the serial link. The figure also shows a conceptual routing table that lists destination
networks known to the router along with its corresponding exit interface or next-hop address. If there is an
interface on the router that has an IPv4 address within the destination network, the destination network is
considered "directly connected" to the router. For example, assume that router A receives a packet on its Serial
0/0/0 interface that is destined for a host on network 10.1.1.0. Because the routing table indicates that network
10.1.1.0 is directly connected, router A forwards the packet out of its Fast Ethernet 0/1 interface, and the
switches on the segment deliver the frame to the destination host. If a destination network in the routing table is not directly
connected, the packet must reach the destination network via the next-hop router. For example, assume that
router A receives a packet on its Serial 0/0/0 interface and the destination host address is on the 10.1.3.0 network.
In this case, it must forward the packet to the router B interface with the IPv4 address 10.1.2.2.

Routers preserve knowledge of the network topology and forward packets based on destinations, choosing the best path
across the topology. This knowledge of the topology and changes in the topology can be maintained statically or
dynamically. In large enterprise campus or data center environments, typically, you would use one of the available routing
protocols that calculate route information using dynamic routing algorithms.

Routing is the process of selecting a path to forward data that originated from one network and is destined for a different
network. Routers gather and maintain routing information to enable the transmission and receipt of such data packets.

Conceptually, routing information takes the form of entries in a routing table, with one entry for each identified route. You
can manually configure the entries in the routing table, or the router can use a routing protocol to create and maintain the
routing table dynamically to accommodate network changes when they occur.
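
The path determination step described earlier is essentially a longest-prefix-match lookup against the routing table. The
following simplified sketch, written against a hypothetical routing table with Python's standard ipaddress module,
illustrates the idea; it is a conceptual model, not how routers implement their forwarding tables in hardware.

import ipaddress

# Hypothetical routing table: (network, next hop or exit interface)
routing_table = [
    (ipaddress.ip_network("10.1.1.0/24"), "directly connected, FastEthernet0/1"),
    (ipaddress.ip_network("10.1.3.0/24"), "via next hop 10.1.2.2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "default route via 192.0.2.1"),
]

def lookup(destination: str) -> str:
    dst = ipaddress.ip_address(destination)
    # Keep every entry that matches, then choose the most specific (longest) prefix.
    matches = [(net, info) for net, info in routing_table if dst in net]
    if not matches:
        return f"{destination} -> no route: drop packet"
    best_net, info = max(matches, key=lambda entry: entry[0].prefixlen)
    return f"{destination} -> {best_net} ({info})"

print(lookup("10.1.1.25"))
print(lookup("10.1.3.7"))
print(lookup("198.51.100.9"))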

A router must perform these actions to route data:

 Identify the destination of the packet: Determine the destination network address of the packet that needs to be
routed by using the subnet mask.

 Identify the sources of routing information: Determine from which sources a router can learn paths to network
destinations.

 Identify routes: Determine the possible routes to the intended destination.

 Select routes: Select the best path to the intended destination.

 Maintain and verify routing information: Update known routes and the selected route according to network
conditions.

If the destination network is directly connected—that is, if there is an interface on the router that belongs to that network
—the router already knows which interface to use when forwarding packets. If destination networks are not directly
attached, the router must learn which route to use when forwarding packets.

The destination information can be learned in two ways:

 You can enter destination network information manually, also known as a static route.

 Routers can learn destination network information dynamically through a dynamic routing protocol process that is
running on the router.
The routing information that a router learns is offered to the routing table. The router relies on this table to tell it which
interfaces to use when forwarding packets.

A routing table may contain four types of entries:

 Directly connected networks

 Dynamic routes

 Static routes

 Default routes

All directly connected networks are added to the routing table automatically. A newly deployed router, without any
configured interfaces, has an empty routing table. A directly connected route is added after you assign a valid IP address to
the router interface, the interface is enabled, and the interface receives a carrier signal from another device (router, switch,
end device, and so on). If the interface hardware fails or is administratively shut down, the entry for that network is removed
from the routing table.

Dynamic routing protocols are used by routers to share information about the reachability and status of remote networks.
A dynamic routing protocol allows routers to automatically learn about remote networks from other routers. These
networks, and the best path to each, are added to the routing table of the router and identified as a network learned by a
specific dynamic routing protocol. Cisco routers can support a variety of dynamic IPv4 and IPv6 routing protocols, such as
Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP),
Intermediate System-to-Intermediate System (IS-IS), Routing Information Protocol (RIP), and so on. The routing
information is updated when changes in the network occur. Larger networks require dynamic routing because there are
usually many subnets and constant changes. These changes require updates to routing tables across all routers in the
network to prevent connectivity loss. Dynamic routing protocols ensure that the routing table is automatically updated to
reflect changes in the network.

Static routes are entries that you manually enter directly into the configuration of the router. Static routes are not
automatically updated and must be manually reconfigured if the network topology changes. Static routes can be effective
for small, simple networks that do not change frequently. The benefits of using static routes include improved security and
resource efficiency. The main disadvantage of using static routes is the lack of automatic reconfiguration if the network
topology changes. There are two common types of static routes in the routing table—static routes to a specific network
and the default static route.

A default route is an optional entry that is used by the router if a packet does not match any other, more specific route in
the routing table. A default route is used when the route from a source to a destination is not known or when it is not
feasible for the router to maintain many routes in its routing table. A default route can be dynamically learned or statically
configured.

Static and Dynamic Routing Comparison

There are two ways that a router can learn where to forward packets to destination networks that are not directly
connected:

 Static routing: The router learns routes when an administrator manually configures the static route. The
administrator must manually update this static route entry whenever an internetwork topology change requires an
update. Static routes are user-defined routes that specify the outgoing interface that the router uses when packets are sent
to a specific destination. These administrator-defined routes allow very precise control over the routing behavior of the IP
internetwork.

 Dynamic routing: The router dynamically learns routes after an administrator configures a routing protocol that
determines routes to remote networks. Unlike the situation with static routes, after the network administrator
enables dynamic routing, the routing process automatically updates the routing table whenever the device
receives new topology information. The router learns and maintains routes to the remote destinations by
exchanging routing updates with other routers in the internetwork.

Here are the characteristics of static and dynamic routes:


 Static routes:

1. A network administrator manually enters static routes into the router.

2. A network topology change requires a manual update to the route.

3. Routing behavior can be precisely controlled.

 Dynamic routes:

1. A network routing protocol automatically adjusts dynamic routes when the topology or traffic changes.

2. Routers learn and maintain routes to the remote destinations by exchanging routing updates.

3. Routers discover new networks or other changes in the topology by sharing routing table information.

Default Gateways

A source host is able to communicate directly (without a router) with a destination host only if the two hosts are on the
same subnet. If the two hosts are on different subnets, the sending host must send the data to its default gateway, which
will forward the data to the destination. The default gateway is an address on a router (or Layer 3 switch) connected to the
same subnet that the source host is on.

Therefore, before a host can send a packet to its destination, it must first determine if the destination address is on its
local subnet or not. It uses the subnet mask in this determination. The subnet mask describes which portion of an IPv4
address refers to the network or subnet and which part refers to the host.

The source host first performs an AND operation between its own IPv4 address and subnet mask to arrive at its local subnet
address. To determine whether the destination address is on the same subnet, the source host then performs an AND
operation between the destination IPv4 address and its own subnet mask. The source host uses its own mask because it does
not know the mask of the destination; if the two devices are on the same subnet, they must share the same mask. If the
resulting subnet addresses are the same, the source knows that it and the destination are on the same subnet. Otherwise,
they are on different subnets.

For example, IPv4 host 10.10.1.241/24 is on the 10.10.1.0/24 subnet. If the host that it wants to communicate with is
10.10.1.175, it knows that this IPv4 host also is on the local 10.10.1.0/24 subnet.
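
The same-subnet check can be illustrated with a short sketch that uses Python's standard ipaddress module and the example
addresses above; it is a conceptual model of the AND comparison, not code that runs inside a host's IP stack.

import ipaddress

def same_subnet(src_ip: str, src_mask: str, dst_ip: str) -> bool:
    # The source ANDs its own address with its mask, then applies the same
    # mask to the destination address and compares the resulting subnets.
    src_net = ipaddress.IPv4Network(f"{src_ip}/{src_mask}", strict=False)
    dst_net = ipaddress.IPv4Network(f"{dst_ip}/{src_mask}", strict=False)
    return src_net.network_address == dst_net.network_address

print(same_subnet("10.10.1.241", "255.255.255.0", "10.10.1.175"))  # True: deliver directly
print(same_subnet("10.10.1.241", "255.255.255.0", "10.10.2.55"))   # False: send to the default gateway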

If the source and destination devices are on the same subnet, then the source can deliver the packet directly. If they are on
different subnets, then the packet must be forwarded to the default gateway, which will forward it to its destination. The
default gateway address must have the same network and subnet portion as the local host address; in other words, the
default gateway must be on the same subnet as the local host.
Transport Layer and Packet Delivery 

IP addressing is used to uniquely identify the devices globally. But to provide a logical connection between the endpoints
of a network and to provide transport services from a host to a destination, you need a different set of functionalities,
which are provided by the TCP/IP transport layer. Another important function of the transport layer is to provide the
interface between the application layer, which you interact with through various applications, and the underlying Internet
layer, thereby hiding the complexity of the network from the applications.

TCP/IP Transport Layer Functions

The transport layer resides between the application and Internet layers of the TCP/IP protocol stack. The TCP/IP Internet
layer directs information to its destination, but it cannot guarantee that the information will arrive in the correct order,
free of errors, or even that the information will arrive at all. The two most common transport layer protocols of the TCP/IP
suite are TCP and UDP. Both protocols manage the communication of multiple applications and provide communication
services directly to the application process on the host.

The basic service that the transport layer provides is tracking communication between applications on the source and
destination hosts. This service is called session multiplexing, and it is performed by both UDP and TCP. A major difference
between TCP and UDP is that TCP can ensure that the data is delivered, while UDP does not ensure delivery.

Note

The transport layer of the TCP/IP protocol stack maps to the transport layer of the OSI model. The protocols that operate
at this layer are said to operate at Layer 4 of the OSI model. If you hear someone use the term "Layer 4," they are referring
to the transport layer of the OSI model.
Multiple communications often occur at once; for instance, you may be searching the web and using FTP to transfer a file
at the same time. The transport layer tracks these communications and keeps them separate. This tracking is provided by
both UDP and TCP. To pass data to the proper applications, the transport layer must identify the target application. If TCP
is used, the transport layer has the additional responsibilities of establishing end-to-end connections, segmenting data and
managing each piece, reassembling the segments into streams of application data, managing flow control, and applying
reliability mechanisms.

Session Multiplexing

Session multiplexing is the process by which an IP host is able to support multiple sessions simultaneously and manage the
individual traffic streams over a single link. A session is created when a source machine needs to send data to a destination
machine. Most often, this process involves a reply, but a reply is not mandatory.

Note:

The session multiplexing service provided by the transport layer supports many simultaneous TCP or UDP sessions over a
single link, not just one TCP and one UDP session, as indicated in the figure.

Identifying the Applications


To pass data to the proper applications, the transport layer must identify the target application. TCP/IP transport protocols
use port numbers to accomplish this task. The connection is established from a source port to a destination port. Each
application process that needs to access the network is assigned a port number that is unique in that host. The destination
port number is used in the transport layer header to indicate to which target application that piece of data is associated.
The source port is used by the sending host to help keep track of existing data streams and new connections it initiates.
The source and destination port numbers are not usually the same.
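
As a small illustration of port-based multiplexing, the following sketch uses Python's standard socket module on the
loopback interface; port 5000 is an arbitrary example value, and the operating system picks the ephemeral source port.

import socket

# "Server" process registers interest in destination port 5000 on loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 5000))

# "Client" process sends from an ephemeral source port chosen by the OS.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 5000))

data, (addr, src_port) = server.recvfrom(1024)
print(f"received {data!r} from {addr}, source port {src_port}")
client.close()
server.close()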

Segmentation

TCP takes variably sized data chunks from the application layer and prepares them for transport onto the network. The
application relies on TCP to ensure that each chunk is broken up into smaller segments that will fit the maximum
transmission unit (MTU) of the underlying network layers. UDP does not provide segmentation services. UDP instead
expects the application process to perform any necessary segmentation and supply it with data chunks that do not exceed
the MTU of lower layers.

Note

The MTU of the Ethernet protocol is 1500 bytes. Larger MTUs are possible, but 1500 bytes is the normal size.

Flow Control

If a sender transmits packets faster than the receiver can receive them, the receiver drops some of the packets and
requires them to be retransmitted. TCP is responsible for detecting dropped packets and sending replacements. A high
rate of retransmissions introduces latency in the communication channel. To reduce the impact of retransmission-related
latency, flow control methods work to maximize the transfer rate and minimize the required retransmissions.

Basic TCP flow control relies on acknowledgments that are generated by the receiver. The sender sends some data while
waiting for an acknowledgment from the receiver before sending the next part. However, if the round-trip time (RTT) is
significant, the overall transmission rate may slow to an unacceptable level. To increase network efficiency, a mechanism
called windowing is combined with basic flow control. Windowing allows a receiving computer to advertise how much data
it is able to receive before transmitting an acknowledgment to the sending computer.

Windowing enables avoidance of congestion in the network.

Connection-Oriented Transport Protocol

Within the transport layer, a connection-oriented protocol establishes a session connection between two IP hosts and
then maintains the connection during the entire transmission. When the transmission is complete, the session is
terminated. TCP provides connection-oriented reliable transport for application data.

Reliability
TCP reliability has these three main objectives:

 Detection and retransmission of dropped packets

 Detection and remediation of duplicate or out-of-order data

 Avoidance of congestion in the network

Reliable vs. Best-Effort Transport

The terms "reliable" and "best effort" describe two types of connections between computers. TCP is a connection-oriented
protocol that is designed to ensure reliable transport, flow control, and guaranteed delivery of IP packets. For this reason,
it is labeled a "reliable" protocol. UDP is a connectionless protocol that relies on the application layer for sequencing and
detection of dropped packets and is considered "best effort." Each protocol has strengths that make them useful for
particular applications.

Reliable (Connection-Oriented)

Some types of applications require a guarantee that packets arrive safely and in order. Any missing packets could cause the
data stream to be corrupted. Consider the example of using your web browser to download an application. Every piece of
that application must be assembled on the receiver in the proper binary order, or it will not execute. FTP is an application
where the use of a connection-oriented protocol like TCP is indicated.

TCP uses a three-way handshake when setting up a connection. You can think of it as being similar to a phone call. The
phone rings, the called party says "Hello," and the caller says "Hello." Here are the actual steps:

1. The source of the connection sends a synchronization (SYN) segment to the destination requesting a session. The
SYN segment includes the sequence number.

2. The destination responds to the SYN with a synchronization-acknowledgment (SYN-ACK) that acknowledges the
initiator sequence number incremented by 1 and includes its own sequence number.

3. If the source accepts the SYN-ACK, it sends an acknowledgment (ACK) segment to complete the handshake.
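
The handshake itself is performed by the operating system's TCP stack, not by the application. The following minimal
sketch simply opens a TCP connection with Python's standard socket module; example.com on port 80 is an arbitrary,
publicly reachable target used only for illustration.

import socket

with socket.create_connection(("example.com", 80), timeout=5) as sock:
    # At this point the SYN, SYN-ACK, and ACK segments have already been
    # exchanged by the OS TCP stack; the application sees an open connection.
    local_ip, local_port = sock.getsockname()
    print(f"connection established from {local_ip}:{local_port}")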

Here are some common applications that use TCP:


 Web browsers

 Email

 FTP

 Network printing

 Database transactions

To support reliability, a connection is established between the IP source and destination to ensure that the application is
ready to receive data. During the initial process of connection establishment, information is exchanged about the
capabilities of the receiver, and starting parameters are negotiated. These parameters are then used for tracking data
transfer during the connection.

When the sending computer transmits data, it assigns a sequence number to each packet. The receiver then responds with
an acknowledgment number that is equal to the next expected sequence number. This exchange of sequence and
acknowledgment numbers allows the protocol to recognize when data has been lost, or duplicated, or has arrived out of
order.

Best Effort (Connectionless)

Reliability (guaranteed delivery) is not always necessary, or even desirable. For example, if one or two segments of a VoIP
stream fail to arrive, it would only create a momentary disruption in the stream. This disruption might appear as a
momentary distortion of the voice quality, but the user may not even notice. In real-time applications, such as voice
streaming, dropped packets can be tolerated as long as the overall percentage of dropped packets is low.

Here are some common applications that use UDP:

 DNS

 VoIP

 TFTP

UDP provides applications with best-effort delivery and does not need to maintain state information about previously sent
data. Also, UDP does not need to establish any connection with the receiver and is termed connectionless. There are many
situations in which best-effort delivery is more desirable than reliable delivery. A connectionless protocol is desirable for
applications that require faster communication without verification of receipt.

UDP also is better for transaction-type services, such as DNS or DHCP. In transaction-type services, there is only a simple
query and response. If the client does not receive a response, it simply sends another query, which is more efficient and
consumes fewer resources than using TCP.

TCP Characteristics

Applications use the connection-oriented services of TCP to provide data reliability between hosts. TCP includes several
important features that provide reliable data transmission.

TCP can be characterized as follows:

 TCP operates at the transport layer of the TCP/IP stack (OSI Layer 4).

 TCP provides application access to the Internet layer (OSI Layer 3, the network layer), where application data is
routed from the source IP host to the destination IP host.
 TCP is connection-oriented and requires that network devices set up a connection to exchange data. The end
systems synchronize with one another to manage packet flows and adapt to congestion in the network.

 TCP provides error checking by including a checksum in the TCP segment to verify that the TCP header information
is not corrupt.

 TCP establishes two connections between the source and destination. The pair of connections operates in full-
duplex mode, one in each direction. These connections are often called a virtual circuit because, at the transport
layer, the source and destination have no knowledge of the network.

 TCP segments are numbered and sequenced so that the destination can reorder segments and determine if data is
missing or arrives out of order.

 Upon receipt of one or more TCP segments, the receiver returns an acknowledgment to the sender to indicate that
it received the segment. Acknowledgments form the basis of reliability within the TCP session. When the source
receives an acknowledgment, it knows that the data has been successfully delivered. If the source does not receive
an acknowledgment within a predetermined period, it retransmits that data to the destination. The source may
also terminate the connection if it determines that the receiver is no longer on the connection.

 TCP provides mechanisms for flow control. Flow control assists the reliability of TCP transmission by adjusting the
effective rate of data flow between the two services in the session.

Reliable data delivery services are critical for applications such as file transfers, database services, transaction processing,
and other applications in which delivery of every packet must be guaranteed. TCP segments are sent by using IP packets.
The TCP header follows the IP header and supplies information that is specific to the TCP protocol. Flow control, reliability,
and other TCP characteristics are achieved by using fields in the TCP header. Each field has a specific function.
The TCP header is a minimum of 20 bytes; the fields in the TCP header are as follows:

 Source port: Calling port number (16 bits)

 Destination port: Called port number (16 bits)

 Sequence number and acknowledgment number: Used for reliability and congestion avoidance (32 bits each)

 Header length: Size of the TCP header (4 bits)

 Reserved: For future use (3 bits)

 Flags or control bits (9 bits):

1. Nonce Sum (NS): An experimental flag used with ECN to protect the sender against concealment of
congestion signals by the receiver

2. Congestion Window Reduced (CWR): Acknowledge that the congestion-indication echoing was received

3. Explicit Congestion Notification Echo (ECE): Indication of congestion

4. Urgent (URG): This data should be prioritized over other data

5. ACK: Used for acknowledgment

6. Push (PSH): Indicates that buffered data should be passed to the receiving application immediately rather
than waiting for more data to arrive

7. Reset (RST): Indicates that the connection should be reset

8. SYN: Synchronize sequence numbers

9. Finish (FIN): Indicates there is no more data from sender

 Window size: Window size value, used for flow control (16 bits)

 Checksum: Calculated checksum from a constructed pseudo header (containing the source address, destination
address, and protocol from the IP header, TCP segment length, and reserved bits) and the TCP segment (TCP
header and payload) for error checking (16 bits)

 Urgent pointer: If the URG flag is set, this field is an offset from the sequence number indicating the last urgent
data byte (16 bits)

 Options: The length of this field is determined by the data offset field (from 0 to 320 bits)

 Data: Upper-layer protocol (ULP) data (varies in size)
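
As an illustration of this layout, the following sketch unpacks the fixed 20-byte portion of a TCP header with Python's
standard struct module; the example segment is hand-built for demonstration and is not a captured packet.

import struct

def parse_tcp_header(segment: bytes) -> dict:
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "source_port": src_port,
        "destination_port": dst_port,
        "sequence": seq,
        "acknowledgment": ack,
        "header_length_bytes": (offset_flags >> 12) * 4,  # top 4 bits = data offset, in 32-bit words
        "flags": offset_flags & 0x01FF,                   # low 9 bits = NS..FIN
        "window_size": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
    }

# Example: a SYN segment from source port 54321 to destination port 80,
# sequence number 1000, 20-byte header, SYN flag (0x002) set.
example = struct.pack("!HHIIHHHH", 54321, 80, 1000, 0, (5 << 12) | 0x002, 65535, 0, 0)
print(parse_tcp_header(example))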

UDP Characteristics
Applications use the connectionless services of UDP to provide high-performance, low-overhead data communications
between hosts. UDP includes several features that provide for low-latency data transmission.

UDP is a simple protocol that provides basic transport layer functions:

 UDP operates at the transport layer of the TCP/IP stack (OSI Layer 4).

 UDP provides applications with access to the Internet layer (OSI Layer 3, the network layer) without the overhead
of reliability mechanisms.

 UDP is a connectionless protocol in which a one-way datagram is sent to a destination without advance
notification to the destination device.

 UDP performs only limited error checking. A UDP datagram includes a checksum value, which the receiving device
can use to test the integrity of the data.

 UDP provides service on a best-effort basis and does not guarantee data delivery, because packets can be
misdirected, duplicated, or lost on the way to their destination.

 UDP does not provide any special features that recover lost or corrupted packets. UDP relies on applications that
are using its transport services to provide recovery.

 Because of its low overhead, UDP is ideal for applications like DNS and Network Time Protocol (NTP), where there
is a simple request-and-response transaction.

The low overhead of UDP is evident when you review the UDP header length of only 64 bits (8 bytes). The UDP header
length is significantly smaller compared with the TCP minimum header length of 20 bytes.

Here are the field definitions in the UDP segment:


 Source port: Calling port number (16 bits)

 Destination port: Called port number (16 bits)

 Length: Length of UDP header and UDP data (16 bits)

 Checksum: Calculated checksum of the header and data fields (16 bits)

 Data: ULP data (varies in size)
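
A companion sketch for the much smaller UDP header, again using Python's standard struct module with hand-built
example values:

import struct

def parse_udp_header(datagram: bytes) -> dict:
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"source_port": src_port, "destination_port": dst_port,
            "length": length, "checksum": checksum}

# Example header for a DNS query: ephemeral source port 33333 to port 53,
# carrying a 29-byte payload (checksum left at 0 for simplicity).
example = struct.pack("!HHHH", 33333, 53, 8 + 29, 0)
print(parse_udp_header(example))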

Application layer protocols that use UDP include DNS, Simple Network Management Protocol (SNMP), DHCP, Routing
Information Protocol (RIP), TFTP, Network File System (NFS), online games, and voice streaming.

Host-To-Host Packet Delivery

Host-to-host packet delivery consists of an interesting series of processes. In this multipart example, you will discover what
happens behind the scenes when an IPv4 host communicates with another IPv4 host—first, when a router is used, and
second, when a switch is responsible for the host-to-host packet delivery process.

Address Resolution Protocol

When a device sends a packet to a destination, it encapsulates the packet into a frame. The packet contains IPv4
addresses, and the frame contains MAC addresses. Therefore, there must be a way to map an IPv4 address to a MAC
address. For example, if you enter the ping 10.1.1.3 command, the MAC address of 10.1.1.3 must be included in the
destination MAC address field of the frame that is sent. To determine the MAC address of the device with an IPv4 address
10.1.1.3, a process is performed by a Layer 2 protocol called ARP.

ARP provides two essential services:

 Address resolution: Mapping IPv4 addresses to MAC addresses on a network

 Caching: Locally storing MAC addresses that are learned via ARP

The term "address resolution" in ARP refers to the process of binding or mapping the IPv4 address of a remote device to its
MAC address. ARP sends a broadcast message to all devices on the local network. This message includes its own IPv4
address and the destination IPv4 address. The message is asking the device on which the destination IPv4 address resides
to respond with its MAC address. The address resolution procedure is completed when the originator receives the reply
frame, which contains the required MAC address, and updates its table containing all the current bindings.

Note

The Layer 2 broadcast MAC address is FF:FF:FF:FF:FF:FF.

Using ARP to Resolve the MAC of a Local IPv4 Address

Because ARP is a Layer 2 protocol, its scope is limited to the local LAN. If the source and destination devices are on the
same subnet, then the source can use ARP to determine the MAC address of the destination.

For example, IPv4 host 10.10.1.241/24 is on the 10.10.1.0/24 subnet. If the host that it wants to communicate with is
10.10.1.175, it knows that this IPv4 host is also on the local 10.10.1.0/24 subnet, and it can use ARP to determine its MAC
address directly.
The following output shows the Wireshark analysis of the ARP messages. In the first example, you can see an ARP request
sent as a broadcast to find out the MAC address of IPv4 host 10.10.1.175. In the second ARP message, you can see the ARP
reply including the host MAC address, which is 00:bc:22:a8:e0:a0.
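
As an illustration, the following sketch builds the raw bytes of such a broadcast ARP request with Python's standard struct
module. The sender MAC address is hypothetical, the IPv4 addresses are the example hosts from the text, and actually
transmitting the frame would require a raw socket and elevated privileges.

import struct

def build_arp_request(src_mac: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    broadcast = b"\xff" * 6                                        # ff:ff:ff:ff:ff:ff
    eth_header = broadcast + src_mac + struct.pack("!H", 0x0806)   # EtherType: ARP
    arp_body = struct.pack(
        "!HHBBH6s4s6s4s",
        1,             # hardware type: Ethernet
        0x0800,        # protocol type: IPv4
        6, 4,          # hardware and protocol address lengths
        1,             # opcode 1: request ("who has dst_ip?")
        src_mac, src_ip,
        b"\x00" * 6,   # target MAC is unknown; that is what we are asking for
        dst_ip,
    )
    return eth_header + arp_body

# Hypothetical sender MAC; the IPv4 addresses are the example hosts from the text.
frame = build_arp_request(b"\x00\x11\x22\x33\x44\x55",
                          bytes([10, 10, 1, 241]), bytes([10, 10, 1, 175]))
print(len(frame), "bytes:", frame.hex())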

Using ARP to Resolve the MAC of a Remote IPv4 Address

If the source and destination devices are not on the same subnet, then the source uses ARP to determine the MAC address
of the default gateway.

For example, when the source host 10.10.1.241 wants to communicate with the destination host 10.10.2.55, it compares
this IPv4 address against its subnet mask and discovers that the host is on a different IPv4 subnet (10.10.2.0/24). When a
host wants to send data to a device that is on another network or subnet, it encapsulates the packet in a frame addressed
to its default gateway. So, the destination MAC address in the frame needs to be the MAC address of the default gateway.
In this situation, the source must send an ARP request to find the MAC address of the default gateway. In the example,
host 10.10.1.241 sends a broadcast with an ARP request for the MAC address of 10.10.1.1.

The following output shows the Wireshark analysis of ARP messages. In the first example, you can see an ARP request sent
as a broadcast to find out the MAC address of IPv4 host 10.10.1.1. In the second ARP message, you can see the ARP reply
showing that the MAC address of the default gateway is 00:25:b5:9c:34:27.

Understanding the ARP Cache

Each IPv4 device on a network segment maintains a table in memory—the ARP table or ARP cache. The purpose of this
table is to cache recent IPv4 addresses to MAC address bindings. When a host wants to transmit data to another host on
the same subnet, it searches the ARP table to see if there is an entry. If there is an entry, the host uses it. If there is no
entry, the IPv4 host sends an ARP broadcast requesting resolution.

Note

By caching recent bindings, ARP broadcasts can be avoided for any mappings in the cache. Without the ARP cache, each
IPv4 host would have to send an ARP broadcast each time it wanted to communicate with another IPv4 host.

Each entry, or row, of the ARP table has a pair of values—an IPv4 address and a MAC address. The relationship between
the two values is a map, which simply means that you can locate an IPv4 address in the table and discover the
corresponding MAC address. The ARP table caches the mapping for the devices on the local LAN, including the default
gateway.

The device creates and maintains the ARP table dynamically, adding and changing address relationships as they are used
on the local host. The entries in an ARP table expire after a while; the default expiry time for Cisco devices is 4 hours. Other
operating systems, such as Windows and macOS, might have a different value. Windows, for example, uses a random
value of 15 to 45 seconds. This timeout ensures that the table does not contain information for systems that may be
switched off or that have been moved. When the local host wants to transmit data again, the entry in the ARP table is
regenerated through the ARP process.

If no device responds to the ARP request, then the original packet is dropped, because a frame to put the packet in cannot
be created without the destination MAC address.

On a Microsoft Windows PC, the arp -a command displays the current ARP table for all interfaces on the PC.

To limit the output of the arp command to a single interface, use the arp -a -N ip_address command.
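
A small sketch that shells out to the same command from Python and prints the result; the exact output format differs
between Windows, Linux, and macOS.

import subprocess

# Run "arp -a" and print the current ARP table as reported by the OS.
result = subprocess.run(["arp", "-a"], capture_output=True, text=True)
print(result.stdout)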


In this example, the host 192.168.3.1 needs to send arbitrary application data to the host 192.168.4.2, which is located on
another subnet. The application does not need a reliable connection, so it uses UDP. Because it is not necessary to set up a
session, the application can start sending data, using the UDP port numbers to establish the session and deliver the
segment to the right application.
UDP prepends a UDP header (UDP HDR) and passes the segment to the IPv4 layer (Layer 3) with an instruction to send the
segment to 192.168.4.2. IPv4 encapsulates the segment in a Layer 3 packet, setting the source address (SRC IP) of the
packet to 192.168.3.1, while the destination address (DST IP) is set to 192.168.4.2.

When Host A analyzes the destination address, it finds that the destination address is on a different network. The host
forwards any packet that is not destined for the local IPv4 network in a frame addressed to the default gateway. The
default gateway is the address of the local router, which must be configured on hosts (PCs, servers, and so on). IPv4 passes
the Layer 3 packet to Layer 2 with instructions to forward it to the default gateway. Host A must place the packet in its
"parking lot" (on hold) until it has the MAC address of the default gateway.
To deliver the packet, the host needs the Layer 2 information of the next-hop device. The ARP table in the host does not
have an entry and must resolve the Layer 2 address (MAC address) of the default gateway. The default gateway is the next
hop for the packet. The packet waits while the host resolves the Layer 2 information.

Because the host does not know the Layer 2 address of the default gateway, the host uses the standard ARP process to
obtain the mapping. The host sends a broadcast ARP request looking for the MAC address of its default gateway.
The host has previously been configured with 192.168.3.2 as the default gateway. The host 192.168.3.1 sends out the ARP
request, and the router receives it. The ARP request contains information about Host A. Notice that the router
immediately adds this information to its own ARP table.

The router processes the ARP request like any other host would and sends the ARP reply with its own information, directly
to the host MAC address.
The host receives an ARP reply to its ARP request and enters the information in its local ARP table.

Now, the Layer 2 frame with the application data can be sent to the default gateway. The pending frame is sent with the
local host IPv4 address and MAC address as the source. However, the destination IPv4 address is that of the remote host,
but the destination MAC address is that of the default gateway.
When the router receives the frame, it recognizes its MAC address and processes the frame. At Layer 3, the router sees
that the destination IPv4 address is not its address. A host Layer 3 device would discard the frame. However, because this
device is a router, it passes all IPv4 packets that are not for the router itself to the routing process. The routing process
determines where to send the packet.

The routing process checks for the longest prefix match of the destination IPv4 address in its routing table. In this example,
the destination network is directly connected. Therefore, the routing process can pass the packet directly to Layer 2 for the
appropriate interface.
Assuming that the router does not have the mapping to 192.168.4.2, Layer 2 uses the ARP process to obtain the mapping
for the IPv4 address and the MAC address. The router asks for the Layer 2 information in the same way as the hosts. An
ARP request for the destination MAC address is sent to the link.

The destination host receives and processes the ARP request.

The destination host receives the frame that contains the ARP request and passes the request to the ARP process. The ARP
process takes the information about the router from the ARP request and places the information in its local ARP table. The
ARP process generates the ARP reply and sends it back to the router.

The router receives the ARP reply, populates its local ARP table, and starts the packet-forwarding process.

The frame is forwarded to the destination. Note that the router changes the Layer 2 address in frames as needed, but it
will not change the Layer 3 address in packets.
Role of a Switch in Packet Delivery (Step 1 of 4)

Typically, your network will have switches between hosts and routers. In this multipart example, you will see what
happens on a switch when a host communicates with a router.

Remember that a switch does not change the frame in any way. When a switch receives the frame, it forwards it out the
proper port according to the MAC address table.

An application on Host A must send data to a remote network. Before an IP packet can be forwarded to the default
gateway, the host MAC address needs to be obtained. ARP on Host A creates an ARP request and sends it out as a
broadcast frame. Before the ARP request reaches other devices on a network, the switch receives it.

When the switch receives the frame, it needs to forward it out on the proper port. However, in this example, the source
MAC address is not in the MAC address table of the switch. The switch can learn the port mapping for the source host
from the source MAC address in the frame, so the switch adds the information to the table (0800:0222:2222 = port
FastEthernet0/1).

Role of a Switch in Packet Delivery (Step 2 of 4)

Because the destination address of the frame is a broadcast, the switch has to flood the frame out to all the ports except
the port where it came in.

Role of a Switch in Packet Delivery (Step 3 of 4)

The router replies to the ARP request and sends an ARP reply packet back to the sender as a unicast frame.

The switch learns the port mapping for the router MAC address from the source MAC address in the frame. The switch
adds it to the MAC address table (0800:0333:2222 = port FastEthernet0/3).
Role of a Switch in Packet Delivery (Step 4 of 4)

The destination address of the frame (Host A) is found in the MAC address table, so the switch can forward the frame out
on port FastEthernet0/1. If the destination address was not found in the MAC address table, the switch would need to
flood out the frame on all ports except the port it came in on.

All frames pass through the switch unchanged. The switch builds its MAC address table based on the source address of
received frames, and it sends all unicast frames directly to the destination host based on the destination MAC address and
port that are stored in the MAC address table.
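
The following toy sketch models this learn-and-forward behavior in a few lines of Python; it is a conceptual model only,
using the MAC notation from the example, and real switches implement these tables in hardware.

# Toy model: learn the source MAC per port, forward known unicast destinations
# out a single port, flood broadcasts and unknown destinations.
mac_table = {}

def switch_frame(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port                        # learn from the source address
    if dst_mac != "ffff:ffff:ffff" and dst_mac in mac_table:
        return [mac_table[dst_mac]]                     # known unicast: one exit port
    return [p for p in all_ports if p != in_port]       # broadcast or unknown: flood

ports = ["FastEthernet0/1", "FastEthernet0/2", "FastEthernet0/3"]
print(switch_frame("0800:0222:2222", "ffff:ffff:ffff", "FastEthernet0/1", ports))  # ARP request: flooded
print(switch_frame("0800:0333:2222", "0800:0222:2222", "FastEthernet0/3", ports))  # ARP reply: ['FastEthernet0/1']
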
Network Device Planes 

Network devices implement processes that can be broken down into three functional planes: the data plane, control
plane, and management plane. Under normal network operating conditions, the network traffic consists mostly of data
plane transit packets. Network devices are optimized to handle these packets efficiently. Typically, there are considerably
fewer control and management plane packets.

The primary purpose of routers and switches is to forward packets and frames through the device onward to final
destinations. The data plane, also called the forwarding plane, is responsible for the high-speed forwarding of data through
a network device. Its logic is kept simple so that it can be implemented by hardware to achieve fast packet forwarding. The
forwarding engine processes the arrived packet and then forwards it out of the device. Data plane forwarding is very fast
and is performed in hardware. To achieve the efficient forwarding, routers and switches create and utilize data structures,
usually called tables, which facilitate the forwarding process. The control plane dictates the creation of these data
structures. Examples of data plane structures are the CAM table, ternary CAM (TCAM) table, Forwarding Information Base
(FIB) table, and adjacency table.

Cisco routers and switches also offer many features to secure the data plane. Almost every network device has the ability
to utilize ACLs, which are processed in hardware, to limit allowed traffic to only well-known and desirable traffic.

Note

Data plane forwarding is implemented in specialized hardware. The actual implementation depends on the switching
platform. High-speed forwarding hardware implementations can be based on specialized integrated circuits called ASICs,
field-programmable gate arrays (FPGAs), or specialized network processors. Each of these hardware solutions is designed to
perform a particular operation in a highly efficient way. Operations performed by ASICs range from compressing and
decompressing data and computing and verifying checksums to filtering or forwarding frames based on their MAC addresses.

The control plane consists of protocols and processes that communicate between network devices to determine how data
is to be forwarded. When packets that require control plane processing arrive at the device, the data plane forwards them
to the device processor, where the control plane processes them.

In cases of Layer 3 devices, the control plane sets up the forwarding information based on the information from routing
protocols. The control plane is responsible for building the routing table or Routing Information Base (RIB). The RIB in turn
determines the content of the forwarding tables, such as the FIB and the adjacency table, used by the data plane. In Layer
2 devices, the control plane processes information from Layer 2 control protocols, such as Spanning Tree Protocol (STP)
and Cisco Discovery Protocol, and processes Layer 2 keepalives. It also processes information from incoming frames—for
example, the source MAC address to fill in the MAC address table.

When high packet rates overload the control or management plane (or both), device processor resources can be
overwhelmed, reducing the availability of these resources for tasks that are critical to the operation and maintenance of
the network. Cisco networking devices support features that facilitate control of traffic that is sent to the device processor
to prevent the processor itself from being overwhelmed and affecting system performance.
The control plane processes the traffic that is directly or indirectly destined to the device itself. Control plane packets are
handled directly by the device processor, which is why control plane traffic is called process switched.

There are generally two types of process-switched traffic. The first type of traffic is directed, or addressed, to the device
itself and must be handled directly by the device processor. Examples include routing protocol data exchange. The second
type of traffic that is handled by the CPU is data plane traffic with a destination beyond the device itself, but which
requires special processing by the device processor. One example of such traffic is IPv4 packets with a TTL value, or
IPv6 packets with a hop limit value, of less than or equal to 1. These packets require Internet Control Message Protocol
(ICMP) "time exceeded" messages to be sent, which results in CPU processing.

The management plane consists of functions that achieve the network management goals, which include interactive
configuration sessions and statistics gathering and monitoring. The management plane performs management functions
for a network and coordinates functions among all the planes (management, control, and data). In addition, the
management plane is used to manage a device through its connection to the network.

The management plane is associated with traffic related to the management of the network or the device. From the device
point of view, management traffic can be destined to the device itself or intended for other devices. The management
plane encompasses applications and protocols such as Secure Shell (SSH), SNMP, HTTP, HTTPS, NTP, TFTP, FTP, and others
that are used to manage the device and the network.

From the perspective of a network device, there are three general types of packets as related to the functional planes:

 Transit packets and frames include packets and frames that are subjected to standard, destination IP, and MAC-
based forwarding functions. In most networks and under normal operating conditions, transit packets are typically
forwarded with minimal CPU involvement or within specialized high-speed forwarding hardware.

 Receive or "for-us" packets include control plane and management plane packets that are destined for the
network device itself. Receive packets must be handled by the CPU within the device processor, because they are
ultimately destined for and handled by applications running at the process level within the device operating
system.

 Exception IP and non-IP information includes IP packets that differ from standard IP packets—for instance, IPv4
packets containing the Options field in the IPv4 header, IPv4 packets with TTL that expires, and IPv4 packets with
unreachable destinations. Examples of non-IP packets are Layer 2 keepalives, ARP frames, and Cisco Discovery
Protocol frames. All packets and frames in this set must be handled by the device processor.
In traditional networking, the control and data planes exist within one device. With the introduction of SDN, the
management and control planes are separated from the data plane into separate devices. Data plane devices, such as
switches and routers, focus on forwarding data. The management and control planes are abstracted into a centralized
solution, a specialized network controller, which implements a virtualized software orchestration to provide network
management and control functions.
Section 9: Relating Network and Applications

Introduction

There are different ways in which applications and networks intertwine—the connections to services that networks
provide to applications, how applications affect network performance (for example, bandwidth-hungry backups), and how
this latency may be felt by other applications. Interaction of applications within the network becomes especially important
when an issue arises. Is it only a temporary network condition? Is perhaps some application misbehaving? Or is network
configuration at fault? Network tools provide various means of troubleshooting that help us answer those questions.

Standard IP Network Services 

Network services provide various capabilities that make communication on the Internet possible for end hosts, such as the
user desktop or administrator management station, or even application servers. Consider the following example. You
know that devices on the Internet use IP addresses to communicate. How then does your browser know which IP address
to connect to when you open a Cisco web site? Also, consider that web certificates need accurate time to function, and
your device also requires its own IP address to communicate over the Internet. How does any device or application
discover and configure these parameters?

Browsing a website like https://www.cisco.com relies on network services such as DNS, DHCP, NAT, and NTP.

Domain Name System

In a world where many people no longer memorize phone numbers because the phone keeps track of an extensive list of
contacts, you can imagine how long ago people gave up the idea of memorizing IP addresses for all of the web sites they
visit. To that end, the Domain Name System (DNS) is the service that resolves the memorable name of a resource to its IP
addresses. Without a service like DNS, the Internet would not be as popular or as usable as it is.

DNS is available internally within an organization and externally on the public-connected Internet. It is an automated
service that matches resource names with the required numeric network address (type A and type AAAA records) and vice
versa (reverse lookup).
DNS uses a distributed database that is hosted on several servers, which are located around the world, to resolve the
names that are associated with IP addresses.

An easy way to observe DNS in action would be to perform a simple nslookup hostname in a command window. This
command tells your client to make a DNS query. If your DNS server already knows the answer, the server returns the
answer directly. Otherwise, it will consult other DNS servers on the Internet to find it. In case the specified name does not
exist, DNS will provide that information as well.

DNS can return more than a single address for a given name, so your system can connect to an alternative IP if the first
address does not work. This mechanism is often used for services requiring high availability.
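
The same lookup can be performed from code. The following sketch uses Python's standard socket.getaddrinfo call, which
may return several addresses for one name, supporting the high-availability behavior described above.

import socket

# Resolve the example name from the text; each result carries one address.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.cisco.com", 443,
                                                    proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])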

Dynamic Host Configuration Protocol

Managing a network can be very time-consuming. Network clients break or are moved, and new clients are purchased that
need network connectivity. These tasks are all part of the network administrator's job. Depending on the number of IP
hosts, manual configuration of IP version 4 (IPv4) addresses for every device on the network is virtually impossible.

Using Dynamic Host Configuration Protocol (DHCP), a host can obtain an IP address quickly and dynamically from a defined
range of IP addresses on the DHCP server. As hosts come online, those hosts contact the DHCP server and request address
information. The DHCP server chooses an address from a pool that is assigned to the scope and allocates it to that host.
The address is only leased to the host, so the host periodically contacts the DHCP server to extend the lease. This lease
mechanism ensures that hosts that are moved or are switched off for extended periods of time do not keep addresses that
they do not use. The addresses are returned to the address pool by the DHCP server, to be reallocated as necessary.
Most endpoint devices on networks today are DHCP clients, including Cisco IP phones, desktop PCs, laptops, printers, and
all the mobile devices. Just about any device that you can configure to participate in a TCP/IP network has the option of
using DHCP to obtain its IPv4 configuration.

Note

In IP version 6 (IPv6), the newer version of IP, hosts can also be configured via DHCPv6. While the protocol details differ,
the concept remains the same.

The DHCP server is used to assign IP addresses and IP configuration parameters. Examples of IP configuration parameters
that are automatically set by DHCP would be the subnet mask, default router, and DNS servers. This protocol is also used
to provide other configuration information that is needed, including the length of time the address has been allocated to
the host.

DHCP is built on a client/server model. The protocol sends configuration parameters to dynamically configured hosts that
request them. The term "client" refers to a host that is requesting initialization parameters from a DHCP server.

The DHCP client and the DHCP server exchange the following packets:

1. DHCP Discover: The DHCP client boots up and sends this message on its local physical subnet to the subnet
broadcast (destination IPv4 address of 255.255.255.255 and MAC address of ff:ff:ff:ff:ff:ff), with a source IPv4
address of 0.0.0.0 and its MAC address.

2. DHCP Offer: The DHCP server responds and fills the yiaddr (your IPv4 address) field of the message with the
requested IPv4 address. The DHCP server sends the DHCP offer to the broadcast address, but includes the client
hardware address in the chaddr (client hardware address) field of the offer, so the client knows that it is the
intended destination.

3. DHCP Request: The DHCP client may receive multiple DHCP offer messages, but chooses one and accepts only that
DHCP server offer, implicitly declining all other DHCP offer messages. The client identifies the selected server by
populating the Server Identifier option field with the DHCP server IPv4 address. The DHCP request also is a
broadcast, so all DHCP servers that sent a DHCP offer will receive it, and each will know whether it was accepted or
declined. Even though the client has been offered an IPv4 address, it will send the DHCP request message with a
source IPv4 address of 0.0.0.0.

4. DHCP ACK: The DHCP server acknowledges the request and completes the initialization process. The DHCP
acknowledge (ACK) message has a source IPv4 address of the DHCP server, and the destination address is once
again a broadcast and contains all the parameters that the client requested in the DHCP request message. When
the client receives the DHCP ACK, it enters into the Bound state and is now free to use the IPv4 address to
communicate on the network.
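
The following toy, offline sketch summarizes the four messages above as plain Python data; no packets are sent, and
"<server IP>" is a placeholder for the DHCP server's own address.

from dataclasses import dataclass

@dataclass
class DhcpMessage:
    kind: str
    src_ip: str
    dst_ip: str
    note: str

# Source and destination addresses follow the four steps described above.
exchange = [
    DhcpMessage("DISCOVER", "0.0.0.0", "255.255.255.255",
                "client broadcasts, identified only by its MAC address"),
    DhcpMessage("OFFER", "<server IP>", "255.255.255.255",
                "server proposes an address in the yiaddr field"),
    DhcpMessage("REQUEST", "0.0.0.0", "255.255.255.255",
                "client accepts one offer via the Server Identifier option"),
    DhcpMessage("ACK", "<server IP>", "255.255.255.255",
                "server confirms; client enters the Bound state"),
]

for msg in exchange:
    print(f"{msg.kind:9} {msg.src_ip:15} -> {msg.dst_ip:15}  {msg.note}")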

Network Time Protocol

Time synchronization is crucial in secure management and reporting. Reviewing log files on multiple devices is common in
a security event response process. If the clocks on the reporting devices are not consistent with each other, the analysis is
much more difficult. In many jurisdictions, log files without valid time stamps are rejected as evidence in criminal
prosecution. Also, synchronized clocks in log files often are requirements of security compliance standards. Accurate time
status is critical for other aspects of security as well, such as the validity of digital certificates used in HTTPS.

Networks use Network Time Protocol (NTP) to synchronize the clocks of various devices across a network. NTP uses the
concept of a stratum to describe how many NTP hops away a machine is from an authoritative time source, a stratum 0
source. A stratum 1 time server has a radio or atomic clock that is directly attached, a stratum 2 time server receives its
time from a stratum 1 time server, and so on. A device that is running NTP automatically chooses as its time source the
device with the lowest stratum number that it is configured to communicate with through NTP. This strategy effectively
builds a self-organizing tree of NTP speakers.
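
The following sketch queries a public NTP server and prints its stratum and the offset from the local clock. It assumes the
third-party ntplib package is installed (for example, via pip install ntplib); pool.ntp.org is a public example server.

import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)
print("server stratum:", response.stratum)
print("offset from local clock (seconds):", round(response.offset, 6))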

To illustrate the importance of accurate time for logging, here is a display of two log messages on two different devices for
the same event, an interface being set to Administratively Down using the shutdown command.

Log messages on central router:

*Apr 8 19:10:40.086: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0/0, changed state to down

*Apr 8 19:10:40.086: %OSPF-5-ADJCHG: Process 1, Nbr 200.1.1.1 on Ethernet0/0 from FULL to DOWN, Neighbor Down:
Interface down or detached

Log messages on branch router:

Apr 8 13:35:01.880: %OSPF-5-ADJCHG: Process 1, Nbr 200.1.1.2 on Ethernet0/2 from FULL to DOWN, Neighbor Down:
Interface down or detached

Apr 8 13:35:04.885: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0/2, changed state to down

Both routers logged the event within seconds of the shutdown command being issued on one of them. However, the clocks
were not synchronized between the routers, so the time stamps make the two sets of messages look like two separate
events, which makes it much harder to troubleshoot the problem.

Network Address Translation 

Network Address Translation (NAT) is a mechanism that is used for connecting multiple devices on internal, private
networks to a public network such as the Internet, using a limited number of public IPv4 addresses. It was designed for
conserving IPv4 address space, which is limited by its 32-bit addressing to roughly 4.3 billion possible addresses. In the 30-plus years that IPv4 addresses have been assigned, that limit has effectively been reached.
The IPv4 address space is not large enough to uniquely identify all network-capable devices that need IP-based network
connectivity. This limitation led to the development of private addresses. Private addresses, described in RFC 1918, are not
routed by Internet routers and should be used only within an enterprise. Devices in the enterprise network must have a
mechanism in place to "procure" a public address when they need Internet access and to translate private addresses to
public addresses. Internet routers route public addresses. The mechanism that procures a public address for a device with
a private address that needs access to the Internet is NAT. NAT performs translations. Most commonly, the subject of
translation is the IP address—mostly an IPv4 address—and it is translated from a private address to a public address.

Note

Although the IPv6 suite also includes NAT mechanisms (mostly for translation between IPv4 and IPv6), this topic will focus
on its use in the context of IPv4. When translation happens only between two IPv4 addresses, NAT is also called NAT44.

To illustrate how NAT performs its tasks, presume that an enterprise network uses a private IPv4 addressing scheme. The
translation usually happens when a device in the enterprise network initiates communication with a device in the Internet.
Just before the packets enter the Internet realm, a device at the border between the enterprise network and the Internet
translates or swaps the private address with a public address. The packets reach the destination device and, eventually, the same border device receives the responses. It is important to note that the responses are destined for the public address, which is now written in the destination IPv4 address field of the header. The border device is the
only device that knows how to translate the public address back to the appropriate private address. The translation now
happens in reverse—public to private direction. The public address is translated back to the private address before the
responses are forwarded to the initiator of the communication. The most important point is that address translation, or
address swapping, happens for traffic traveling in both directions, outbound and inbound.

In an enterprise environment, NAT usually is implemented on border devices such as firewalls or routers. This
implementation allows devices within an enterprise network to have private addresses to communicate among
themselves and to translate addresses only when they need to send traffic to the Internet or outside networks in general.
When accessing the Internet, the border device translates private addresses to public addresses and keeps a mapping
between them in order to match the returning traffic. In a home environment, this device might be an access point (AP)
that has routing capability, or the DSL or cable router.

NAT also can be used when there is an addressing overlap between two private networks. An example of this
implementation would be when two companies merge and they both were using the same private address range. In this
case, NAT can be used to translate the private addresses of one intranet into another private range, avoiding an addressing
conflict and enabling devices from one intranet to connect to devices on the other intranet. Therefore, NAT is not
implemented only for translations between private and public IPv4 address spaces; it also can be used for generic
translations between any two different IPv4 address spaces.

NAT can also be configured to operate on TCP and UDP port numbers. Port numbers can be used to identify network services, because each application protocol uses a different port number. Port forwarding uses this identifying property of port numbers.

Port forwarding allows users on the Internet to access internal servers by using the public address of the border device and
a selected outside port number. To the outside, the border device appears to be providing the service. Outside devices are
not aware of the mapping that exists between the border device and the inside server. The static nature of the mapping
ensures that any traffic that is received at the specified port will be translated and then forwarded to the internal server.
The internal servers are typically configured with RFC 1918 private IPv4 addresses.

The administrators can choose any value for the global port number. For instance, instead of specifying port 443 for web
service, the administrator can choose to specify 8443. In this way, many services running on different actual servers may
be exposed through the same public IP address.

The figure shows an example of port forwarding on router R2. IPv4 address 192.168.10.254 is the inside local IPv4 address
of the web server listening on port 443. Users will access this internal web server using the global IPv4 address
209.165.200.226, a globally unique public IPv4 address. In this case, it is the address of the outside interface of R2. The
global port is configured as 8443. This port will be the destination port used along with the global IPv4 address of
209.165.200.226 to access the internal web server.
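
As a quick sanity check of such a mapping, you can verify from the outside that the global address and port accept TCP connections. The following Python sketch reuses the example values from the figure (209.165.200.226 and port 8443); it only confirms that the TCP handshake completes, not that the web service itself is healthy:

import socket

# Attempt a TCP connection to the border router's public address on the
# global port; R2 translates this to 192.168.10.254:443 on the inside.
try:
    with socket.create_connection(("209.165.200.226", 8443), timeout=5):
        print("TCP connection succeeded; port forwarding appears to work.")
except OSError as err:
    print("Connection failed:", err)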

NAT may be configured in these ways:

 Static NAT: A translation type or entry on the translating device that is statically configured and will always
translate between the same pre-NAT and post-NAT addresses. This is commonly used for servers providing a
service on a public network—for example, a demilitarized zone (DMZ) web server that is accessible from the
Internet.

 Dynamic NAT: A translation type where the client address is the pre-NAT address and is dynamically given an
address from a preconfigured pool of addresses.

 PAT: Port Address Translation (PAT) will translate the source IP address of the clients to a single IP address on the
perimeter device. Often, this address represents an outside-facing interface. Because the client addresses are
translated to the same address, the port is also changed to uniquely identify the client session. This is the most
common type of NAT, sometimes also called Network Address and Port Translation (NAPT).

Benefits and Drawbacks of NAT

Here are the benefits of NAT:

 NAT conserves public addresses by enabling multiple privately addressed hosts to communicate using a limited,
small number of public addresses instead of acquiring a public address for each host that needs to connect to the
Internet. The conserving effect of NAT is most pronounced with PAT, where internal hosts can share a single
registered IPv4 address for all external communication.

 NAT increases the flexibility of connections to the public network.

 NAT provides consistency for internal network addressing schemes. When a public IPv4 address scheme changes,
NAT eliminates the need to readdress all hosts that require external access, saving time and money. The changes
are applied to the NAT configuration only. Therefore, an organization could change ISPs and not need to change
any of its inside clients.

 NAT can be configured to translate all private addresses to only one public address or to a smaller pool of public
addresses. When NAT is configured, the entire internal network hides behind one address or a few addresses. To
the outside, it seems that there is only one or a limited number of devices in the inside network. This hiding of the
internal network helps provide additional security as a side benefit of NAT.
Here are the disadvantages of NAT:

 End-to-end functionality is lost. Many applications depend on the end-to-end property of IPv4-based
communication. Some applications expect the IPv4 header parameters to be determined only at endpoints of
communication. NAT interferes by changing the IPv4 address and sometimes transport protocol port (PAT)
numbers at network intermediary points. Changed header information can block applications; for instance, call
signaling application protocols include information about the device IPv4 address in their headers. Although the
application protocol information is encapsulated as data within the IPv4 packet as it is passed down the TCP/IP stack, the application protocol header still includes the device IPv4 address as part of its own information. The
transmitted packet will include the sender IPv4 address twice—in the IPv4 header and application header. When
NAT makes changes to the source IPv4 address (along the path of the packet), it will change only the address in the
IPv4 header. NAT will not change IPv4 address information that is included in the application header. At the
recipient, the application protocol will rely only on the information in the application header. Other headers will be
removed in the de-encapsulation process. Therefore, the recipient application protocol will not be aware of the
change that NAT has made, and it will perform its functions and create response packets using the information in
the application header. This process results in creating responses for unroutable IPv4 addresses and ultimately
prevents calls from being established. Besides signaling protocols, some security applications, such as digital
signatures, fail because the source IPv4 address changes. Sometimes, you can avoid this problem by implementing
static NAT mappings.

 End-to-end IPv4 traceability is also lost. It becomes much more difficult to trace packets that undergo numerous
packet address changes over multiple NAT hops, so troubleshooting is challenging. On the other hand, for
malicious users, it becomes more difficult to trace or obtain the original source or destination addresses.

 Using NAT also creates difficulties for tunneling protocols, such as IP Security (IPsec), because NAT modifies the
values in the headers. Integrity checks declare packets invalid if anything changes in them along the path. NAT
changes interfere with the integrity-checking mechanisms that IPsec and other tunneling protocols perform.

 Services that require the initiation of TCP connections from an outside network (or stateless protocols, such as
those using UDP) can be disrupted. Unless the NAT router makes a specific effort to support such protocols,
inbound packets cannot reach their destination. Some protocols can accommodate one instance of NAT between
participating hosts (passive mode FTP, for example) but fail when NAT is performed at multiple points between
communicating systems—for instance, in both the source and destination network.

 NAT can degrade network performance. It increases forwarding delays because the translation of each IPv4
address within the packet headers takes time. For each packet, the router must determine whether it should
undergo translation. If translation is performed, the router alters the IPv4 header and possibly the TCP or UDP
header. All checksums must be recalculated for packets in order for packets to pass the integrity checks at the
destination. This processing is most time-consuming for the first packet of each defined mapping. The
performance degradation of NAT is particularly disadvantageous for real-time applications such as VoIP.

Common Protocols 

Multiple communications often occur at once on a network host. For instance, you may be browsing the web and listening
to music from an online streaming service at the same time. The transport layer in the network stack tracks these
communications and keeps them separate. This tracking is provided by both UDP and TCP, allowing multiple traffic streams
to share the same link. However, to pass data to the proper applications, the transport layer must somehow identify the
target application.

UDP and TCP use internal logical ports to support multiple conversations between network devices. TCP and UDP
differentiate their segments and datagrams (both Layer 4 protocol data units) for each application by identifying each conversation
with unique port values. The combination of an IP address and a port is strictly known as an endpoint and is sometimes
called a socket. A TCP connection is established from a source port to a destination port. Each application process that
needs to access the network is assigned a port number that is unique in that host. The destination port number is used in
the transport layer header to indicate which target application that piece of data is for.

Many applications use standard, well-known destination (server) ports. These port numbers are registered with the
Internet Assigned Numbers Authority (IANA). See the Service Name and Transport Protocol Port Number Registry for a
complete list at http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml.

Common applications using well-known, registered ports include the following:

 FTP (port 20 and 21, TCP): FTP is a reliable, connection-oriented service that uses TCP to transfer files between
systems that support FTP. FTP supports bidirectional binary and ASCII file transfers. Besides using port 21 for
exchange of control, FTP also uses one additional port, 20, for data transmission.

 SSH (port 22, TCP): Secure Shell (SSH) provides the capability to remotely access other computers, servers, and
networking devices over an encrypted connection.

 Telnet (port 23, TCP): Telnet is a predecessor to SSH. It sends messages in unencrypted cleartext. As a security
best practice, most organizations now use SSH for remote communications.

 SMTP (port 25, TCP): Simple Mail Transfer Protocol (SMTP) is used for exchanging emails on the Internet.

 HTTP (port 80, TCP): HTTP, which defines how messages are formatted and transmitted by web servers,
communicates over unencrypted TCP sessions.

 HTTPS (port 443, TCP): HTTPS provides HTTP over an encrypted connection. Since 2018, it is more often used than
HTTP.

 DNS (port 53, TCP and UDP): DNS uses TCP for zone transfer between DNS servers and UDP for resolving Internet
names to IP addresses.

 TFTP (port 69, UDP): TFTP is a connectionless service. Routers and switches use TFTP to transfer configuration files
and Cisco IOS images and other files between systems that support TFTP.

 SNMP (port 161 and 162, UDP): Simple Network Management Protocol (SNMP) facilitates the exchange of
management information between network devices.

 NETCONF (port 830, TCP): Network Configuration Protocol (NETCONF) is a successor to SNMP, using a reliable TCP
transport instead of UDP.
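
As a small illustration, Python can resolve well-known service names to port numbers (and back) using the local services database, which is handy when scripts deal with the ports listed above. This is a sketch; the exact names available depend on the operating system's services file:

import socket

# Map service names to their well-known TCP ports.
for service in ("ftp", "ssh", "telnet", "smtp", "http", "https"):
    print(service, socket.getservbyname(service, "tcp"))

# Reverse lookup: which service is registered for UDP port 53?
print(socket.getservbyport(53, "udp"))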

Secure Shell

The SSH protocol provides a secure virtual terminal for a user to log in to a remote host and execute commands. SSH uses
strong encryption for securing the exchange of information over an insecure network.
The main features of SSH are:

 Industry-standard management protocol

 Secure, bidirectional authentication with passwords or keys, or both

 Supports file transfer with the scp or sftp commands

 Support included in many operating systems

 Publicly and commercially available clients

SSH can be simply used from the command line (ssh hostname-or-ip) or through GUI clients, such as PuTTY.
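
SSH is also easy to drive from code. A minimal sketch using the third-party paramiko library might look like the following; the address, credentials, and the show version command are placeholders for whatever device or server you manage:

import paramiko  # third-party package: pip install paramiko

client = paramiko.SSHClient()
# Accept unknown host keys for this sketch; verify keys properly in production.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.10.10.1", username="admin", password="admin")

# Run a command over the encrypted channel and print its output.
stdin, stdout, stderr = client.exec_command("show version")
print(stdout.read().decode())
client.close()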

Telnet

The Telnet protocol was designed to enable remote shell or console sessions across the network. While it was popular
years ago, its major disadvantage is unencrypted communication, which allowed hackers to steal credentials. As a security
best practice, SSH is now the recommended protocol.

Although SSH has made Telnet obsolete as a virtual terminal, the telnet command is still useful today for testing connectivity.

When you see a Connected (or Open) response, you can assume that the remote device is reachable and that it listens on a
given TCP port (80 in the example).

Taking a closer look at the output, you will see it is an HTTP protocol response. Telnet uses the TCP session almost as-is
(without additional encoding), and this allows you to read raw HTTP messages. While this trick will only work with some
services, you can test for an open port with any TCP-based protocol.
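
The same trick can be reproduced from a script. The sketch below opens a plain TCP connection to port 80, sends a hand-written HTTP request, and prints the raw response, which is essentially what the telnet test does; www.example.com is a placeholder host:

import socket

# Open a raw TCP connection to port 80 and speak HTTP by hand.
with socket.create_connection(("www.example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
    print(conn.recv(4096).decode(errors="replace"))

If the connection is refused or times out, the port is closed or filtered; if you receive any response at all, the port is open.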

Simple Network Management Protocol

In a complex modern-day network of multiple device types from multiple vendors, it can be a challenge to manage all
devices on your network and make sure that they are not only up and running, but also performing optimally. SNMP was
designed to meet this growing need for a standard of managing network devices. However, the protocol is quite old now
and never really took off as a configuration protocol, except for a couple of limited use cases. NETCONF is a newer and
better alternative.

However, SNMP works reasonably well for device monitoring and is still being widely used for this purpose.

Consider the following example. In one of the branch offices, some users are complaining about slow Internet connection.
You can use SNMP to monitor the behavior of the router that is connected to the Internet. CPU, memory, and link
overutilization usually are the reasons for poor router performance. Graphing router performance data in a network
management system (NMS), such as Cacti, shows that the router has high CPU usage. You conclude that the router has
increased CPU usage due to high traffic on the interface that is connected to the Internet. So, the router cannot process all
the traffic; therefore, the users are experiencing slow Internet connectivity.
SNMP consists of these three components:

 SNMP manager: Polls agents on the network and displays information.

 SNMP agent: Runs on managed devices, collects device information, and translates it into a MIB-compatible
format. It also generates traps that are unsolicited messages.

 MIB: Management Information Base, a database of management objects (information variables).

Routers and other network devices locally keep statistics about their processes and interfaces. A device that supports SNMP runs a special process that is called an agent. This agent can be queried using SNMP. SNMP typically is used to
gather environment and performance data such as device CPU usage, memory usage, interface traffic, and so on. By
periodically querying or "polling" the SNMP agent on a device, an NMS can gather or collect statistics over time. The NMS
polls devices periodically to obtain the values defined in the MIB objects that it is set up to collect. The NMS then offers a
look into historical data and anticipated trends. Based on SNMP values, the NMS triggers alarms to notify network
operators.

The SNMP manager polls the SNMP agents and queries the MIB via SNMP agents on UDP port 161. The SNMP agent also
can send triggered messages, called traps, to the SNMP manager on UDP port 162. For example, if the interface fails, the
SNMP agent can immediately send a trap message to the SNMP manager notifying the manager about the interface status.
This feature is extremely useful because you can get information almost immediately when something happens.
Remember, the SNMP manager polls SNMP agents only periodically, so without traps you would receive the information only on the next agent poll. Depending on the interval, this could result in minutes of delay.

The MIB organizes configuration and status data into a tree structure. Objects in the MIB are referenced by their object ID
(OID), which specifies the path from the tree root to the object. For example, system identification data is located under
1.3.6.1.2.1.1. Some examples of system data include the system name (OID 1.3.6.1.2.1.1.5), system location (OID
1.3.6.1.2.1.1.6), and system uptime (OID 1.3.6.1.2.1.1.3).

The snmpwalk command is available for Linux systems. It pulls data (such as interface names) from the MIB tree of the
device.
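
You can issue the same kind of query from Python. The following sketch assumes the third-party pysnmp package (the classic synchronous hlapi API), SNMPv2c with the community string public, and a placeholder device address; it reads the system name, OID 1.3.6.1.2.1.1.5 (the trailing .0 selects the scalar instance):

from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Send an SNMP GET for sysName.0 and print the result.
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),          # mpModel=1 -> SNMPv2c
           UdpTransportTarget(("10.10.10.1", 161)),     # agent address and port
           ContextData(),
           ObjectType(ObjectIdentity("1.3.6.1.2.1.1.5.0")))
)

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))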

Simple Mail Transfer Protocol

SMTP stands for Simple Mail Transfer Protocol. The "Simple" in its name reflects the protocol design: a straightforward client/server protocol with a small set of simple message types.

SMTP is used for sending email in the following use cases:

 Email servers on the Internet exchange email using SMTP.

 Users use SMTP together with Post Office Protocol (POP) and Internet Message Access Protocol (IMAP) configured
within their email client.

 Applications use SMTP when handing off email alerts and notifications to the configured email server, as the example below shows.
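
For the last use case, Python's standard smtplib module is enough to hand a notification to an SMTP server. Everything in this sketch is a placeholder: the relay mail.example.com, the addresses, and the alert text:

import smtplib
from email.message import EmailMessage

# Build a simple alert message.
msg = EmailMessage()
msg["From"] = "monitor@example.com"
msg["To"] = "noc@example.com"
msg["Subject"] = "Interface GigabitEthernet0/0 is down"
msg.set_content("Alert generated by the monitoring script.")

# Hand the message to the configured SMTP relay on port 25.
with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)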

Application Connectivity Issues 

Applications are becoming more complex, with many moving parts. Any part that breaks affects the application. Because
most modern software relies heavily on connectivity to other systems and services, network-related issues typically have a
significant impact on application performance. To help troubleshoot these situations, here are some of the problems a
network can cause.

When facing network connectivity issues, consider how an application communicates with a remote server:

 The application performs a DNS lookup on the server name.

 The application sends an IP packet to the server.

 The server receives and processes the packet.

 The server sends the response packet back.

Issues at any of these steps will result in poor application performance. DNS malfunction, default gateway malfunction,
link utilization, NAT problems, and VPN blocking are all possible causes of reduced or broken connectivity.
DNS Malfunction

Name resolution issues often manifest as lost host connectivity and authentication failures.

Common problems that contribute to DNS malfunction include:

 No DNS server was defined, or the incorrect DNS server was defined

 Missing or incorrect DNS entry

 Wrong host name used (server01.example.net instead of server01.example.com)

 Invalid DNS server configuration

An easy way to observe DNS in action and verify it is functioning correctly would be to perform a
simple nslookup hostname in a command window. You can also use the ping command; it displays the resolved IP address in its output even if the other host does not reply to ping requests.
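
The same check can be scripted. This sketch asks the system resolver for the addresses of a placeholder name and prints them, or reports the resolver error if the lookup fails:

import socket

# Resolve a host name the same way nslookup or ping would.
try:
    results = socket.getaddrinfo("www.example.com", None, type=socket.SOCK_STREAM)
    for family, _type, _proto, _canonname, sockaddr in results:
        print(sockaddr[0])
except socket.gaierror as err:
    print("DNS resolution failed:", err)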

Many of the issues you will encounter with the DNS can easily be fixed by using the correct server or target hostname. But
some, like the "Server failed" error in the output, are the result of misconfiguration or a failure of the DNS system itself
and will require the help of DNS administrators.

Default Gateway Malfunction

The default gateway provides the routing function that gets packets off the current network and on their way to the destination network. It is generally known as a Layer 3 device and does not necessarily need to be a physical router; it could also be a Layer 3 or multilayer switch, a firewall, or even a server with multiple network interface cards (NICs) installed. These multiple NICs are the equivalent of multiple interfaces on a router or firewall.

Note

The name "Layer 3 device" originates from the third (network) layer of the Open Systems Interconnection (OSI) model.

A source host is able to communicate directly (without a router) with a destination host only if the two hosts are on the
same subnet. If the two hosts are on different subnets, the sending host must send the data to its default gateway, which
will forward the data to the destination. Once again, the default gateway is an address on a router (or Layer 3 switch)
connected to the same subnet that the source host is on.

Therefore, before a host can send a packet to its destination, it must first determine if the destination address is on its
local subnet. It uses the subnet mask in this determination. The subnet mask describes which portion of an IPv4 address
refers to the network or subnet and which part refers to the host.

If the default gateway configuration is incorrect or the default gateway is experiencing problems, then the host or hosts
using that gateway are isolated to that network only.
Possible points of failure to investigate would be:

 Host or default gateway with the wrong subnet

 Host or default gateway with the wrong subnet mask

 Interface on the default gateway is not up

 Default gateway is missing a route to the destination and does not know where to forward traffic

 Missing default route on the host

Sometimes, even though you configure addressing correctly, there is no connectivity due to a faulty link between devices.

The three main categories of issues are:

 Hardware failure

 Software failure (bug)

 Configuration error (such as disabled interface)

To verify the interface status, use the show interfaces (or equivalent) command.

Generally speaking, it is hard to troubleshoot faulty hardware or bugs in the networking software. So, the best approach is
often to simply keep replacing each component with a known good one until you find the component causing trouble.

As mentioned earlier, following the OSI model is always a solid approach to troubleshooting. Here, you see options for the physical layer of the OSI model: options for investigating failures and for verifying whether, and which, problem really exists.

Link Utilization

Link oversubscription is another possible cause of problems that manifest in a way similar to failing hardware. In both
cases, packets are lost, affecting application performance.

Full-duplex links today support specified bandwidth in both directions simultaneously. For example, if you are using a 1-
Gbps link, you can receive about a gigabit of data every second while also sending out that amount of data at the same
time, at least in theory. An unexpected effect that usually happens with link overutilization in one direction is that actual
data rates drop in the other direction as well.
The reason lies in the way that network protocols function. The majority of high-bandwidth applications use TCP as
transport. TCP requires periodic acknowledgments that the other side received data, which is how it can guarantee reliable
delivery of packets. Now, if these return acknowledgment packets cannot get back to the sender (because the link is full in that direction), the sender will not know that its data was successfully received and will retransmit it. Consequently, bandwidth is wasted, and you will see lower data rates in the noncongested direction as well.

Dropped packets due to link oversubscription will cause the following:

 Decreased bandwidth

 Increased latency

 Timeouts

Modern operating systems provide network utilization statistics. For example, on Cisco devices, you can check the current
link utilization with the show interface command. From the output, focus on the bit rate, which is displayed in average bits
per second. The interval shown defaults to the last 5 minutes, which can be changed with the load-interval command.
Keep in mind this is an average bit rate, so there may still be short peaks of fully utilized links, causing packet drops.

NAT Issues

As useful and ubiquitous as NAT may be, it has disadvantages. Changes to address and port values have been known to
cause problems with applications and network devices.

Hosts behind NAT use private addresses, so they cannot be reached directly and require additional configuration of the
NAT device to be able to accept new connections. While workarounds exist, they are complex to implement. One such example is NAT Traversal (NAT-T). NAT-T detects whether NAT is being performed upstream from the tunnel endpoint and, if so, encapsulates the tunneled traffic in UDP datagrams to re-establish the end-to-end connectivity that IPsec requires. IPsec protects whole packets, including IP addresses, and would otherwise break under NAT.

VPN Blocking
It is not too hard for unwary Internet users today to fall prey to various computer viruses and malware. This can incur a
significant loss in productivity and assets. Therefore, organizations deploy firewalls with deep packet inspection as a best
practice.

As more traffic on the Internet is encrypted, so is malware and other unwanted traffic. This makes it harder for network
security appliances, such as firewalls, to separate normal and malicious traffic. One option is to use additional end-client
protection (antivirus) software, for example, but there are other options as well.

Some organizations opt to inspect all traffic in their network—for example, to prevent users from downloading malware.

This may interfere with encrypted traffic, like HTTPS or VPN connections:

 Direct connections are blocked; use of a proxy server is required.

 Traffic is routed through a next-generation firewall device, impersonating the destination server and enabling it to
decrypt traffic.

In the last case, you may have to install an additional root certificate authority or configure certificate pinning on clients.

Certificate pinning instructs the system to trust a given certificate for a specified host, regardless of the certificate signing
hierarchy. It is, simply put, a way to manually trust a host certificate.

These measures are required because, to applications, this setup looks exactly like a man-in-the-middle attack. In turn,
applications will refuse connections because of wrong server identity. If you encounter this situation, you will typically see
error messages such as server certificate mismatch.

Tools for Troubleshooting Connectivity Issues 

Because network connectivity can be disrupted in various ways, you will have to use different tools to locate and
troubleshoot the problem.

Facing a connectivity issue, you should create a troubleshooting plan:

1. Verify the DNS.

2. Verify first hop.

3. Verify path connectivity.

4. Check firewalls.

5. Verify traffic reaching the host.

This plan is just an example. You can also create your own. The purpose of the plan is to help you keep track of what you
have already checked and list the remaining possible causes to examine.

A good plan can significantly cut down the time you need to locate the issue. Starting by verifying the name resolution is a
good choice, because you want to be sure that you are connecting to the right host.

Verify DNS

You can check DNS name resolution using these two commands:

 nslookup: Displays the resolved IP address (if any), allowing you to make sure that you are connecting to the right
address. If there are DNS failures, this tool also allows you to examine the internals of the DNS system.
 ping: Also displays the resolved IP address. In addition, a successful ping to an IPv4 address means that the
endpoints have basic IPv4 connectivity between them.

Many networking engineers have come to rely on the ping tool for troubleshooting and with good reason. Originally
standing for Packet Internet Groper, the ping command is available on all major devices and operating systems. It uses a
series of Internet Control Message Protocol (ICMP) echo request and ICMP echo response messages to determine if a
remote host is active or inactive, the round-trip time (RTT) in communicating with the host, and (if any) packet loss.
The ping command first sends an echo request packet to an address, then waits for a reply. The ping is successful only if
the echo request gets to the destination, and the destination is able to send an echo reply to the source within a
predetermined time called a timeout. The default value of this timeout is 2 seconds on Cisco devices.

As you can see in the command output, the ping also measures RTT. You can use it as an approximate measurement of
delay. Furthermore, looking at how the times vary will give you a rough estimate of jitter, which is the variance of time
delay in packets on the network.

Therefore, a ping will allow you to do multiple checks with a single command. Regarding connectivity, if the ping is
successful, you know that both first hop and the full path to destination are operational. On the down side, if
the ping command fails, that may not really tell you much. Some administrators choose to disable ping functionality on
their servers, so you cannot distinguish this case from broken connectivity.

If the ping fails, you will have to verify connectivity in a different way. It makes sense to verify first hop as the next step.

Verify First Hop

In networking terminology, first hop signifies the first router or Layer 3 switch that a host uses to forward packets. For end
hosts, it represents a default gateway. If a host lacks connectivity to this router, it will only be able to communicate with
other hosts on the local subnet and nothing else. This is the reason to check the first hop right after DNS; a failing first hop is perhaps one of the most common network problems you will encounter.

For first-hop verification, use these commands in the following order:

 show ip route to display the default gateway configuration

 ping toward the default gateway to verify reachability and populate the ARP cache

 show ip arp to display the mapping of IP addresses to MAC addresses to verify connected devices

 show ip interface brief to display the IPv4 network configuration of the interfaces

You can verify first-hop reachability from either side of the connection—the target host or the first-hop router. Some
engineers prefer to do the verification from the router, because hosts may run different operating systems and you have
to be familiar with the commands specific to that operating system. The table lists the alternatives as used in different
operating systems.
To discover MAC address information, the Address Resolution Protocol (ARP) is used on Ethernet-connected networks.
Using a series of ARP request and ARP response messages, the initially unknown MAC address of the known destination IP
address is discovered.

If ARP is operating correctly, the MAC and IP address of the first-hop router (default gateway) will show up in the ARP
cache. Then, you know that you have a working Ethernet connection to the default gateway.

Verify Path Connectivity

Once you have ensured first-hop connectivity and the problem persists, the next step is to verify the path to the
destination host (or a proxy server if it is being used).

Traceroute is used to test the path that packets take through the network. It sends out either an ICMP echo request
(Microsoft Windows) or UDP messages (most implementations, such as Cisco IOS routers) with gradually increasing IPv4
Time to Live (TTL) values to probe the path by which a packet traverses the network. The first packet with the TTL set to 1
will be discarded by the first-hop router, which will send an ICMP "time exceeded" message that is sourced from its IPv4
address. The device that initiated the traceroute therefore knows the address of the first-hop router. When the TTL is set
to 2, the packets will arrive at the second router, which will respond with an ICMP time exceeded message from its IPv4
address. This process continues until the message reaches its final destination; the destination device will return either an
ICMP echo reply (Windows) or an ICMP port unreachable, indicating that the request or message has reached its
destination.
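
The mechanism can be illustrated with a rough Python sketch that sends UDP probes with increasing TTL values and listens for the ICMP replies. It needs administrative privileges for the raw ICMP socket, sends only one probe per hop, and does no careful reply matching, so treat it as an illustration of the idea rather than a replacement for the real tools; the destination name is a placeholder:

import socket

def traceroute(dest_name, max_hops=30, port=33434, timeout=2.0):
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        # Raw ICMP socket to catch "time exceeded" / "port unreachable" replies.
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                  socket.IPPROTO_ICMP)
        recv_sock.settimeout(timeout)
        # UDP socket that sends an empty probe with a limited TTL.
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, (hop_addr, _) = recv_sock.recvfrom(512)
        except socket.timeout:
            hop_addr = None
        finally:
            send_sock.close()
            recv_sock.close()
        print(ttl, hop_addr or "*")
        if hop_addr == dest_addr:  # the destination answered (port unreachable)
            break

traceroute("www.example.com")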

Use traceroute (or tracert on Microsoft Windows) to determine how far along the path data can successfully reach. But
note that the faulty hops (marked by the asterisk) may be filtering ICMP messages and still forward traffic.

Cisco traceroute works by sending a sequence of three packets for each TTL value, with different destination UDP ports,
which allows it to report routers that have multiple, equal-cost paths to the destination.
Another approach that can be used to verify path connectivity is to inspect individual routing tables hop by hop. However, this approach is not only more tedious, it also requires management access to the routers, which is rarely available.

Check Firewalls

Any security devices and VPN connections along the data path need to be reviewed separately and with special
consideration. Because they were designed for filtering, legitimate traffic can easily be blocked by some default rule or a
rule that is unintentionally too broad. Firewalls are also one of the most common sources of connectivity issues.

What makes troubleshooting security devices especially challenging is:

 They often have huge rulesets.

 NAT adds complexity.

 Commands like traceroute and ping may not work through NAT.

 Limited personnel have access.

If available, make use of tools like the packet-tracer command on Cisco ASA adaptive security appliances to help you
understand what happens to packets.

Many firewalls also act as routers. If you believe that NAT and filtering are not an issue, you should be able to use tools like
traceroute and ping on the device itself (if you have access).

Verify Traffic Reaching Host

As the last step in troubleshooting network issues, you can verify that traffic is reaching the intended host, making sure
that the network indeed provides connectivity. Available on a majority of Linux distributions, tcpdump is an invaluable tool
that is used to capture and display ("dump") packets on the command line. An alternative for Microsoft Windows systems
is a GUI packet capture program called Wireshark.

The tcpdump and Wireshark programs allow you to:

 Capture and display IP packets

 See transmitted and received traffic

 Observe network behavior, performance, and traffic source

 Analyze network protocols

 Filter only relevant traffic with various options and filters

Observe the packet capture of an unsuccessful telnet connection attempt. The reply to the TCP SYN was a TCP RST packet
because no Telnet service is running on this host.

# tcpdump -i any port 22 or port 23

<output omitted>

21:38:57.969164 IP6 localhost.47744 > localhost.telnet: Flags [S], seq 1970484646, win 43690, options [mss
65476,sackOK,TS val 484369352 ecr 0,nop,wscale 7], length 0

21:38:57.969174 IP6 localhost.telnet > localhost.47744: Flags [R.], seq 0, ack 1970484647, win 0, length 0

The command arguments that are used instruct tcpdump to show traffic from any interface (-i any) and have as source or
destination ports either 22 or 23. Other useful filters include source and destination (src, dst) and also specific protocols
(udp, tcp, icmp).
Explaining the Impact of Network Constraints on Applications 

Now, you will finish the overall process of building a relationship between the network and applications by reviewing
examples that explain what affects latency, bandwidth, jitter, and packet loss on video conferencing, streaming, and
interactive web applications.

Traffic Characteristics

Before you delve into the impact that latency, bandwidth, jitter, and packet loss have, it is important to first define the
different traffic types, their characteristics, and their resource needs.

Data traffic is not real-time traffic. It comprises bursty (or unpredictable) and widely varying packet arrival times. Many
types of application data exist within an organization. For example, some are relatively noninteractive and therefore not
delay-sensitive (such as email). Other applications involve users entering data and waiting for responses (such as database
applications) and are therefore very delay-sensitive. You can also classify data according to its importance to the overall
corporate business objectives. For example, a company that provides interactive, live e-learning sessions to its customers
would consider that traffic to be mission-critical.

Voice and video traffic is real-time traffic and comprises constant and predictable bandwidth and packet arrival times.

Network  control traffic is related to the operation of the network itself. One example of this type of traffic is routing
protocol messages; the size and frequency of these messages vary, depending on the specific protocol that the network
uses and the stability of the network. Network management data is another example, including SNMP traffic between
network devices and the network management station.

Network Constraints

In networks, you will find a mix of data, voice, and video, as well as network control traffic. Each traffic type has different
properties:

 Bandwidth: The amount of free capacity needed on a link in order for the service to function smoothly.

 Delay (latency): The time it takes for a packet to reach its destination. One-way delay/latency is from the sender to
the receiver.

 Round-trip delay/latency: The combined time for a packet to reach its destination and for its reply to return to the sender.

 Jitter: The time variance between the highest and the lowest value for delay/latency.

 Packet loss: Packets that are not received or are lost in transit, for whatever reason.

The following table defines some of the main factors that affect these properties:

Application Types Most Affected

Video traffic comprises several traffic subtypes, including passive streaming video, real-time interactive video, and video
conferences. Video traffic can be in real time, but not always. Video has varied bandwidth requirements, and it comprises
different types of packets with different delay and tolerance for loss within the same session.

Video conferencing  has the same delay, jitter, and packet loss requirements as voice traffic. The difference is the
bandwidth requirements; voice packets are small, while video-conferencing packet sizes can vary, as can the data rate. A
general guideline for overhead is to provide 20 percent more bandwidth than the data currently requires.

Streaming video has different requirements than interactive video. An example of the use of streaming video is when an
employee views an online video during an e-learning session. As such, this video stream is not nearly as sensitive to delay
or loss as interactive video. Requirements for streaming video include a loss of no more than 5 percent and a delay of no
more than 4 to 5 seconds. Depending on how important this traffic is to the organization, it can be given precedence over
other traffic.

When you start watching a recording on the Internet, you might see messages such as "Buffering 50%" before the video
starts in the application that you are running. Buffering compensates for any transmission delays that might occur.

The impact of latency is most felt with:

 Voice traffic

 Video conferencing

 Real-time applications, such as Remote Desktop Protocol

Insufficient bandwidth affects the most:

 Video conferencing

 Streaming

 Interactive web applications

In general, video-conferencing traffic is one of the most demanding types of traffic. The presentation, processing, and
transport of video is much more complex than just voice or data. Requirements on brightness, contrast, color depth,
frames per second, and synchronizing the video with voice are just some of the attributes that determine video quality.
Video is far more sensitive to packet loss than other types of traffic. This sensitivity should be expected once you
understand that interframes require information from previous frames, which means that loss of interframes can be
devastating to the process of reconstructing the video image.

Like bandwidth, packet loss has the most effect on video conferencing and streaming:

 Degraded audio and video quality

 Loss of connection

 Buffers can be used to "stay ahead" of missing packets

Too much jitter presents a problem for the following:

 NTP clients

 Voice and video traffic

 Real-time applications

Jitter is the variation of delay between packets. It becomes a problem when this variation exceeds the threshold that applications can handle, that is, when the variation persists for longer than the receiving device can compensate. NTP clients are an example where jitter makes accurate time calculation very hard, if not impossible.

Section 10: Employing Model-Driven Programmability

Introduction

When we write scripts and applications, we interact with remote systems through various application programming
interfaces (APIs). REST, for example, as the most common approach, specifies how we use HTTP and HTTPS to transfer
data and manage resources with HTTP methods. But it does not define the actual content of API calls, the data exchanged.
So, we still need to consult the API documentation and construct the payload manually.

The missing piece is a description of the resources, their structure, and their constraints. This is the purpose of a data model: to
tell us what can be configured. With a model, we specify how data is organized, which allows us to simplify and streamline
the way we use APIs.

Model-Driven Programmability Stack 

For automation, traditional CLI-based interfaces, with text commands and output, are insufficient. When networking
devices first started supporting APIs, these were RPC-style APIs, taking and executing CLI commands. They were not
programming-friendly. It was difficult to extract information from unstructured text, designed for humans. This led to
development of custom device APIs. However, these custom APIs often did not support full functionality and were
sufficiently different from CLI to require additional work to translate commands. Instead, a model-based approach to APIs
can be used to address these and other challenges.

Data models describe the syntax and semantics of working with specific data objects. They answer questions such as:

 How is a VLAN object structured? What properties does it have?

 What is the range of a valid VLAN ID?

 Can a VLAN name have spaces in it?


 Is the value a string or an integer?

One misconception is that data models are used to exchange data, which is not the case. Instead, protocols such as
Network Configuration Protocol (NETCONF) and RESTCONF send JSON- or XML-encoded documents that are governed by a
given model.

Because models focus on what the content is, and not so much on how it is exchanged, they allow the API to do the following:

 Provide efficient and easy-to-use tooling to consume it (programming libraries).

 Support extensible and open interfaces (REST-based, NETCONF, and so on).

 Add flexibility and support for different types of encoding formats (XML and JSON).

 Support different types of transport.

The core components of the complete device API therefore include the following:

 Data models: The foundation of the API consists of data models. Data models define the syntax and semantics,
including constraints of working with the API.

 Transport: Model-driven APIs support one or more transport methods, including SSH, TLS, and HTTP or HTTPS.

 Encoding: Model-driven APIs support the choice of encoding, including XML and JSON, as well as custom
encodings such as Google protocol buffers.

 Protocols: Model-driven APIs also support multiple options for protocols, with the three core protocols being
NETCONF, RESTCONF, and gRPC.

Consider the following figure:

Applications can now use programming libraries or development kits that leverage data models to simplify access to the
API. Behind the scenes, different protocols using various encodings and transports may be used to exchange data, but the
application does not need to concern itself with that. It only needs to operate on the data models. Likewise, the API server
uses the same model, regardless of the protocol, allowing the server to support as many protocols as required.

The two main data encoding formats commonly used are XML and JSON. Each provides a structured way of formatting the data sent between two computer systems. Because the data conforms to the model, it is much easier to navigate and extract relevant information programmatically. This is in stark contrast to issuing CLI commands over Secure Shell (SSH), where data is returned as raw strings (text).

XML and JSON were chosen for data transmission because they have these features:
 Human readable, because they are self-describing

 Hierarchical, because they store values within values

 Parsable and used by many programming languages

However, in some cases other encodings are also used. The figure below compares types of APIs, including the schema language that is used to describe data models, as well as typical encodings and protocols.

As you can see, data models are not really new but are getting more attention as a building block of the model-driven APIs.

Network Automation and NETCONF 

The basic create, read, update, and delete (CRUD) mechanics for network device configuration and operational state are
encapsulated in NETCONF. NETCONF provides a formal and generally interoperable means of opening a secure
management session with a network device. It offers basic operations to act on the configuration data and get the
operational state, mechanisms for notifications, and a set of framework definitions and operations to tie these functions
together.

The basic purpose of NETCONF is to:

 Transport configuration payloads to a device, which is targeted at a specified configuration datastore

 Retrieve configuration data when queried

 Support notifications, often based on SNMP trap definitions


NETCONF is defined in RFC 6241. It is a protocol that is transported over TLS and SSH, which includes operations and
configuration datastore concepts that allow management of a network device. The protocol is exposed as an API that
applications can use to send and receive full and partial configuration data sets, and receive state and operational data
sets. An example of a client-side tool for NETCONF is the ncclient Python tool, which you use in the discovery labs in this
course.
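
As a taste of what ncclient looks like, the following sketch connects to a hypothetical device over the default NETCONF port 830, prints the capabilities advertised in the server's hello, and retrieves the running configuration datastore. The address and credentials are placeholders, and hostkey_verify=False is used only to keep the example short:

from ncclient import manager  # third-party package: pip install ncclient

with manager.connect(host="10.10.10.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    # Capabilities exchanged in the hello messages.
    for capability in m.server_capabilities:
        print(capability)

    # <get-config> against the <running> datastore.
    print(m.get_config(source="running"))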

Some primary features of NETCONF (RFC 6241):

 Defines framework for session management

 RPC messages to put and get configuration data

 Transaction-based communication

 Network service activation with networkwide transaction

 Datastores of configuration data

Configuration Datastores

NETCONF defines the existence of one or more configuration datastores and allows configuration operations on those
datastores. A configuration datastore is defined as the complete set of configuration data that is required to get a device
from its initial default state into a desired operational state. The configuration datastore does not include state data or
executive commands.

The existence of different datastores on a given device is advertised via capabilities, as defined in Section 8 of RFC 6241.
Capabilities are exchanged when you start a NETCONF session with a device, and you can see these capabilities in the
initial message from the NETCONF server on the device.

The running configuration datastore holds the complete configuration that is currently active on the network device. Only
one running configuration datastore exists on the device, and it is always present. NETCONF protocol operations refer to
this datastore using the <running> XML element. The running datastore may or may not support write operations.

If the running datastore cannot be written directly, you can still change the device configuration through
the candidate datastore, identified in messages with the <candidate> XML element.
Cisco IOS XE Software supports a candidate datastore, which needs to be enabled in the CLI. When using NETCONF with
Cisco IOS XE Software, you must explicitly state that it is the candidate datastore that is the target of operations, and you
must explicitly commit changes.

The information that can be retrieved from a running system is separated into two classes:

 Configuration data is the set of writable data that starts the system operations.

 State data is the nonconfiguration data on a system, or operational data, such as read-only status information and
collected statistics.

Protocol Operations

NETCONF provides a set of low-level operations to manage device configurations and retrieve device state information.
These operations include retrieving, configuring, copying, and deleting configuration datastores and retrieving state and
operational data. Additional operations can be provided, based on the capabilities that the device advertises.

All NETCONF operations are encoded as XML messages and data models are described with a language called YANG. The
main motivation for YANG was to provide a standard way to model configuration and operational data so that network
device configuration and monitoring could be based on common models. Common models should be interoperable across
different vendors, instead of being based on vendor-specific CLIs. It was also important to provide a means of defining such
models that were amenable to automated translation. The following figure describes the NETCONF architecture stack.

NETCONF provides the following operations:

 get retrieves running configuration and device state information—that is, operational data.

 get-config retrieves all or part of a specified configuration datastore.

 edit-config loads all or part of a specified configuration to the specified target configuration datastore.

 copy-config creates or replaces an entire configuration datastore with the contents of another complete
configuration datastore.

 delete-config deletes a configuration datastore. The <running> configuration datastore cannot be deleted.

 lock allows the client to lock the entire configuration datastore system of a device. Such locks are intended to be
short-lived. They allow a client to make a change without fear of interaction with other NETCONF clients, non-
NETCONF clients (for example, SNMP and CLI scripts), and human users.

 unlock releases a configuration lock that was previously obtained with the <lock> operation.

 close-session requests the graceful termination of a NETCONF session.


 kill-session forces the termination of a NETCONF session.

Note

For further details about NETCONF operations, see Section 7 of RFC 6241.

Protocol Message Examples

The next example shows an entire NETCONF RPC message, including the message ID (101), as defined by the base IETF
NETCONF namespace.

The operation within the message is get-config, with the target data source of the running configuration—that is, the
configuration that is used by the device for its current configuration state.

Once the operation is processed, a reply is sent back. The message ID of the reply corresponds to the message ID of the
original RPC. The data is for the current device configuration in the example RPC shown in the previous figure. The
response data contains several attributes, such as configuration of the network interfaces:

XML namespaces are widely used and have the following traits:

 They provide a means to mitigate element name conflicts.

 They are defined with the attribute xmlns:prefix="URI"; the prefix is used as an abbreviation of the namespace in the tags.

 You can define a default namespace using xmlns="URI", eliminating the need to have an attribute in each tag.

Namespaces are present in both requests and responses. They are especially important inside configuration elements. It is quite possible that there would never be a conflict of XML tags when working with network devices, but you may want to create custom XML objects. In doing so, you can create a namespace; it essentially becomes an identifier for each XML element. That way, you can read data from more than one source and build a larger object without overwriting anything, because elements remain accessible through their separate namespaces.
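
When you process NETCONF replies in a script, the namespaces are what you key on. This short sketch parses a shortened, hypothetical reply with Python's standard xml.etree.ElementTree module; the prefixes in the namespace map are local to the code and only need to match the URIs:

import xml.etree.ElementTree as ET

REPLY = """
<data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface><name>GigabitEthernet1</name></interface>
  </interfaces>
</data>
"""

# Map local prefixes to the namespace URIs used in the reply.
NS = {
    "nc": "urn:ietf:params:xml:ns:netconf:base:1.0",
    "if": "urn:ietf:params:xml:ns:yang:ietf-interfaces",
}

root = ET.fromstring(REPLY.strip())
for name in root.findall(".//if:interface/if:name", NS):
    print(name.text)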

NETCONF over SSH

SSH is the main transport protocol used by NETCONF. There are a few different steps that occur during a NETCONF session:

1. Client connects to the NETCONF SSH subsystem.

2. Server responds with hello that includes NETCONF-supported capabilities.

3. Client responds with supported capabilities to establish a connection.


4. Client issues a NETCONF request (rpc/operation/content).

5. Server issues a response or performs operation.

The first step is to connect to the NETCONF server. On a Linux system, this is typically done by opening the NETCONF SSH subsystem, for example with a command such as ssh admin@10.10.10.1 -p 830 -s netconf (the username, address, and port depend on your device).

When you connect to the network device and establish a connection, the device sends a hello, and it includes all its
supported NETCONF capabilities. The Cisco Cloud Services Router (CSR) 1000V includes hundreds of capabilities.

The following output has been modified to improve readability.

Note

All messages to and from the NETCONF server or client must end with ]]>]]>. This way, the client and server know that the other side is done sending the message.

When the server sends its hello, the client needs to send a hello with its supported capabilities. You can respond with everything the server supports (assuming the client does as well) or just with the bare minimum edit and get operations.

The bare minimum is what is shown in the figure, just supporting the NETCONF base.
Once the client sends its capabilities, it can then send NETCONF requests. In this example, the client is sending a request to
perform the NETCONF get operation (denoted by <get>). It is then asking for a given section of configuration using a filter.
Based on the devices or servers being used, you often have to be aware of which model is being used (denoted by the XML
namespace).

This filter is selectively asking for the configuration information of GigabitEthernet1.
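The original request is not reproduced here; a hedged sketch of one way to build it (using a subtree filter against the Cisco-IOS-XE-native model, which is an assumption about the model in use) follows:

# The message ID is arbitrary; the filter limits the reply to the
# configuration subtree for GigabitEthernet1.
GET_INTERFACE_RPC = """
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get>
    <filter type="subtree">
      <native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
        <interface>
          <GigabitEthernet>
            <name>1</name>
          </GigabitEthernet>
        </interface>
      </native>
    </filter>
  </get>
</rpc>
"""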

The server processes the client request and responds with the configuration as expected.

NETCONF also supports capability discovery and model downloads. Supported models are discovered using the ietf-netconf-monitoring model. You can see the revision dates for each model in the capabilities response. Data models are available for optional download from a device using the get-schema RPC.

Exploring YANG Models 

YANG is a modeling language that is used with NETCONF and in an increasing number of other domains. It is used to define
configuration and operational state data models, additional operational functions, and notifications that are transported
within NETCONF commands.

Here are some primary features of YANG (RFC 6020):

 Language for data modeling and RPCs

 Describes data structure, type, and constraints in the form of a schema

 Standardizes the representation of data

 Allows configuration to be checked if valid

 Originally intended for NETCONF payloads

 Now also used elsewhere as an interface definition language—for example, in OpenDaylight

YANG is a data modeling language that originally was defined in RFC 6020 circa 2010. It is used to model configuration and
state data that NETCONF manipulates, and the NETCONF remote procedure calls (RPCs) and notifications. Beyond its roots
in NETCONF, YANG is now used as a general-purpose interface definition and data modeling language in different
programming environments, including Cisco NSO, OpenDaylight (ODL), and with tools like YANG Development Kit (YDK).

The first version of NETCONF was defined before YANG, so the payload was, in effect, undefined. Configuration payloads
defaulted to the CLI that was used on the device in question. This approach is helpful because it reduces implementation
costs; it also allows timely access to new features that only the CLI can configure. However, it has all the disadvantages of
the CLI that were previously discussed.

Because the CLI is unstructured and not interoperable, YANG was proposed as a way to define common data models for
NETCONF configuration payloads and notifications. Currently, some NETCONF implementations use CLI, some use vendor-
specific YANG models, and some use common models. While YANG itself is a mature standard, standardization of data
models is still an ongoing effort, driven by multiple organizations.

YANG Structure

A YANG module defines a hierarchy of data that supports a complete description of all data that is sent between a
NETCONF client and server. YANG models the hierarchical organization of data as a tree in which each node has a name
and a value or a set of child nodes. YANG provides clear and concise descriptions of the nodes and the interaction between
those nodes.

Data models are structured with modules and submodules. A module can import data from other external modules and
include data from submodules. The hierarchy can be augmented, allowing one module to add data nodes to the hierarchy
defined in another module. This augmentation can be conditional, with new nodes appearing only if certain conditions are
met.
Models can describe constraints to enforce on the data, restricting the appearance or value of nodes based on the
presence or value of other nodes in the hierarchy. These constraints are enforceable by the client or the server.

A set of built-in types is defined, and a type mechanism exists through which additional types may be defined. Derived
types can restrict the set of valid values of their base type using mechanisms like range or pattern restrictions. They can
also define conventions for use of the derived type, such as a string-based type that contains a hostname.

YANG permits the definition of reusable groupings of nodes. The instantiation of these groupings can refine or augment
the nodes, allowing it to tailor the nodes to its particular needs. Derived types and groupings can be defined in one module
or submodule and used in that location or in another module or submodule that imports or includes it.

Data hierarchy constructs include defining lists where keys distinguish list entries from each other. Such lists may be
defined as user sorted or automatically sorted by the system. For user-sorted lists, operations are defined for manipulating
the order of the list entries.

Modules can be translated into an equivalent XML syntax called YANG-Independent Notation (YIN). YIN allows applications
that use XML parsers and Extensible Stylesheet Language Transformations (XSLT) capabilities to operate on the models.
The conversion from YANG to YIN is lossless, so content in YIN can be converted back to YANG without losing information.

A balance is struck between high-level data modeling and low-level bits-on-the-wire encoding. The reader of a YANG
module can see the high-level view of the data model and understand how the data will be encoded in NETCONF
operations.

YANG is an extensible language that allows standards bodies, vendors, and individuals to define extension statements. The
statement syntax allows these extensions to coexist with standard YANG statements in a natural way. Extensions in a
YANG module stand out sufficiently for the reader to notice them.

Some common extensions and models defined by IETF include:

 RFC 6022: YANG Module for NETCONF Monitoring

 RFC 7223: A YANG Data Model for Interface Management

 RFC 7277: A YANG Data Model for IP Management

 RFC 8519: YANG Data Model for Network Access Control Lists (ACLs)

Other organizations providing YANG-related work:

 Vendors: Cisco IOS, IOS XR, IOS XE models

 CableLabs: Cable system management

 Open Networking Foundation: Service Provider networking

 OpenConfig Group: Publishes models, documentation, and other material for the community

Data Nodes Generically Mapped to XML

Data modeled in YANG can be expressed in XML; that is, an instance of something that is modeled in YANG can be encoded as an XML document. The figure shows the generic rules for mapping YANG model elements to XML document elements.

An attribute of a YANG model is defined in a leaf, which maps to an XML document element of the same name, with the value appearing between the opening and closing tags. A list maps to multiple document elements, one per entry; the entries are not wrapped in any <list> </list> elements. If you require such a wrapper element, you use a container in the YANG model. Lists are used for multiple instances of a node in an XML document, not for single attributes or leaf objects.

Leaf Statement with XML

To illustrate the leaf-attribute concept further, the following example shows the definition of a leaf called dataRecords.
Note that the definition of the leaf is comprehensive, so a client or server could use the model definition to validate the
data and apply constraints to it. The wire representation in a NETCONF message, however, includes only the XML element
and its value.

Example of a YANG model transformed in XML representation:

The type yang:counter64 comes from the ietf-yang-types module, which is defined in RFC 6991. You can see the ietf-yang-types.yang model file in ~/git/yang/standard/ietf/RFC.

List Statement with XML

This example shows a list of servers with leaf objects that are used for the attributes of each server. The XML representation shows multiple server elements but does not contain any <list> </list> elements. Note that neither the ip nor the port leaf is mandatory, so these leafs do not appear in the XML document unless they have a value for a given server instance. This approach allows more efficient XML encoding.

Also note the inet: prefix for the ip-address and port-number types. These types are defined in the ietf-inet-types module.
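The original listing is not shown; a hedged sketch of what the XML instance data could look like (the names and addresses are invented, and the second entry omits the optional port leaf) follows:

# There is no wrapping <list> element; each entry is its own <server>
# element, and optional leafs appear only when they have values.
SERVERS_XML = """
<server>
  <name>smtp</name>
  <ip>192.0.2.1</ip>
  <port>25</port>
</server>
<server>
  <name>backup</name>
  <ip>192.0.2.2</ip>
</server>
"""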

Container Structure with Augmentation

This example shows a container called "interfaces" that contains a list of interfaces, each with several attributes defined as leafs. This example is based on RFC 7223.

YANG allows a module to augment a data model by inserting additional YANG nodes into the model. This ability allows
vendors to add vendor-specific parameters to standard data models in an interoperable way.
The augment statement allows a module or submodule to add nodes to the schema tree. Its argument is a string that identifies a node in the schema tree, called the augment target node. The target node is augmented with the nodes that are defined in the substatements that follow the augment statement.

In this example, IPv4 addresses are enabled on the interface node in the interfaces container with a when condition that
specifies that the interface is of a given type.

The when statement makes its parent data definition statement conditional. The node that is defined by the parent data
definition statement is only valid when the condition that is specified in the when statement is satisfied. As with many
other parts of YANG, the argument of the statement is an XPath expression.

If the when statement is a child of an augment statement, then the context node is the augment's target node in the data tree. This example is based on the YANG model file for RFC 7223.

Interface IP Configuration Example

Let's now take a look at a real example.

The following (abbreviated) model describes IP configuration:

This model is defined in RFC 8344 as a standard way of configuring IP addresses on an interface. The full model also
contains additional parameters, such as IPv6 configuration, which we will not use for this example.

First, notice that the above model references the ietf-interfaces model (through the import statement). This is required because the IP configuration builds upon the generic interface model, using the augment statement. To put it simply, the ietf-ip module adds a new ipv4 container to each interface on a device.

Inside the ipv4 container, a list (address) allows you to define multiple IP addresses for each interface. Each address also
comes with a prefix-length, which is used in calculating the subnet mask. Consider 192.168.1.10/24, where 192.168.1.10 is
the address and 24 is the prefix-length.
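As a quick illustration of that relationship (a small sketch using Python's standard ipaddress module):

import ipaddress

# 192.168.1.10/24: the address plus a prefix length of 24,
# which corresponds to a 255.255.255.0 subnet mask.
iface = ipaddress.ip_interface("192.168.1.10/24")
print(iface.ip)               # 192.168.1.10
print(iface.network.netmask)  # 255.255.255.0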

Note

You can find the full YANG model in the ietf-ip@2014-06-16.yang file in the ~/git/yang/standard/ietf/RFC directory on the
lab Student Workstation.

Using the YANG model, you can construct an XPath filter to limit the results returned by NETCONF, so that you only get the data you are interested in.

You can reuse the same approach to make changes, such as updating an IP address. Alternatively, you can create the XML manually by consulting the model.

Such an XML document is then sent through a NETCONF session to change the IP address to 192.168.1.15 on the target
device's eth0 interface.
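The XML document itself is not reproduced in this text; a hedged reconstruction, built from the ietf-interfaces and ietf-ip models described above, could look like this:

EDIT_CONFIG_RPC = """
<rpc message-id="103" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target>
      <running/>
    </target>
    <config>
      <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
        <interface>
          <name>eth0</name>
          <ipv4 xmlns="urn:ietf:params:xml:ns:yang:ietf-ip">
            <address>
              <ip>192.168.1.15</ip>
              <prefix-length>24</prefix-length>
            </address>
          </ipv4>
        </interface>
      </interfaces>
    </config>
  </edit-config>
</rpc>
"""
# With no operation attribute, the default merge operation applies and the
# new address is merged into the running configuration datastore.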

The XML above uses the edit-config NETCONF RPC that loads all or part of a specified configuration. The device analyzes
the source and target configurations and performs the requested changes. Elements in the config subtree may contain
an operation attribute. The attribute identifies the point in the configuration at which to perform the operation and may
appear on multiple elements throughout the config subtree. If the operation attribute is not specified, then the default
operation is merge and the configuration is merged into the configuration datastore.

The operation attribute can take one of the following values:

 merge: The configuration data that the element containing this attribute identifies is merged with the
configuration. Data is merged at the corresponding level in the configuration datastore that the target parameter
identifies. This behavior is the default.

 replace: The configuration data that the element containing this attribute identifies replaces any related
configuration in the configuration datastore that is identified by the target parameter. Only the configuration that
is actually present in the config parameter is affected.
 create: The configuration data that is identified by the element containing this attribute is added to the
configuration only if the configuration data does not exist on the device. If the configuration data exists, an <rpc-
error> element is returned with an <error-tag> value of "data-exists."

 delete: The configuration data that is identified by the element containing this attribute is deleted in the
configuration datastore that is identified by the target parameter.

Remote Procedure Call

YANG also supports the definition of RPCs. In other words, in addition to the intrinsic NETCONF operations—get, get-
config, and similar—additional operations on the server and its data sets can also be defined in YANG. YANG is used like an
Interface Definition Language (IDL), similar to the Common Object Request Broker Architecture (CORBA) IDL or Web
Services XML message definitions.

This example shows an RPC activate-software-image, which is defined in YANG, with the corresponding XML documents
for the NETCONF RPCs.

The example shows the definition of a message to activate a given software image. The operational context is that network
devices may have several images that are stored in flash memory, including the current running image, the previous
image, and an image for upgrade. This RPC can be used to activate a specific image and make it the running image for the
device. NETCONF does not support this common operational requirement as an intrinsic operation.

The definition of the RPC contains the names of the operations, input parameters, and output parameters. The corresponding NETCONF <rpc> XML documents show what an instance of the RPC request and response might look like on the wire, with the namespaces omitted for brevity.
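Because those documents are not reproduced here, the following is only a hedged sketch; the parameter names and the module namespace are hypothetical, invented for illustration:

# Hypothetical request: activate the image named in <image-name>.
ACTIVATE_IMAGE_RPC = """
<rpc message-id="104" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <activate-software-image xmlns="http://example.com/ns/software-mgmt">
    <image-name>device-os-17.1.bin</image-name>
  </activate-software-image>
</rpc>
"""

# Hypothetical reply: an output parameter reports the result.
ACTIVATE_IMAGE_REPLY = """
<rpc-reply message-id="104" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <status>image activated; reload scheduled</status>
</rpc-reply>
"""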

Notifications

The next listing shows a notification, with the YANG definition followed by an example in XML encoding.
NETCONF notifications are defined in RFC 5277. This optional capability is an addition to the base NETCONF definition. The
notification mechanism is conceptually one of subscribing to a stream of notifications with a filter, and with start and stop
times for notification replay.

The notification statement takes one argument, which is an identifier, followed by a block of substatements that holds
detailed notification information.

The example has the YANG definition for a link failure notification, including the interface name and the administration
and operational statuses. It is possible for an interface to be in the administration status of "up", meaning it is configured
to be available. However, in the example, it is operationally down—perhaps because someone disconnected the cable
from the port.
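The listing itself is not reproduced; a hedged sketch of the XML encoding of such a notification (the element names inside the notification body are hypothetical) is:

# The <notification> and <eventTime> wrapper elements come from RFC 5277.
# The interface is administratively up but operationally down, as described above.
LINK_FAILURE_NOTIFICATION = """
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
  <eventTime>2023-03-01T08:15:00Z</eventTime>
  <link-failure xmlns="http://example.com/ns/notifications">
    <if-name>GigabitEthernet1</if-name>
    <if-admin-status>up</if-admin-status>
    <if-oper-status>down</if-oper-status>
  </link-failure>
</notification>
"""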

Perform Basic NETCONF Operations


You will install the netconf-console client and learn how to invoke a NETCONF hello exchange between the NETCONF client
and Cisco CSR1kv1 with netconf-console, get configuration from the device, and create a custom RPC. First, you will set up
the NETCONF interface on the router and invoke a hello exchange. Next, you will learn how to get the desired
configuration from the device with NETCONF. Finally, you will create and send a custom RPC to get the same configuration
using XML.
student@student-workstation:~ /working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password
cisco --hello

<?xml version='1.0' encoding='UTF-8'?>


<nc:hello xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<nc:capabilities>
<nc:capability>urn:ietf:params:netconf:base:1.0</nc:capability>
<nc:capability>urn:ietf:params:netconf:base:1.1</nc:capability>
<nc:capability>urn:ietf:params:netconf:capability:writable-running:1.0</nc:capability>
<nc:capability>urn:ietf:params:netconf:capability:xpath:1.0</nc:capability>
<nc:capability>urn:ietf:params:netconf:capability:validate:1.0</nc:capability>
<nc:capability>urn:ietf:params:netconf:capability:validate:1.1</nc:capability>
<... output omitted ...>
<nc:capability>
urn:ietf:params:netconf:capability:notification:1.1
</nc:capability>
</nc:capabilities>
</nc:hello>
Note:

When you connect to the network device and establish a connection, the device sends a hello, and it includes all its
supported NETCONF capabilities. Cisco CSR 1000V includes hundreds of capabilities.

student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password


cisco --hello | grep Cisco-IOS-XE-native

<nc:capability>http://cisco.com/ns/yang/Cisco-IOS-XE-native?module=Cisco-IOS-XE-native&amp;revision=2018-07-
27</nc:capability>

student@student-workstation:~/working_directory$ netconf-console -h |grep get-config

[--get-config [GET_CONFIG]] [--kill-session SESSION_ID]


Use with --get, --get-config, or --copy-config.
XPath filter to be used with --get, --get-config, and
--get-config [GET_CONFIG]
student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password
cisco --get-config

<?xml version='1.0' encoding='UTF-8'?>


<data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<… output omitted …>
</native>
<… output omitted …>
<interfaces xmlns="http://openconfig.net/yang/interfaces">
<… output omitted …>
<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
<… output omitted …>

Note:

The same configuration can be described using different YANG models, such as Cisco-IOS-XE-native, OpenConfig, and IETF.

student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password cisco --get-config --dry

<?xml version='1.0' encoding='UTF-8'?>


<nc:rpc xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:68b27a38-e146-492f-a1d2-
e52acb8ef5de">
<nc:get-config>
<nc:source>
<nc:running/>
</nc:source>
</nc:get-config>
</nc:rpc>

student@student-workstation:~/working_directory$ netconf-console --host=10.0.0.20 --port 830 --user cisco --password cisco --get-config /native --dry

<?xml version='1.0' encoding='UTF-8'?>


<nc:rpc xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:76741fff-0a25-4055-b3f4-
4d806b536cbd">
<nc:get-config>
<nc:source>
<nc:running/>
</nc:source>
<nc:filter type="xpath" select="/native"/>
</nc:get-config>
</nc:rpc>

Note:

As you can see, the RPC that netconf-console sends differs depending on whether a filter is supplied with --get-config. XPath is the default filter type available with this command; XPath uses path expressions to select nodes in an XML document.

student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password cisco --get-config /native
<?xml version='1.0' encoding='UTF-8'?>
<data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<version>16.9</version>
<... output omitted ...>

<... output omitted ...>


<interface>
<GigabitEthernet>
<name>1</name>
<ip>
<address>
<primary>
<address>192.168.0.30</address>
<mask>255.255.255.0</mask>
</primary>
</address>
</ip>
<mop>
<enabled>false</enabled>
<sysid>false</sysid>
</mop>
<negotiation xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-ethernet">
<auto>true</auto>
</negotiation>
</GigabitEthernet>
<GigabitEthernet>
<... output omitted ...>
</interface>
<... output omitted ...>

Note:

The interfaces are inside the interface tag in the native module.


student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password
cisco --get-config /native/interface

<?xml version='1.0' encoding='UTF-8'?>


<data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<interface>
<...output omitted ...>
</interface>
</native>
</data>

student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password cisco --get-config /native/interface/GigabitEthernet

<?xml version='1.0' encoding='UTF-8'?>


<data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<interface>
<GigabitEthernet>
<... output omitted ...>
</GigabitEthernet>
<...output omitted ...>
</interface>
</native>
</data>

Note:

Because the device has only GigabitEthernet interfaces, the configuration output is the same as before.

student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password cisco --get-config /native/interface/GigabitEthernet[name=5]

<... output omitted ....>


<interface>
<GigabitEthernet>
<name>5</name>
<description>interface description</description>
<ip>
<address>
<primary>
<address>10.0.0.20</address>
<mask>255.255.255.0</mask>
</primary>
</address>
</ip>
<mop>
<enabled>false</enabled>
<sysid>false</sysid>
</mop>
<negotiation xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-ethernet">
<auto>true</auto>
</negotiation>
</GigabitEthernet>
<... output omitted ...>

Note

The GigabitEthernet node is a list. To select a specific entry, you write its key value inside "[]" (square brackets), as in [name=5]. To inspect the YANG structure and types of the modules, you can use the --get-schema command.

student@student-workstation:~/working_directory$ netconf-console --help |grep rpc

[--rpc [RPC]] [--sleep SLEEP] [-e EXPR] [--dry]


--rpc [RPC] Takes an optional filename (or '-' for standard input)
NETCONF rpc operation (w/o the surrounding <rpc>).
student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password
cisco --get-config /native --dry

<?xml version='1.0' encoding='UTF-8'?>


<nc:rpc xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:76741fff-0a25-4055-b3f4-
4d806b536cbd">
<nc:get-config>
<nc:source>
<nc:running/>
</nc:source>
<nc:filter type="xpath" select="/native"/>
</nc:get-config>
</nc:rpc>

<?xml version='1.0' encoding='UTF-8'?>


<nc:rpc xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:76741fff-0a25-4055-b3f4-
4d806b536cbd">
<... output omitted ...>

Note:

The RPC sends the XML version, UTF-8 encoding, NETCONF version, and message ID. This information is added to every RPC that is sent with the --rpc command, so you do not need to specify it inside the custom-rpc.xml file. Everything else inside the RPC tag is called the payload and needs to be specified.

Step 26

Send the RPC.

Answer:

student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password cisco --rpc custom-rpc.xml
Operation failed: XMLSyntaxError - Namespace prefix nc on get-config is not defined, line 2, column 17 (rpc-wrong.xml,
line 2)
Note:

It looks like the custom-rpc.xml file triggers a syntax error because the namespace prefix is not defined. Although the --rpc command adds the namespaces before it sends the RPC, netconf-console first checks the payload for a namespace. Because no namespace prefix is declared inside the custom-rpc.xml file, the operation fails with a namespace prefix error.

Step 27

Update the custom-rpc.xml file with the correct prefix nc on get-config. Check the correct XML syntax again with --get-
config and --dry. Check the nc namespace prefix inside the RPC tag.

Answer:

student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password cisco --get-config --dry

<?xml version='1.0' encoding='UTF-8'?>


<nc:rpc
<nc:get-config>

Step 28

Copy the nc namespace prefix to the get-config tag inside your custom-rpc.xml file.

Answer:

<nc:get-config xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<nc:source>
<nc:running/>
</nc:source>
<nc:filter type="xpath" select="/native"/>
</nc:get-config>

Note:

Alternatively, you can define a default namespace, xmlns="urn:ietf:params:xml:ns:netconf:base:1.0", inside the get-config tag. That way, you can skip the namespace inside every other XML tag; tags without an explicit namespace inherit the default namespace of their enclosing element.

Step 29

Send the custom RPC again.

Answer:

student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password cisco --rpc custom-rpc.xml

<?xml version='1.0' encoding='UTF-8'?>


<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-
id="urn:uuid:d580f15d-6cad-4c8f-a309-e831492c43e8">
<data>
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<version>16.9</version>
<... output omitted ...>
Step 30

Update the custom RPC so that you only get the configuration of the GigabitEthernet5 interface with the full XPath.

Answer:

<nc:get-config xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<nc:source>
<nc:running/>
</nc:source>
<nc:filter type="xpath" select="/native/interface/GigabitEthernet[name=5]"/>
</nc:get-config>

Step 31

Send the updated custom RPC to retrieve the configuration of the GigabitEthernet 5 interface.

Answer:

student@student-workstation:~/working_directory$ netconf-console --host 10.0.0.20 --port 830 --user cisco --password cisco --rpc custom-rpc.xml

<?xml version='1.0' encoding='UTF-8'?>


<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-
id="urn:uuid:9a2137f4-5a51-4792-a60b-cbf64efdca1b">
<data>
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<interface>
<GigabitEthernet>
<name>5</name>
<... output omitted ...>

Utilizing Data Models with RESTCONF Protocol 

HTTP-based RESTCONF provides a programmatic interface that is based on standard mechanisms for accessing
configuration and state data. You can also access data model-specific RPC operations and events that are defined in the
YANG model. It is defined in RFC 8040.

RESTCONF offers these characteristics:

 Functional subset of NETCONF

 Exposes YANG models via a REST API (URL)

 Uses HTTP or HTTPS as transport

 Uses XML or JSON for encoding

 Developed to use HTTP tools and programming libraries

 Uses common HTTP verbs in REST APIs


RESTCONF is a REST-like protocol that provides a mechanism over HTTP for accessing data that is defined in NETCONF
datastores and modeled in YANG.

RESTCONF combines the HTTP protocol simplicity with the predictability and automation potential of a schema-driven API.
A client can determine all management resources for YANG models and NETCONF capabilities. Therefore, the URIs for
custom protocol operations and datastore content are predictable, based on the YANG module definitions. The generation
of the code to support RESTCONF APIs, and the mapping of these API calls to NETCONF, can be automated because the
mapping from a YANG model to a RESTCONF Uniform Resource Identifier (URI) is well-defined.

RESTCONF helps support a common REST-based programming model for network programming in general. This model
aligns with the wider trend in infrastructure programming to support REST APIs.

This mapping follows the general pattern of most REST APIs. Resources representing configuration data can be modified
with the DELETE, PATCH, POST, and PUT methods. Data is encoded with either XML or JSON.

The HTTP GET operation represents the same semantics as the NETCONF GET and get-config operations, and can also be
used for notifications. The HTTP PATCH operation supports partial configuration updates in a way that is similar to the
NETCONF edit-config operation with operation=merge. The HTTP PUT operation is similar to PATCH, but it is typically used
to replace the contents of a named resource, rather than changing attribute values of a resource.

The HTTP POST operation is used for NETCONF RPCs and, in some circumstances, to create a resource. The HTTP DELETE
operation is equivalent to the NETCONF edit-config with operation=delete.

The client can also access the YANG libraries that the server implements—that is, the capabilities.

The API resource contains the entry points for the RESTCONF datastore and operation resources. It is the top-level
resource that is referred to by the notation {+restconf}. It has the media type application/yang.api+xml or
application/yang.api+json, depending on whether the encoding of the payload document is XML or JSON.

The YANG tree diagram for an API resource is as follows:


RESTCONF does not support all the NETCONF operations. Specifically, these operations are not supported:

 Configuration locking

 Candidate configuration

 Startup configuration

 Validate

 Confirmed commit

You can perform more granular operations when doing a change within NETCONF, such as specifying whether you want to
replace an object, update it, or delete it. This example shows how RESTCONF operations map directly back to their
counterparts in NETCONF.

Examples:

 GET http://csr1kv/restconf/api/config/native

1. Retrieve a full running configuration as an object.

 GET http://csr1kv/restconf/api/config/native/interface

1. Retrieve interface-specific attributes.

 GET http://csr1kv/restconf/api/config/native/interface/GigabitEthernet/1

1. Retrieve interface-specific attributes for GigabitEthernet1.

RESTCONF utilities and tools:

 The same tools that are used for native REST interfaces are used for RESTCONF:

1. Python requests module

2. Postman

3. Firefox RESTClient

Note:

There are no API docs, so YANG tools will be used to generate the URL and request body.
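For example, using the Python requests module with one of the URLs shown earlier in this topic (a hedged sketch; the media type, credentials, and reachability of the csr1kv host are assumptions that depend on the device and the RESTCONF version it runs):

import requests

url = "http://csr1kv/restconf/api/config/native/interface/GigabitEthernet/1"
headers = {"Accept": "application/vnd.yang.data+json"}  # assumed media type

# Credentials are placeholders for this sketch.
response = requests.get(url, auth=("cisco", "cisco"), headers=headers)
print(response.status_code)
print(response.text)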

Get Interface

This example shows a request and response for an interface:


This example request is made on the resource {+restconf}/data, which is a mandatory resource that represents the combined configuration and operational state data that a client can access. The example shows the client request and the server response; in this case, an interface resource is being requested, to a depth of 1.

The depth parameter is used to specify the number of nest levels that are returned in a response for a GET method. It is
defined in the IETF RESTCONF protocol draft, Section 4.8.2, which says: The "depth" parameter is used to limit the depth of
subtrees returned by the server. Data nodes with a depth value greater than the "depth" parameter are not returned in a
response for a GET method.

The requested data node has a depth level of "1." If the "fields" parameter (Section 4.8.3) is used to select descendant
data nodes, then these nodes and all their ancestor nodes have a depth value of 1. This fact has the effect of including the
nodes that are specified by the fields. The nodes are included even if the depth value is less than the actual depth level of
the specified fields. Any other child node has a depth value that is 1 greater than its parent.

The value of the depth parameter is either an integer from 1 to 65535 or the string "unbounded," which is the default.

Get Interface Description

This example shows how to request only the interface description:

This RESTCONF GET example is a request for the interface description only. It is a specific example of how to construct a
request URL to select leaf nodes from within a model structure.

Get YANG Library Version

This example shows how to get information on the YANG library version. The example request is made on the {+restconf}/yang-library-version resource.

This mandatory leaf identifies the revision date of the  ietf-yang-library YANG module, which is implemented by a server.
This example shows how a client can determine which models the server supports and how the client can interact with it.

Invoke RPC

This example shows how to invoke an RPC:

This request is made on the {+restconf}/operations resource. This resource is an optional resource that provides access to
the data model-specific protocol operations that the server supports, like YANG RPCs.

The operation resource is defined with the YANG RPC statement, or is a data model-specific action that is defined with a
YANG action statement. It is invoked using a POST method on the operation resource POST
{+restconf}/operations/<operation>.

The output shows a module with an RPC reset with no input or output parameters. The request to invoke that operation is
a POST on the operation with a name that is created from the module name and the RPC name, which are separated by a
colon.

If the operation is invoked without errors, and there is no output section, then the response message must not include a
message body in the response message. It must send a "204 No Content" status line instead, as shown in the example. If
the operation input is not valid, or the operation is invoked but errors occur, the server must send a message body
containing an errors resource.
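A hedged sketch of invoking such an operation with the Python requests module follows; the module name example-ops, the reset RPC, the host, the media type, and the credentials are all hypothetical placeholders:

import requests

# POST on the operation resource: module name and RPC name joined by a colon.
url = "https://device.example.com/restconf/operations/example-ops:reset"
headers = {"Accept": "application/yang-data+json"}  # assumed media type

response = requests.post(url, auth=("admin", "admin"), headers=headers, verify=False)
# With no output parameters and no errors, the expected status is 204 No Content.
print(response.status_code)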

Using Python Scripts and Cisco SDKs 

A Software Development Kit (SDK) is a collection of tools that are used for developing software or applications. It is usually supplied by software or hardware vendors to ease integration with their products. An SDK often consists of APIs, tools, sample code, documentation, and much more.
Cisco developed various Python libraries and SDKs to help you interface with Cisco products. One example is the Cobra
Python SDK for provisioning Cisco ACI, freely available from GitHub. But not all Cisco-developed SDKs are specific to Cisco
devices. Some are useful across a wide variety of systems.

The YANG Development Kit (YDK) is an SDK that provides an easy way to access APIs modeled with YANG.

YANG Development Kit

The main goal of YDK is to reduce the learning curve for interfacing with YANG-based APIs, such as NETCONF and RESTCONF. When using models, the programmatic interface (functions, data objects, URLs), called an API binding, can be automatically generated from the data models, hence the term "model-driven." As you can see in the figure below, the generated bindings directly correspond to the YANG model.

YDK's model-based bindings offer the following benefits:

 Simplify application development

 Abstract transport, encoding, modeling language

 API generated from the YANG model

 One-to-one correspondence between model and class hierarchy

 Multiple language (Python, C++, Ruby, Go, and so on)

YDK uses language-specific bindings that are created from YANG models. You can use prebuilt bindings, such as YDK-Py, or generate your own. YDK-Py is a set of pregenerated Python modules that adhere to common YANG models. For example, you can use YDK-Py to configure, from Python, a device that adheres to a given model, such as the OpenConfig Border Gateway Protocol (BGP) model. More advanced users who are developing custom data models can autogenerate their own Python bindings using YDK-gen, which is freely available in the Cisco DevNet GitHub community at https://github.com/CiscoDevNet.

Generating YDK Bindings

YANG data models contain the information required to generate the API bindings. YDK-gen reads the models, and its output is a set of Python objects that can be used to interface with devices that support the given model.

To create custom Python bindings you will need:

 YANG module(s)

 Profile file

1. Contains metadata (header)

2. References modules

3. List dependencies

4. In JSON format

The first requirement is, of course, the YANG module or modules. The listing above contains a sample YANG file we will use. For simplicity, it only describes the configuration of a managed system's host name and search domains (as used with DNS). This could also have been a device-specific data model, like Cisco IOS XE's, or a set of standard models, such as the IETF's.

You also need to construct a JSON profile to use with the generate.py script.


Referenced models can include files, directories, or Git repositories.

A profile is simply metadata about the project, but more importantly, it states which YANG modules to include in the bundle. The figure uses the file parameter to specify modules, but you can also reference Git repositories or whole directories. Instead of file, you can use the git key to automatically clone Git repositories, or the dir key if you have multiple modules in a given directory or directories on your local machine.

The last step is to create the bindings. You do this by running the generate.py --python --bundle <profile> command.

Note:

It is a common practice to store your profile in the profiles subdirectory of the YDK-gen directory.

Once you run the command, the generated files are placed in the ./ydk-gen/gen-api subdirectory. Here is an example of the output after the bindings are generated in the gen-api subdirectory:

You can start using the bindings in Python immediately. Note how the path to the Python module is constructed in the output above, because this path is reflected in the import statements used inside scripts, such as the next one.
Observe how every object in the YANG module gets an equivalent Python object.

The most important point to realize is that the modules are autogenerated. Without using model-driven APIs, each of
these Python objects would need to be manually created. It would be a long and arduous process adding in error handling
and client-side checks. Because these objects are autogenerated from models, all of that happens automatically.

Using YDK

YDK is composed of a core package that defines services and providers, plus one or more module bundles that are based
on YANG models. Each module bundle is generated using a bundle profile with YDK-gen and contains definitions for data
objects. While data objects are also useful on their own, the true power of YDK comes from combining them with services
and providers.

Functionality provided by different YDK components:

 Models group Python objects created for each YANG model (autogenerated).

 Services perform operations on model objects using providers.

 Providers implement services over different protocols.

This design allows us to separate the interface to operations (provided by services) from its implementation (providers).

Code using different protocols/providers can be mostly the same.

Another significant benefit of model-driven APIs is client-side validation. It means your application that is written using
YDK-Py bindings already understands the constraints that are embedded in the YANG model. Therefore, if you go to push a
change to a device, an error is raised in Python (or a given language) before an API call is even made to the device.
The YDK service automatically performs local (client-side) validation for you:

 Type checks: enum values, string, number, and so on.

 Value checks: range, format, and so on.

 Semantic checks: key uniqueness/presence, mandatory values, and so on.

Use Cisco SDK and Python for Automation Scripting 

You will learn how to configure devices using the NETCONF interface, install YDK, and inspect the Python YDK-gen bindings.
First, you will set up the NETCONF interface on the router and access the router configuration. Next, you will install YDK
and inspect a Cisco IOS XE YDK-gen binding. Finally, you will write Python code to get the interface configuration of the
device using YDK-Py and update the description of the interface.
Inspect Device Configuration

In this procedure, you will inspect the NETCONF interface on the Cisco CSR 1000V router, or manually enable the interface
if needed, and check the router configuration.

Step 1

From the desktop, open a terminal and use SSH to CSR1kv1. The router IP address and credentials are in the Job Aids.

Answer:

student@student-workstation:~$ ssh cisco@10.0.0.20

Password:

csr1kv1#

Step 2

Check that the NETCONF interface is enabled.

Answer

csr1kv1# show running-config | include netconf-yang

netconf-yang

Step 3

If netconf-yang is not included in the running configuration of the device, you need to manually enable the NETCONF
interface. Enter configuration mode with the configure terminal command.

Answer

csr1kv1# configure terminal

csr1kv1(config)# netconf-yang

csr1kv1(config)#

Step 4

Exit configuration mode first, and then once again, check that the NETCONF interface is indeed enabled.

Answer

csr1kv1(config)# exit

csr1kv1# show running-config | include netconf-yang

netconf-yang

Step 5

Check the router interfaces and the corresponding details with the show ip interface brief command.

Answer
csr1kv1# show ip interface brief

Interface           IP-Address      OK? Method Status                Protocol

GigabitEthernet1    192.168.0.30    YES NVRAM  up                    up

GigabitEthernet2    192.168.100.20  YES NVRAM  up                    up

GigabitEthernet3    unassigned      YES NVRAM  administratively down down

GigabitEthernet4    unassigned      YES NVRAM  administratively down down

GigabitEthernet5    10.0.0.20       YES NVRAM  up                    up

Install YDK and Inspect a Python Binding

You will install the YDK and Cisco IOS XE model and inspect a Python binding.

Step 6

Open a terminal and go inside the /home/working_directory folder.

Answer

student@student-workstation:~$ cd ~/working_directory/

Step 7

Install the YDK with pipenv.

Answer

student@student-workstation:~/working_directory$ pipenv install ydk

Creating a Pipfile for this project...


Installing ydk...
Installation Succeeded
<... output omitted ...>

Note:

The libydk library requirement is already installed on this Student-VM. If you are going to try to install the YDK in your local
environment (VM), you will first need to install this library. You can install it by using
the wget https://devhub.cisco.com/artifactory/debian-ydk/0.8.4/bionic/libydk-0.8.4-1.amd64.deb and sudo gdebi
libydk-0.8.4-1.amd64.deb commands for the Debian-based operating system. The full documentation for the YDK-gen
installation is available at https://github.com/CiscoDevNet/ydk-gen.

Step 8

Install the Cisco IOS XE model with pipenv.

Answer

student@student-workstation:~/working_directory$ pipenv install ydk-models-cisco-ios-xe

Installing ydk-models-cisco-ios-xe...
Installation Succeeded
<... output omitted ...>
Step 9

Open Visual Studio Code, and open the /home/working_directory folder. Open the cisco-xe.py file, and check the
containing code.

Answer

# Import the Cisco IOS XE Native model


# create a Native object
# print a dir() of the Native object

Step 10

Import the required Cisco_IOS_XE_native module from the ydk.models.cisco_ios_xe package as xe_native.

Answer

# Import the Cisco IOS XE Native model


from ydk.models.cisco_ios_xe import Cisco_IOS_XE_native as xe_native

Step 11

Create a Native object from the xe_native module.

Answer

# create a Native object


native = xe_native.Native()

Step 12

Issue a dir(native) to see all available methods and attributes of the system and print it. The important method for the
following task is the interface.

Answer

# print a dir() of the Native object


print(dir(native))

Step 13

Run the code and find the Interface attribute in the output; it is used in the following task.

Answer

<... output omitted ...>


'Interface'
<... output omitted ...>

Prepare a Python Script and Retrieve Device Configuration Using YDK-Py

You will prepare a Python script and retrieve device configuration using YDK-Py. You will retrieve interface configuration
for the GigabitEthernet5 interface using two different approaches. Finally, you will update the description of the
GigabitEthernet5 interface.

Step 14

Open Visual Studio Code, and open the /home/working_directory folder. Open the cisco-xe1.py file, and check the code.
Answer

from ydk.services import CRUDService


from ydk.providers import NetconfServiceProvider
from ydk.models.cisco_ios_xe import Cisco_IOS_XE_native as xe_model

# CSR1kv1 Credentials
ip = '0.0.0.0'
port_n = 0
user = 'username'
paswd = 'password'
proto = 'protocol'

if __name__ == '__main__':

provider = NetconfServiceProvider(address=ip, port=port_n, username=user, password=paswd, protocol=proto)


# open the connection w/ CRUDService
# create a new instance of Native Interface object
# read the interfaces with the help of read function
# print the primary address of the fifth gigabitethernet interface
exit()

Step 15

Replace the existing credentials with the correct credentials.

Answer

#CSR1kv1 Credentials
ip = '10.0.0.20'
port_n =830
user = 'cisco'
paswd = 'cisco'
proto = 'ssh'

Step 16

Open the connection with the CRUDService module.

Answer

crud = CRUDService()

Note:

The full documentation of the CRUDService module is available


at http://ydk.cisco.com/py/docs/api/services/crud_service.html.

Step 17

Initiate the Native Interface object with the xe_model. Save the object into a variable named xe_interfaces.

Answer

# create a new instance of Native Interface object


xe_interfaces = xe_model.Native.Interface()
Note:

The full documentation of the Cisco_IOS_XE_native model is available


at: http://ydk.cisco.com/py/docs/gen_doc_0fe171c66ddd61f15cb095cc5be949d69c0fa46e.html .

Step 18

The read function reads entities from the device. The function takes two arguments. The first argument is the provider
(ServiceProvider), and the second is the filter (entity or entities). Use the read function from the CRUDServices model to
read the xe_interfaces, and save the object to a variable called interfaces_data.

Answer

# read the interfaces with the help of read function


interfaces_data = crud.read(provider, xe_interfaces)

Step 19

Print the primary address of the fifth Gigabit Ethernet interface. Note that Python starts counting at 0, so the 4 in the list
index is correct.

Answer

# print the primary address of the fifth gigabitethernet interface


print(interfaces_data.gigabitethernet[4].ip.address.primary.address)

Step 20

Run the code.

Answer

10.0.0.20

Step 21

Now, open the /home/working_directory folder once again. This time, open the cisco-xe2.py file, and check the code.

Answer

from ydk.services import CRUDService


from ydk.providers import NetconfServiceProvider
from ydk.models.cisco_ios_xe import Cisco_IOS_XE_native as xe_model

#CSR1kv1 Credentials
ip = '10.0.0.20'
port_n = 830
user = 'cisco'
paswd = 'cisco'
proto = 'ssh'

if __name__ == '__main__':

provider = NetconfServiceProvider(address=ip, port=port_n, username=user, password=paswd, protocol=proto)


crud = CRUDService()
# get hold of only the fifth GigabitEthernet interface without the other interfaces
xe_int_giga = xe_model.Native.Interface.GigabitEthernet()
xe_int_giga.name = '5'
# read the interface
intf_data = crud.read(provider, xe_int_giga)
# print the primary address of the interface
print(intf_data.ip.address.primary.address)
# update the description of the interface
# print the description of the interface
exit()

Step 22

Run the code.

Answer

10.0.0.20

Step 23

Take a closer look at the way the second approach creates the interface object.

Answer

# get hold of only the fifth GigabitEthernet interface without the other interfaces
xe_int_giga = xe_model.Native.Interface.GigabitEthernet()
xe_int_giga.name = '5'

Note

The xe_int_giga variable creates a Gigabit Ethernet object instance. With the xe_int_giga.name variable, you specify the
desired interface. The xe_int_giga variable now holds the fifth (5) GigabitEthernet as an object and can later be read as
such (without other Gigabit Ethernet interfaces).

Step 24

Now, look at the way the print statement is done.

Answer

# print the primary address of the desired interface


print(intf_data.ip.address.primary.address)

Step 25

Write the missing statements. First, write the description of the interface, and then update the description with the update
function. The update function takes the same two arguments as the read function. Use the update function from the
CRUDService.

Answer

# update the description of the interface


intf_data.description = 'interface description'
crud.update(provider, intf_data)
Step 26

Print the updated interface description to make sure that the description is indeed on the interface.

Answer

# print the description of the interface


print(intf_data.description)

Step 27

Run the code.

Answer

10.0.0.20
interface description

Step 28

Open a terminal and use SSH to CSR1kv1 to inspect the transaction on the device. Check the committed interface
configuration on the device.

Answer

student@student-workstation~$ ssh cisco@10.0.0.20

Password:

csr1kv1# show interfaces GigabitEthernet 5

GigabitEthernet5 is up, line protocol is up


Hardware is CSR vNIC, address is 0050.569c.c526 (bia 0050.568c.c526)
Description: interface description
<... output omitted ...>

Model-Driven Programmability in a Cisco Environment 

One of the major byproducts of using models is model-driven programmability, which fully decouples transport, protocol,
and encoding from the model being used. Over the past few years, YANG was predominantly used for NETCONF, which
encodes only in XML. However, because models were first written in YANG, it was easy to build new tooling that
autogenerated URLs and bodies that took advantage of models using a REST API in both JSON and XML. The model then
becomes the definition of what can be done on a device that is completely decoupled from the encoding method.

Even though YANG is becoming the most widely used and standardized way of describing data models in networking,
support is not yet ubiquitous.

Some devices and platforms may still be model driven but not support YANG, such as:

 Cisco Nexus 9000 and 3000 Series with NX-API REST

 Cisco Application Centric Infrastructure (ACI)


They implement an object model, like the example shown in the figure. Each of the items represents an object that
describes a part or an aspect of the system. As the model only describes data (device configuration for example), it allows
you to use different protocols and APIs for management.

Using a data model allows a platform like Cisco ACI to provide the following characteristics:

 Structured, computer-friendly access to data

 Choice of transport, protocol, and encoding

 Model-driven APIs for abstraction and simplification

 Wide standard support while using open source

 Deploys services faster and simpler

 Simplifies application development

 Models manage abstractions of the underlying network device data structures (configurations, state data, and so
on)

These systems use a custom object model that may offer the same properties as if it was built using YANG. For example,
with Cisco ACI, everything is an object. Every object has associated properties and constraints. These constraints are
defined in the Cisco ACI Management Information Model as opposed to a YANG model. This is just a different way to
model network devices. However, if YANG is not used, any associated YANG tooling is not supported. But Cisco ACI already
has a robust toolset of libraries and an object model browser that you can use instead.

Cisco NX-OS Programmability

The Cisco NX-OS open platform allows for programmatic access to Cisco Nexus platforms, providing network
administrators with increased scale, agility, and efficiency. Open NX-OS on Cisco Nexus platforms offers a rich software
suite that is built on a Linux foundation that exposes APIs, data models, and other programmatic constructs.

This support is delivered by several means, including NX-API CLI and NX-API REST, NETCONF, on-box Linux and Python
scripting capabilities, and several traditional means already familiar to network administrators, such as the Scheduler
feature, Embedded Event Manager (EEM), and PowerOn Auto Provisioning (POAP).

Cisco Nexus Programmability Features


Here are the most common scenarios for device configuration and usage of device APIs:

 Day 0 provisioning: Zero-touch device provisioning is commonly associated with compute devices, but network
devices have had this capability for years. POAP was designed to provide advanced Day 0 provisioning capabilities
using an extensible framework. POAP includes the ability to execute Python scripts as part of its workflow. Today,
POAP can download and install additional management agents and apply specific configurations that are based on
information such as location in a network topology.

1. A similar approach is achieved by using the Preboot Execution Environment (PXE). PXE has extended its presence into the network as infrastructure devices are increasingly managed more like servers. Cisco NX-OS uses iPXE, an open source network boot firmware that is based on gPXE/Etherboot.

 Base features: In addition to providing traditional SNMP support, the Cisco Nexus platform also provides Python
scripting capability on devices to provide programmatic access to the switch CLI to perform various tasks, including
POAP and EEM actions. The Python interpreter is included in the Cisco NX-OS Software.

 APIs: Cisco NX-OS provides a built-in web server to respond to HTTP calls to the switch to improve accessibility,
extend capability, and improve manageability of Cisco NX-OS. APIs include NETCONF, NX-API CLI, NX-API REST, and
XMPP.

 Linux on the switch: Cisco Nexus switches have always been built upon a Linux foundation making available a
native Linux Bourne Again Shell (Bash), but today, Cisco NX-OS also provides a Linux guest shell, which is a
separate Linux environment running on the switch inside a container. Currently utilizing a CentOS distribution, an
important benefit for the guest shell is the ability to securely run third-party applications that monitor and control
the switch.

 Configuration management: Cisco NX-OS incorporates a set of tools, features, and capabilities that enable
automation. Modern configuration management tools like Puppet, Chef, and Ansible drive programmability.

NX-API CLI

The following are main features of the NX-API CLI:

 REST-like API that enables programmatic access to Cisco Nexus devices

 Improves accessibility of the CLI by making them available off box

 Supports show commands, configurations, and Linux Bash

On Cisco Nexus devices, CLIs traditionally run only on the device itself, yet they are still used far too often to manage data center networks. NX-API improves the accessibility of these CLIs by making them available outside of the switch over HTTP and HTTPS. You can use NX-API as an extension to the existing Cisco Nexus CLI. NX-API CLI is a great starting point for network engineers who are new to APIs because it still makes use of commands: it sends commands to the device, wrapped in HTTP or HTTPS, but receives structured data back. You can send show commands, configuration commands, and Linux commands directly to the switches using NX-API CLI.

The following transports and encodings are supported for NX-API:

 Runs on HTTP and HTTPS

 CLI commands are encoded into the HTTP/HTTPS POST body

 The request/response format is encoded with JSON-RPC, JSON, or XML

 NGINX HTTP back-end web server to listen for HTTP requests

NX-API CLI uses HTTP and HTTPS as its transport. CLIs are encoded into the HTTP and HTTPS POST body. Here is one
example of an HTTP request using JSON-RPC encoding:
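The original figure is not reproduced here; the sketch below (using the Python requests module, with a placeholder switch address and placeholder credentials) builds an equivalent request, showing how the CLI command is encoded into the JSON-RPC POST body:

import requests

url = "https://10.0.0.5/ins"  # NX-API endpoint; the address is a placeholder
payload = [
    {
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": "show hostname", "version": 1},
        "id": 1,
    }
]
headers = {"Content-Type": "application/json-rpc"}

# The CLI command travels in the POST body, encoded as JSON-RPC.
response = requests.post(url, json=payload, headers=headers,
                         auth=("admin", "password"), verify=False)
print(response.json())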

The NX-API back end uses the NGINX HTTP server. The NGINX process, and all its children processes, are under Linux
cgroup protection where the CPU and memory usage are capped. If the NGINX resource usage exceeds the cgroup
limitations, the NGINX process is restarted and restored. The NX-API back end uses a lightweight on-box web server to
listen for HTTP requests, which are converted to CLI and used to retrieve data or push configurations. The request-and-
response format is encoded with JSON-RPC, JSON, or XML.

NX-API supports XML, JSON, and JSON-RPC, and commands are sent in a single HTTP request within a CLI wrapper. The following examples show the same request and response for show hostname in all three data encoding formats.
NX-API supports HTTPS; therefore, all communication to the Cisco Nexus device can be encrypted. By default, NX-API is
integrated into the authentication system on the device. You access the device through the NX-API using user accounts
containing a username and password, which are contained in the HTTP header.

NX-API is disabled by default and can be enabled by using the feature manager CLI command. NX-API provides a session-
based cookie, nxapi_auth, when users first successfully authenticate. The session cookie expires in 600 seconds, which is a
fixed value that cannot be modified.

With the session cookie, the username and password are included in all subsequent NX-API requests that are sent to the
device. If the session cookie is not included with subsequent requests, another session cookie is required and is provided
by the authentication process.

The NX-API feature must be enabled via the CLI.

 Enable the feature via the command line.

 Identify the alternate port being used (if any).

 Identify an HTTPS certificate file to use.

 Enable the NX-API Sandbox.

Main attributes of the NX-API Sandbox:


 The NX-API Sandbox is available on the switch itself and is accessed via a web browser.

 There are helpful buttons available with commonly used built-in scripts.

 It is supported on all Cisco Nexus platforms.

The NX-API Sandbox is available on the switch itself and accessed via a web browser. There are helpful buttons available
with commonly used built-in scripts. It is a great way to become familiar with the API because you get to visualize the
JSON/XML objects coming back without having to write actual code.

The buttons on the right are used to select JSON-RPC, JSON, or XML encodings. Commands are entered in the Command
Box and executed with the Post button. The request being sent to the device is displayed in the Request box in the lower
left, and the response output from the device is displayed in the lower right.
When you are using JSON-RPC, you are always sending a list of JSON objects (dictionaries), even if it is a list of one (single
command). Here is an example of sending show version with the associated request and response. The response is similar
in that it is always a list of dictionaries for JSON-RPC.

Note

When you use JSON encoding, you receive a list in the response only when you send more than one command. JSON-RPC, on the other hand, always replies with a list of dictionaries.

Here is another example of sending a command but this time using the JSON format.
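
A rough Python sketch of the JSON (non-RPC) request format follows. The ins_api wrapper fields shown (version, type, chunk, sid, input, output_format) are patterned on the commonly documented NX-API CLI JSON body; verify the exact fields in the NX-API Sandbox for your release.

import json
import requests

url = "https://nxos.example.com/ins"
headers = {"content-type": "application/json"}

# The JSON format wraps the command in an "ins_api" object instead of JSON-RPC objects.
payload = {
    "ins_api": {
        "version": "1.0",
        "type": "cli_show",
        "chunk": "0",
        "sid": "1",
        "input": "show hostname",
        "output_format": "json",
    }
}

response = requests.post(
    url,
    data=json.dumps(payload),
    headers=headers,
    auth=("admin", "password"),
    verify=False,
)

# With a single command, the JSON response carries a dictionary rather than a list.
print(response.json()["ins_api"]["outputs"]["output"])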

Note

JSON sends a dictionary when sending one command but sends a list of dictionaries when sending more than one
command.
The NX-API Sandbox can convert CLI into Python automatically to help get you started. In the Request box, click the Python button. The sandbox generates a Python script that uses the requests module; the script can be copied into a .py file.

NX-API REST

NX-API REST is supported on Cisco Nexus 9000 and 3000 Series Switches starting with 7.0(3)I2(2). NX-API REST is the next-
generation API for the Cisco Nexus platform in that it supports sending and receiving objects in the API payload. If you
recall, NX-API CLI supports sending commands to the device while receiving structured data back (JSON, XML). With NX-API
REST, it is completely based on structured data. Therefore, JSON/XML payloads are sent to the device in the HTTP request
and received from the device in the response.

This list outlines the NX-API REST implementation:

 NX-API REST is an evolved version of NX-API CLI.

 Complete REST interface that brings model-driven programmability to standalone Cisco Nexus family switches.

 Configuration and state information of the switch is stored in a hierarchical tree structure that is known as the
MIT.

The implementation of NX-API REST is similar to the model that is used by Cisco ACI. All information about the switch,
including configuration and state data, is stored in a hierarchical tree called the Management Information Tree (MIT).
Every object in the tree can be directly accessed via the REST API.

Here are the main features of NX-API REST:

 Object instances are referred to as managed objects.

 Each managed object is an instance of a certain class.

 A unique distinguished name can identify every managed object in the system

 URLs and URIs map directly to distinguished names identifying objects on the tree

 Data can be encoded in XML or JSON

Every object in the MIT is referred to as a managed object. Each managed object is also an instance of a certain class. For example, Ethernet interfaces are of class l1PhysIf, and switch virtual interfaces (SVIs) are of class sviIf, but all interfaces are of type intf. These types are called classes.
It is important to understand the relationship between distinguished names and classes, because you can make an API call to the Cisco Nexus switch with NX-API REST using either a distinguished name-based query or a class-based query. For example, you can query a single interface (distinguished name) or query all interfaces of a given type (class).

NX-API REST operates in forgiving mode, which means that missing attributes are substituted with default values (if
applicable) that are maintained in the internal Data Management Engine (DME). The DME validates and rejects incorrect
attributes. The API is also atomic. If multiple managed objects are being configured and any cannot be configured, the API
stops its operation. It returns the configuration to its prior state, stops the API operation that listens for API requests, and
returns an error code.

URLs and URIs map directly to distinguished names identifying objects on the tree. Any data on the MIT can be described
as a self-contained structured text tree document encoded in XML or JSON.

NX-API REST supports three methods:

 GET: Used to retrieve and read information from the MIT.

 DELETE: Used to delete and remove an object from the MIT.

 POST: Used to create or update an object within the MIT. In NX-API REST, POSTs to the API are idempotent,
meaning that the change is made just once no matter how many times the API is called.

There is a dedicated API call that handles authentication. The device returns a token that is then sent in subsequent API calls that perform CRUD operations.

Regarding Content-Type and Accept, Cisco Nexus switches ignore them, because the payload format must be specified as a
file extension. All API calls using NX-API REST will have either ".json" or ".xml" appended to them. This notation lets the
switch know what the Content-Type is and how to respond—for example, with which encoding format.

The URL format that is used can be represented as follows:
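
Based on the building blocks listed next, the general form of the URL can be sketched as follows (the angle brackets denote values you supply):

http(s)://<system>/api/[mo | class]/[dn | class name].[xml | json]?[options]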

The various building blocks of the preceding URL are as follows:

 System: System identifier; an IP address or DNS-resolvable hostname

 mo | class: Indication of whether it is a managed object (tree-level) query or a class-level query

 class: Managed-object class (as specified in the information model) of the objects queried

 dn: Distinguished name (unique hierarchical name of the object in the MIT tree) of the object queried

 method: Optional indication of the method being invoked on the object; applies only to HTTP POST requests

 xml | json: Encoding format

 options: Query options, filters, and arguments

Here are GET method examples for NX-API REST:

 http://n9k/api/mo/sys/intf/phys-[eth2/5].json

 http://n9k/api/mo/sys/bgp/inst.json

The typical sequence of configuration is as follows (a Python sketch of the sequence appears after this list):

 Authenticate: Call https://<IP of Nexus switch>/api/mo/aaaLogin.xml with a payload in XML. This call returns a
cookie value that the browser uses for the next calls.

 Send an HTTP POST to apply the configuration: The URL of the POST message varies depending on the object. Here is an example: https://<IP of Nexus switch>/api/mo/sys/bgp/inst.json. In this URL, api indicates that the call is to the API, mo indicates that the call addresses a managed object, bgp/inst refers to the BGP instance, and .json indicates that the payload is in JSON format. If the end of the URL is .xml, the payload is in XML format.

 Verify the HTTP status code: You want a response of 200 OK. With the capability to address and access an
individual object or a class of objects with the REST URL, you can achieve complete programmatic access to the
entire object tree and to the entire system.
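
Here is a minimal Python sketch of that sequence. The switch address and credentials are placeholders, the login call uses the JSON form of the aaaLogin request with an assumed aaaUser body, and the bgpInst attribute payload is purely illustrative; check the exact attribute names against the object model of your NX-OS release.

import requests

switch = "https://nxos.example.com"
session = requests.Session()  # the session keeps the authentication cookie for later calls

# 1. Authenticate. The assumed JSON body mirrors the XML login payload described above.
login_body = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
resp = session.post(f"{switch}/api/mo/aaaLogin.json", json=login_body, verify=False)
resp.raise_for_status()

# 2. Send an HTTP POST to apply configuration to the BGP instance managed object.
#    The attribute set below (asn) is a hypothetical example.
bgp_body = {"bgpInst": {"attributes": {"asn": "65001"}}}
resp = session.post(f"{switch}/api/mo/sys/bgp/inst.json", json=bgp_body, verify=False)

# 3. Verify the HTTP status code; 200 OK indicates success.
print(resp.status_code)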

Here is an example of a managed object-based or distinguished name-based query. It is based on the URI being used:
/api/mo/sys/intf/phys-[eth2/5].json. You can also see that it is an HTTP GET, so you are only retrieving information with
this API call.

Remember that /api/mo signifies that it is a distinguished name-based query. Therefore, a specific object is being queried
—in this example, physical interface Ethernet2/5. You can also see in the response two keys that are used quite often. The
first is "totalCount" and this count is "1," which makes sense because there is only one Ethernet2/5 interface. All data in
NX-API responses is returned in the "imdata" key; "imdata" is a list of dictionaries (JSON objects), and in this case, it is a list
of one.
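
A hedged Python equivalent of this distinguished name-based query, reusing the switch variable and authenticated session from the earlier sketch, could look like this:

# Distinguished name-based query for a single object: physical interface Ethernet2/5.
resp = session.get(f"{switch}/api/mo/sys/intf/phys-[eth2/5].json", verify=False)
data = resp.json()

print(data["totalCount"])      # "1", because only one Ethernet2/5 interface exists
for mo in data["imdata"]:      # imdata is always a list of dictionaries
    print(mo)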

In this example, you can see that Postman is being used to make an HTTP POST request. This request is performing an
admin down (shutdown) of Interface Ethernet2/5.
You can see that /api/mo/sys/intf.json is the URI being used, and as such, it means that it is a distinguished name-based
query. But in this example, you can see that the parent object "intf" is specified in the URL, and the individual object being
modified is in the JSON payload.

You could also have used the full URI to specify Eth2/5 (/api/mo/sys/intf/phys-[eth2/5].json) and kept the same JSON body.
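
Continuing the same authenticated session, a rough Python equivalent of this POST might look like the following; the l1PhysIf attribute names (id, adminSt) are illustrative and should be confirmed against your platform's object model.

# Administratively shut down Ethernet2/5 by posting the child object to the parent intf URI.
payload = {
    "l1PhysIf": {
        "attributes": {"id": "eth2/5", "adminSt": "down"}
    }
}
resp = session.post(f"{switch}/api/mo/sys/intf.json", json=payload, verify=False)
print(resp.status_code)  # expect 200 OK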

In this example, Postman is being used to make an HTTP GET request. This request is in contrast to previous examples
because it is a class-based query. The URI being used here is /api/class/l1PhysIf.json. Notice how it starts with /api/class
instead of /api/mo.
All Ethernet interfaces are of type l1PhysIf, which means you can query the class to obtain information for all interfaces
(objects) within the class.
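
Continuing the same session, a class-based query in Python could look like this sketch; it retrieves every object of class l1PhysIf and prints a couple of assumed attributes.

# Class-based query: all physical Ethernet interfaces on the switch.
resp = session.get(f"{switch}/api/class/l1PhysIf.json", verify=False)
data = resp.json()

print(data["totalCount"])                    # number of interface objects returned
for item in data["imdata"]:
    attrs = item["l1PhysIf"]["attributes"]
    print(attrs["id"], attrs.get("adminSt"))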

Cisco nxtoolkit

Cisco has published a Python library called nxtoolkit, which is a set of modules that simplify getting started with Python to program against NX-API REST. The library can be used to perform common operations such as collecting information from Cisco Nexus devices, but also to make configuration changes. Using nxtoolkit eliminates the need to worry about the underlying URLs being used, because they are abstracted away from the developer and built into the library itself.

While nxtoolkit is a Python library that simplifies working with NX-API REST on Cisco Nexus switches, it also comes with a
collection of prebuilt scripts that perform common operational tasks. These prebuilt scripts can be used immediately to
eliminate the need to write any code. All that you need to do is enter specific device information such as credentials and IP
address information and execute the script.

Visore is a tool for NX-API REST that is built into each switch. It is a managed object browser. It allows you to browse and
navigate the MIT in real time and inspect the state of each and all objects. Visore is a great tool to understand and learn
about the relationship between all objects of the system. You can access Visore by navigating to
http(s)://<nexus>/visore.html and authenticating using standard device credentials.
The following figure shows a class-based query being made for the class rmonEtherStats. Because it is a class, it is going to
return interface stats for all associated interfaces. Notice how 63 objects are returned. Only the first one is shown in the
figure, but you can see the first one displayed also includes an attribute for its distinguished name.

Visore is helpful because it has two links in the center that say "Display URI of last query" and "Display last response." By
clicking "Display URI of last query," you get to see the URI, which for this API call was:
/api/node/class/rmonEtherStats.xml?query-target-filter=and(eq(rmonEtherStats.collisions,"0"))

Once you have the URI, you are able to make native REST calls using Postman or the equivalent. As you can see, the API
also supports filters, and using Visore simplifies learning how to use them by seeing the URIs generated automatically. This
API call queried the device for all objects of class type rmonEtherStats but then added a filter to only return those objects
that had 0 collisions.
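
The URI that Visore displays can be reused directly from a script. The sketch below continues the earlier session and requests the JSON form of the same filtered class query (swapping .xml for .json, as described later in this section).

# Filtered class-based query copied from Visore.
uri = '/api/node/class/rmonEtherStats.json?query-target-filter=and(eq(rmonEtherStats.collisions,"0"))'
resp = session.get(switch + uri, verify=False)
print(resp.json()["totalCount"])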

The previous figure was used to show how to make a class-based query for a specific interface class. This example builds on it by using the distinguished name provided in the API response from the class-based query. Because this query is based on the distinguished name, it specifically queries a single given object.

If you wanted to get the results in JSON, you could use the URI generated and add ".json" to it, as shown here:

/api/node/mo/sys/intf/phys-[eth1/3]/dbgEtherStats.json

This response would include the Ethernet statistics just for Ethernet1/3 because it is a distinguished name-based query.

This figure compares and contrasts class-based and distinguished name-based queries one more time.
You can query all switched virtual interfaces on the system with a class-based query (for example, /api/class/sviIf.json):

Notice that 10 objects were returned with this API call, which means that there were 10 SVIs on the switch when the API call was executed.

You can also query a single SVI such as "interface vlan200" with the following distinguished name-based query:

Note

The addition of /node in the resource path is optional when making API calls to the system.

Section 11: Deploying Applications

Introduction
When hearing the term "software engineering," most people think of writing code to solve various problems. But that is
only part of the job. In the past, this meant creating a special executable that enabled users to install software on their
machines. But in the world of web applications and cloud services, software vendors set up and upgrade software for
users. Providing the platform for running applications is also the responsibility of the vendor. Therefore, developers are
now more intimately involved with software operation and are writing tools to make it easier. With so many different tools
and ways to deploy and operate software, this has become very much a field of its own.

Application Deployment Types 

The word "server" was first mentioned in the 1960s during the study of a queuing theory that predicted one or more
waiting lines, or queues, in which jobs were waiting to be processed. An entity that processes jobs from a queue is called a
server.

The term "computer server" also dates back to 1960s to an early document describing the Advanced Research Projects
Agency Network (ARPANET), the predecessor of the Internet.

The purpose of a computer server is to serve or share data between different nodes. Today, multiple types of servers exist:

 Application servers

 Catalog servers

 Computing servers

 Database servers

 File servers

 Network servers

 Web servers

While "server" describes a computer program that is performing a task, a program must run on computer hardware, which
includes CPU, memory, I/O devices, and more. Hardware requirements for servers vary widely, depending on the purpose
of the server. Each server type has its own requirements, and a dedicated hardware platform is needed to maximize the
number of tasks that a server can perform. These hardware platforms are known as bare-metal servers.

A bare-metal server is a physical computer server with dedicated hardware and is dedicated to a single tenant. Bare-metal
servers are not shared between customers. Each server may be used by multiple users and may perform multiple tasks,
but it is dedicated entirely to a single customer.

Increasing demand for servers also increased hardware requirements, which changed with each new generation, and keeping up with these changes can be difficult for users. Another disadvantage was that each hardware server could perform only a specific task, which meant the hardware could not be repurposed for different server tasks.

To answer these problems, IBM started working on a system that would have:

 Multifunctional hardware to replace dedicated hardware

 Backward compatibility

The result was the CP-40 operating system. It was used only in the lab, but it laid the foundation for CP-67, a virtual machine operating system developed in 1967 for the IBM System/360 Model 67 mainframe. One of the most important features of this operating system was time sharing, which allowed multiple users to access the system at the same time. Time sharing was the first in a series of features that led to virtualization as it is known today.

Virtualization today mostly refers to hardware virtualization, the creation and maintenance of so-called virtual machines.

Virtual machines have these advantages:

 Snapshots: The state of a virtual machine can be recorded at any point in time.


 Migrations: A snapshot of a virtual machine can be moved to another host machine with its own hypervisor.

 Failover: A virtual machine can be run on any other host server if the primary server fails.

 Backup and restore: A snapshot of a virtual machine can be created at any time and restored when needed.

Virtual machines also have some disadvantages:

 Underutilized hardware resources: Some of the hardware resources are used for running hypervisors, so not all
the resources are available for virtual machines.

 Large resource usage: Each virtual machine runs its own operating system, which also decreases the amount of
the resources that are available for the server to perform tasks.

Modern virtualization platforms can decrease the amount of unused resources by dynamically reallocating them between
virtual machines based on current requirements. However, these techniques still do not allow virtual machines to directly
communicate with the physical resources. They must still communicate through the hypervisor.

To compensate for the disadvantages of virtual machines, container technology was developed. Container technology
removes the abstraction layer and directly uses the host operating system to provide an environment for multiple services
(containers) to run on the same physical server. Container technology has these advantages:

 Direct access to bare-metal hardware

 Optimal use of system resources

Container technology on bare-metal servers also shares advantages with virtual machines:

 Deploy applications inside portable environments

 Provide resource isolation between containers

However, container technology on bare-metal servers also has some disadvantages:

 Physical server replacement is difficult. When replacing a bare-metal server, the container environment must be
recreated from scratch on the new server.

 Container platforms do not support all hardware configurations.

 Bare-metal servers do not offer rollback features.

As already discussed, multiple deployment types are available, each with its own advantages and disadvantages. It is your choice to select the appropriate type based on the application needs and your company policies and processes.

Bare-Metal Servers

A bare-metal server is a physical server that is dedicated to a single tenant. It can run multiple applications, but the
resources are not shared with other tenants.

Bare-metal servers have these advantages:

 Performance: Physical server resources can be optimized for a specific workload.

 Security: Data, applications, and other resources are physically isolated.

 Reliability: Physical resources are dedicated to a specific workload and are not shared.

Virtual Machines

A virtual machine is an emulation of a computer system running on a shared host. Each virtual machine consists of its own
environment (including operating system, libraries, and applications) and is not aware of other virtual machines running on
the same physical host. Communication between applications inside a virtual machine and physical resources is through an
abstraction layer called a hypervisor. This abstraction layer is responsible both for resource allocation and isolation.
Virtualization consists of multiple layers:

 Host machine: A physical server that supports virtualization.

 Hypervisor: Computer software that runs on the host machine and manages the virtual machines, also known as
the virtualization layer.

 Virtual machine: A virtual computer that emulates a physical computer system. It has its own operating system as well as dedicated software to perform a specific task. Software that is executed on these virtual machines is separated from the underlying hardware resources.

Containers

Container technology uses host operating system features to provide an isolated environment for multiple applications to
run on the same server. An early example of the technology behind containers is the chroot command, which provides
isolation in UNIX-based operating systems. The chroot (change root) command allows users to change the root directory
for a running process, which makes it possible to isolate system processes into separate file systems without affecting the
global system environment. The chroot command was added to the seventh edition of the UNIX operating system in 1982.
The environment created with the chroot command is called a chroot jail.

In 2006, Google engineers started implementing process containers, later renamed control groups (cgroups), a feature that
allows isolation and limiting of the resource usage (CPU, memory, disk I/O, network, and so on) for a certain process. This
feature was merged into Linux kernel 2.6.24 in 2008.

Control groups provide:

 Resource limiting: Limit CPU, memory, disk I/O, and network for a specific group

 Prioritization: Some groups might have a larger share of resources than others.

 Accounting: Measurement of group resource usage

In 2008, the Linux Containers (LXC) technology was developed. LXC provides virtualization at the operating system level by
allowing multiple Linux environments to run on a shared Linux kernel, where each environment has its own process and
network space.

LXC relies on these operating system features:

 chroot

 Process and network space isolation

 cgroups

Container solutions are now also supported on Microsoft Windows and macOS.

The most widely used container solution today is Docker. Docker was released in 2013 and enables users to package containers so that they can be moved between environments. Docker initially relied on LXC technology, which was replaced by the libcontainer component in 2014.

Docker is popular because it contains more features than its predecessor, LXC:

 Portable deployments across machines: You can use Docker to create a single object (image) containing all your
bundled applications. The image can then be installed on any other Docker-enabled host.

 Versioning: Docker can track versions of containers, inspect differences between versions, and commit new
versions.

 Component reuse: Docker allows building and stacking of already created packages.


 Shared images: Anyone can upload new images to a public registry of Docker images.

Docker originated from a public cloud service called dotCloud and is used in many platform as a service (PaaS) offerings
today. PaaS provides a platform that enables customers to develop, run, and manage applications without the complexity
of building and maintaining the application infrastructure.
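
As an illustration of portable deployments, here is a small sketch that uses the Docker SDK for Python (the docker package). It assumes Docker Engine is installed and running locally and that pulling the public alpine image is acceptable in your environment.

import docker

# Connect to the local Docker Engine using environment defaults.
client = docker.from_env()

# Pull a small public image and run a one-off container from it.
client.images.pull("alpine", tag="latest")
output = client.containers.run("alpine:latest", ["echo", "hello from a container"], remove=True)
print(output.decode())

# The same image can be shipped to a registry and run on any other Docker-enabled host.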

Container solutions are now supported on Linux operating systems, Microsoft Windows, and macOS.

Application Deployment Models 

Cloud computing, often referred to simply as cloud, represents delivery of computing resources (servers, storage,
databases, software, networking, and more) over the Internet on a pay-as-you-go basis. Cloud computing has become
popular in recent years, but the history of cloud computing dates back to the 1950s, when large-scale mainframes were built to be used by different corporations and schools. The mainframe hardware was installed in a server room, and users were
able to access it via terminals, dedicated stations for accessing the mainframe. Mainframes were large and expensive, and
organizations could not afford a mainframe for each user, so it became common practice to allow multiple users to access
the same data storage from any station.

Cloud computing can be divided into three groups:

 Public clouds

 Private clouds

 Hybrid clouds

What Is a Public Cloud?

Public clouds are the most popular cloud deployment solution. Resources are owned by a third-party cloud service
provider and are physically located in the provider data center. No resources are located at the customer data center.
Physical and virtual resources in the provider data center also are shared between multiple organizations. Customers and
end users access these services over the Internet using a web browser.
Public clouds have multiple advantages:

 No capital expenditures: There is no need to purchase hardware or software; services are on a pay-as-you-go basis.

 No maintenance: Maintenance is provided by a cloud service provider.

 Scalability: Resources in the data center can be scaled easily.

 Reliability: High availability of services offers greater reliability.

Multiple vendors offer public cloud solutions:

 Amazon Web Services

 Microsoft Azure

 Google Cloud Platform

What Is a Private Cloud?

Resources of a private cloud are used exclusively by a single organization. They can be physically located at the data center
of an organization or in a cloud provider data center. Services in a private cloud are always maintained on a private network, and the hardware and software are dedicated to a single organization. Private clouds often are used by financial institutions, government agencies, and other organizations with business-critical operations.

Private clouds have multiple advantages:

 Flexibility: Resources are dedicated only to your organization, so you can customize the environment based on
your business needs.

 Security: A higher level of security is available because resources are not shared with other organizations.

 Scalability: Private clouds can still offer much of the scalability of public clouds.

What Is a Hybrid Cloud?

A hybrid cloud is a combination of a public and private cloud, often referred to as “the best of both clouds.” Hybrid clouds
combine on-premises infrastructure (private cloud) with public infrastructure (public cloud). In a hybrid cloud, data can be
moved between a private and public cloud for greater flexibility. An organization can use a public cloud for low-security
use cases and a private cloud for high-security use cases. Hybrid clouds also offer so-called “cloud bursting,” which enables
organizations to expand services from a private to a public cloud in high-demand situations.

Hybrid clouds have multiple advantages:

 Cost effectiveness: With the scalability available in public clouds, you can pay additional fees only when needed.

 Flexibility: You can use a public cloud for additional resources only when needed.

 Control: Sensitive data can be kept in a private cloud, while other data can be kept in a public cloud.

Most vendors that offer public and private clouds also offer hybrid cloud deployment models.

Application Deployment Options

Most cloud computing services belong to one of these categories:

 Infrastructure as a service (IaaS)

 PaaS

 Software as a service (SaaS)


IaaS is the most basic category of cloud computing services. You rent the IT infrastructure (network devices, storage
systems, servers and virtual machines) from a cloud provider.

PaaS refers to an on-demand environment for developing and testing software applications. It is designed to save time for
developers when creating and deploying web or mobile applications.

SaaS is a way of delivering software applications over the Internet, typically on a subscription basis. End users are only
provided an application, while service providers manage the underlying infrastructure and the application.

Edge Computing

Edge computing is a network solution that brings computing resources as close to the source of data as possible to reduce
latency and bandwidth use. Today, more applications are moving to the cloud, and multiple clouds are being deployed.
The increased number of endpoints dramatically increases the volumes of data that need to be processed, and
transporting the data to central locations for processing becomes expensive. At the same time, users want high-quality
experiences, best possible application performance, and security across data. To solve these issues, a new service
architecture is being introduced: edge computing, which is based on distributing computing capacity to the edge of the
network.

Edge computing focuses on:

 Lowering the latency between the end user device and a processing and storage unit to get better performance

 Implementing edge offloading for greater network efficiency

 Performing computations closer to the data source to reduce transport costs

Two examples of using edge computing include:

 Radio access network

 5G Core network
Edge Computing Overview 

A new, more sustainable approach to networking is necessary, with an architecture that is more open and places
computing capacity in the best location for a set of services. The end user experience drives the perceived value of your
services and is directly related to how the network performs and how the required latency is achieved.

Low latency creates a good user experience for many types of services. However, low latency is not equivalent to close
proximity. A properly designed IP network supports both low latency and optimal economics. In the enterprise
environment, the location of an enterprise might be the optimal location for edge computing. Potential locations include
metro data centers and repurposed central offices or public exchanges, but not cell sites.

Edge computing is a distributed network architecture that moves your compute, storage, communication, control, and
decision making closer to the network edge or where the data is being produced to mitigate the limitations in the current
infrastructure.

You do not necessarily have to do all tasks at the edge of your network, but there are some good use cases for this
approach:

 Bandwidth reduction: The cost that is incurred in sending large quantities of data can be reduced.

 Filtering: Filtering allows you to capture and transport only the relevant data flows.

 Latency optimization: Some types of data are sensitive to latency and require more real-time data flows.

 Partitioning: Partitioning helps to balance and allocate resources across the network.

 Simplified applications: Simplified applications help normalize data and the data organization process.

 Dynamic changes: Data can be redirected based on content and priority.

 Analytic support: Data can be used for analytics and higher-level systems.

 Network efficiency: Edge computing allows you to use the network more efficiently.

Three major architectural shifts underpin the emergence of the edge computing network infrastructure:

 Decomposition: Network functions are separated (control/signaling and user/data) for optimization of resources.
 Disaggregation into software and hardware: Software-centric solutions use off-the-shelf or white-box hardware,
which can be procured separately.

 Infrastructure convergence: Fixed and mobile networks share a common 5G Core (5GC)-based infrastructure for
efficient operational practices.

The 5G system promotes the emergence of an edge infrastructure that combines decomposed subscriber management
from a converged core with the data plane of a wireline access node—for example, DSL access multiplexer/optical line
terminal (DSLAM/OLT)—as well as upper layers of the Third Generation Partnership Project (3GPP) radio stack. Edge
computing use cases are driven by the need to optimize infrastructure through offloading, better radio, and more
bandwidth to fixed and mobile subscribers.

Some organizations are testing edge computing at the cell site itself. At first glance, this approach might appear reasonable
because it puts computing as close as possible to the mobile subscribers. However, several issues result:

 It is operationally complex because of the typically large number of cell sites.

 It is expensive because of enclosures, power, and heating, ventilation, and air conditioning (HVAC) needs.
Specialized servers may be needed instead of tapping mass-scale production servers.

 New trends in radio are for leaner cell-site architectures composed primarily of lean elements such as remote
radio heads.

Note

Cloud radio access networks (C-RANs) do not have packet awareness at the cell site.

Instead of focusing on proximity, you can focus on addressing latency requirements. A good IP design can cure latency
issues between a centralized metro location and the cell site. The economics are important for the location of the edge in
edge computing. You need to consider capital expenditures (CAPEX) and operating expenditures (OPEX) to ensure a good
IP network design. An edge server that is closer to the cell site means less IP network growth but more cost (OPEX and
CAPEX) for the edge servers because of the larger number of locations that need to be supported. An edge server that is
located farther from the cell site means that the operator must deploy more IP network capacity. However, it can lower
edge server costs due to the economies of scale of the centralized metro location. The higher network capacity is easily
manageable through priority queues on latency-sensitive traffic.

To determine the location of the edge computing node, consider these questions:

 Can the most efficient location be the customer premises equipment (CPE)?

 Can the most efficient location be on the customer premises?

 If the edge location is optimally in the network, is there enough low-latency queuing to the endpoint device?

 For mobile access use cases, can the locations be mapped to a present or future C-RAN central unit location?
DevOps Practices and Principles 

DevOps is a change in culture and process that emphasizes increased collaboration between teams such as software development, IT operations, and other services (for example, quality assurance). Though DevOps has primarily focused on development and systems, its principles solve real-world problems that network operators struggle with daily as well.

Software developers need to write, test, deploy, and improve their product as quickly and efficiently as possible, but traditionally, deploying these new services hits a bottleneck because the infrastructure requires configuration changes to accommodate the new service. Sometimes, however, the configuration changes are very minor, and operations risk-mitigation measures can still slow deployment to a crawl.

DevOps merges the most effective principles of successful software development with those of effective operations while
having increased communication between all teams involved. This new model breaks down silos among teams, reduces
waste, and increases business agility and product reliability.

What is DevOps?

 A change in operational approach and mindset

 A cultural movement

What are some DevOps principles?

 Iterative

 Incremental

 Continuous

 Automated

 Self-service

 Collaborative

 Holistic

Here are some DevOps principles and their characteristics:

 Iterative: This principle breaks the DevOps working process into smaller pieces, which allows tests to be included in the early stages and helps with faster error checks.

 Incremental: Projects need to be developed in small and rapid incremental cycles.

 Continuous: Merge the development (testing) and deployment into a single improved and simpler process.
 Automated: Everything that can be automated should be automated. This adds speed and precision to the
process.

 Self-service: Every IT engineer should have the same development environment to develop and test projects.

 Collaborative: The DevOps team needs to be united, work together, and help each other during the entire DevOps
life cycle.

 Holistic: The DevOps process needs to be treated as a whole process, rather than just a couple of smaller tasks.

The term "DevOps" is quite common in the IT world now. However, defining DevOps can be very difficult because of the
somewhat ambiguous nature of its philosophies, goals, and role in the software development life cycle. One way to
demystify the DevOps paradigm is to investigate the characteristics of organizations that have adopted the DevOps model.
Examining how these organizations are structured, how they produce new applications, and how they provide continual
improvement is very helpful in narrowing down a clear definition.

Some primary characteristics of these organizations are:

 Embracing new technologies

 Embracing a collaborative culture

 Maintaining a well-defined common goal among teams

It is also important to note what DevOps is not. DevOps is not hardware that can be purchased or a piece of software that
can be installed. Though there are software tools that are often used within a DevOps culture, organizations that embrace
DevOps are embracing just that—a new culture.

For a movement to be long-lasting and successful, it needs to clearly outline its principles and practices. Successful DevOps teams need to keep focusing on those principles and practices rather than just focusing on the infrastructure and tools.

An organization that wants to implement DevOps should compare its current processes with the DevOps principles and see where DevOps practices can add value.

Gap Between Dev and Ops

To understand DevOps fully, you need to first understand the origins of DevOps. One of the primary reasons for this
change in operational approach and mindset was the big gap between development and operations teams.

Developers care about writing software. They care about writing software that is high quality and meets customer
expectations. They care about application programming interfaces (APIs), libraries, and code. Success is determined by
whether the software does the job and meets expectations, and, of course, if it was done on time. However, developers
traditionally do not pay as much attention to what happens after the software application goes into a production data
center. This issue creates the problem between development and operations.

Operations cares about software standards and about stability. The operations department has rigid change management
windows that are used for rolling out new software that is written by developers. Success for operations is a stable
environment. The drivers and the definition of success are clearly different between developers and operations.
Furthermore, as the business continues to drive the development of new applications, developers continue to write the
software under their strict deadlines, but the operations department often finds that it is very difficult to roll out new
software because of the backlog of change requests.
This figure shows how the "wall of confusion" inadvertently creates silos within an IT organization. One of the primary
goals of DevOps is to break down silos and enhance communication between teams. This goal is extremely important for
organizations that are seeking to deploy software and services faster and more frequently.

The problem lies in the fact that development and operations are often in completely different, isolated parts of an
organization.

Development is trying to serve the business through features and software, and operations is trying to serve the business
through stability. The problem is having both work together successfully.

Practicing DevOps

DevOps professionals have come up with many practices and principles. Whatever these practices and principles are in the end, they should all help an IT organization deliver high-quality software and applications to end users.

The core practices are:

 Automated provisioning

 Automated release management

 Continuous build

 Continuous delivery

 Continuous integration

 Incremental testing

 Self-service configuration

DevOps uses a practice called automated provisioning to boost the productivity and stability of existing operations by utilizing enablers like containerization, software-defined networking (SDN), Network Functions Virtualization (NFV), and much more. This practice helps save time and decreases errors when deploying services with automated procedures that do not need human intervention.

To plan, manage, schedule, and control software builds in different stages and environments, development teams use the release management process. The goal is to automate whatever you can automate, so DevOps should automate this process as well.

Continuous build is the first step in the continuous integration and continuous deployment (CI/CD) process. It is an
automated build on every check-in, but it does not prove functional integration of the code that has been added or
updated yet.

The next step is continuous integration (CI), which includes the first step (continuous build) and adds end-to-end
integration tests and unit tests to prove the integration of the added code. Each change in code is built and checked by
tests and other verifications to detect any integration errors as quickly as possible. The best practice is to implement small changes in code and to commit them to version control repositories frequently.

Continuous delivery (CD) is the final step in this ongoing process. It automates the delivery of software and applications to chosen infrastructure environments. DevOps teams mostly work in environments that are different from production (for example, development and testing environments). CD provides an automated way to push the code changes to production.

Incremental testing is one of many integration testing approaches. It has the advantage that errors are found early and in a smaller collection of code, which makes it easier to detect the root cause of the errors. A test is done after each of the steps. There are many ways to approach this practice (top-down, bottom-up, functional incremental, and so on).

Self-service configuration in DevOps permits developers to deploy applications by themselves at their own pace. This task
was previously performed by the IT team. This approach reduces operational costs, increases efficiency, accelerates
releases, and reduces the cycle times.

DevOps Challenges and Solutions

When development and operations teams do not work together, they become isolated from each other and sometimes even from the rest of the organization. This isolation creates an unproductive environment for the implementation of new applications.
Implementing DevOps to resolve the gap between Dev and Ops is the right idea. However, shifting to DevOps comes with
challenges of its own.

Organizations will probably deploy projects, code, and so on, using the DevOps approach, onto existing infrastructure populated with some legacy code. To avoid the manual alternatives of provisioning and modeling, organizations need to provide end users with self-service access to both legacy and new applications in the form of a cloud sandbox platform. This approach ensures that every end user has the same clean environment to work in.

Secure applications and software are critical. A frequent problem in IT organizations is that security is usually addressed at
runtime instead of addressing it as a part of the whole DevOps workflow. It becomes a secondary component of the
DevOps life cycle. Often, organizations put security in second place because of the delivery time and a shortage of people
working on a project. Whatever the reason may be, organizations should not overlook security, regardless of the situation.
The security team should be engaged early in the development and be present at every step of the DevOps workflow. This
is the only way to ensure the application or software is secure.

Documentation is one of the least favorite tasks for IT engineers, but when implemented properly, it can be of great value.
Source code, design, and user documentation need to be consistent on every platform. A solution can be provided in the
form of web pages where documentation is regularly updated and maintained and is accessible to everyone working on
the project. Writing documentation also can be automated with the right kind of tools. DevOps-based documentation
needs to provide all DevOps teams with a familiar and confident source of information about the project.

Sometimes, organizations implement multiple tools at once, just because they are new. This is also known as the Shiny
Object Syndrome. Although new tools can appear very useful at first, they still need to be chosen carefully. Therefore,
DevOps tools should be adopted only if and when they are needed. Proper tool management can be very helpful. Adopt
only the tools that you need for a specific project, and always keep the focus on your team and goals over the new and
shiny tools.

DevOps teams sometimes have issues with executive support. Properly structured teams can function by themselves
(almost) unassisted, but they still require help from management, particularly in the early parts of the DevOps life cycle.
Sometimes, teams cannot obtain the right tools, get proper training, or even get adequate support from the leadership.
Changes in structure must happen at all positions and include enhanced levels of communication and commitment from
everyone. Because there is no single formula to implement DevOps at an organization, management needs to discover the
best strategic plan for their team.
Components of a CI/CD Pipeline 

DevOps team automation reaches its peak in the application and software deployment process. Since the start of the Agile movement in the IT industry, organizations have been challenged to develop and deploy secure and reliable applications faster. A task that was once done manually, and could provide only a couple of releases a year at most, is now being automated to provide monthly releases.

The philosophy of a CI/CD pipeline is quite straightforward. A CI/CD pipeline provides a single source of truth for software
and applications that is accessible to all those working on a given project. Code changes, test logs, and deployment logs are
also available for examination, which enables faster feedback. A properly implemented CI/CD pipeline also can be very
useful for onboarding junior and new DevOps engineers.

With a CI/CD pipeline, you can automate every step in your application delivery process.

Besides automating the process, the goal of CI/CD is to give the DevOps teams insight into multiple crucial components
and steps of the development and deployment process, such as source code, static code analytics, unit tests, integration
tests, packaging, and deploying.

A CI/CD pipeline consists of:

 Source code repository

 Build stage (source code, dependencies, and static code analytics)

 Test stage (unit and integration tests [end-to-end tests])


 Deploy stage (packaging, staging, and production)

The pipeline can be triggered manually, on a schedule, or automatically by a commit from a DevOps engineer. The source code repository notifies the CI/CD tool, which then runs the matching pipeline.

A DevOps engineer commits new (or updated) code with the help of Git, which then triggers a CI/CD pipeline. Git is the most commonly used modern version control system. It has a distributed architecture, instead of a single central copy of the full version history of the software (as was the case with older version control systems like Apache Subversion [SVN]).

A runnable instance is built during the build stage from the source code, along with all the dependencies that the code needs. Code written in statically typed programming languages (such as C, C++, Java, or Go) needs to be compiled first. Code written in dynamically typed programming languages (such as Python, PHP, and JavaScript) does not require this step. Before the code is first executed, a code analytics tool such as a linter analyzes the source code to flag any errors, bugs, stylistic problems, and suspicious constructs. This process is called static code analytics. If the build stage does not pass for any reason, the core configuration of the project needs to be addressed immediately.

The next steps in the pipeline process are the test and deploy stages.

To inspect the behavior of the software, code needs to be validated with different tests. Depending on the scale of the
project, the tests can be run either in a single stage or in multiple stages. The first tests should be unit tests, although the
developers should test their code as much as they can before they commit it. Unit tests are faster and demand less
resources than integration tests. Execute end-to-end tests only after the unit tests are successfully completed. It is
important that this stage provides useful and fast feedback. This feedback ensures that the DevOps team is more efficient
and needs less time to find any bugs and errors. Note that unit and end-to-end testing also can run in parallel if they do
not consume too much time and resources.
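
As a concrete illustration of the test stage, here is a minimal unit test file written with pytest, a common Python testing tool. The add function and the file name are hypothetical; a CI job would typically run python -m pytest on every commit and fail the pipeline if any assertion fails.

# test_calculator.py: a minimal test file that pytest discovers in the test stage.

def add(a, b):
    # In a real project, this function would live in the application code, not in the test file.
    return a + b


def test_add_positive_numbers():
    assert add(2, 3) == 5


def test_add_negative_numbers():
    assert add(-2, -3) == -5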

The deploy stage begins after the pipeline has built a runnable instance of the software that has passed all tests. There can
be many different deploy environments, but most pipelines consist of just two:

 Staging server: Used by the DevOps team internally

 Production server: Used by end users or customers

The staging environment is designed to be as close to the real-life production environment as possible. This characteristic enables DevOps teams to deploy work-in-progress applications or software to staging for additional tests and reviews. The production environment is for the final working product only and should contain only that.

Before deployment is finished, the project can be packaged. For example, a Docker image can be built and pushed to a Docker registry, or for open source projects, the whole project can be zipped into a single file for easier downloading. Packaging can be done in multiple stages, from build to deploy, depending on the project and its needs.

The deployment to both environments can be automated, but because the production server needs to be handled with
extreme caution, many DevOps teams choose to deploy to production manually.

Good practices for a CI/CD pipeline:


 Always use the same environment.

 The master repository should hold only up-to-date, documented, and working code.

 Use code reviews to merge requests to the test and master repositories.

 Developers should maintain separate repositories.

A good CI/CD pipeline should take the pressure of development and deployment off DevOps teams, not further complicate the process. That is why it is important that the pipeline is fast, reliable, and implemented with the right tools and infrastructure.

Every developer should have a separate repository for development. Not all commits to all repositories need to trigger a
CI/CD pipeline—although this is something to strive for—but it is good practice to have at least a test repository in
addition to the master repository, which can also trigger the pipeline.

Merge requests and code reviews need to be done for commits to the test and master repository. This helps catch errors
and bugs even before the pipeline is triggered, and it is usually done by senior engineers and team leads. Once the code is
reviewed and merged, the merge request is closed. A test repository can contain some work-in-progress code, but the
master repository needs to hold only up-to-date, documented, and working code.

A pipeline run should not modify the environment in which the next pipeline will run. Each pipeline run needs to start from
the exact same, clean environment.

CI/CD Pipeline Configuration

There are many tools that can help you implement a CI/CD pipeline. Tools based on Git (GitLab, GitHub, and Bitbucket) provide you with a code repository as well as the CI/CD pipeline, while tools like Jenkins and CircleCI do not provide software repositories; they rely on code repositories hosted somewhere else to trigger the pipeline with the help of APIs.

Two of the most common CI/CD pipeline tools are Jenkins and GitLab.

CI/CD Pipeline Tools: Jenkins

In the following figure, you can see a successful CI/CD pipeline configured on Jenkins. The figure is taken from the Blue
Ocean plug-in, which can be installed on Jenkins.
In the figure, you can see a Python unit test done (on Jenkins) in the Test stage and configured with the Jenkinsfile.

Jenkins is an open source automation server with many plug-ins for the CI/CD pipeline. Jenkins uses a file called a Jenkinsfile for its configuration. Jenkinsfiles are written in a programming language called Apache Groovy. Groovy takes the syntax of Java and combines it with features of Python (and other scripting languages), such as defining a variable without specifying a type.

The main difference between Jenkins and GitLab is that in Jenkins, extensions of native functionality are done with the help of plug-ins, which usually translates to expensive maintenance. GitLab, on the other hand, is open-core, which means that any changes to the GitLab codebase are automatically tested and maintained.

CI/CD Pipeline Tools: GitLab

GitLab also provides a pipeline as code, which is an upgrade from the GUI type of pipeline. This approach gives DevOps
teams more freedom in the pipeline process. It also makes rollbacks much easier, and it provides a built-in lint tool that
makes sure the YAML file is valid, as well as version control and audit trails.

Example of a simple GitLab pipeline:

A GitLab pipeline consists of:

 Commit: A change in the code

 Job: Runner instructions

 Pipeline: A group of jobs divided into different stages

 Runner: A server or agent that executes each job separately and can spin up or down as needed

 Stages: Parts of a pipeline (for example, build or test). Multiple jobs inside the same stage are executed in parallel.

Whenever a build fails at any step of the pipeline, a notification mail is automatically sent to the committer.
Essential Bash Commands for Development and Operations 

DevOps, software, and other IT engineers spend a lot of time using the terminal, and they need to be comfortable with
shell scripts.

A shell is a special-purpose program (also known as a command interpreter) that gives an interface to the operating
system. Bourne Again Shell (Bash) is a shell that includes many useful features from other UNIX shells and some additional
extensions. It was written as a reimplementation of the Bourne shell (sh), which is the oldest of the widely known shells.
Shells are implemented for interactive use and for the construction of shell scripts, which are text files that contain shell commands. This design enables the operating system to read commands from the CLI or from a file.

Bash is the default shell for many UNIX-like operating systems, such as CentOS and Ubuntu, and conforms to the IEEE Portable Operating System Interface (POSIX) P1003.2/ISO 9945.2 Shell and Tools standard.

There are many ways you can communicate with or manipulate a computer. Most people are familiar with desktop
environments like Windows, Gnome for Linux, or Aqua for macOS. These provide everyday users with an easy-to-navigate
graphical experience of the operating system.

As an IT engineer, on the other hand, you will spend most of the time using a terminal. Everything that you can do with a
Linux GUI can be done with the terminal faster and more efficiently once you become familiar with the CLI.

Bash provides the CLI for UNIX-like operating systems. It is both an interactive command language and a scripting language
that contains mechanisms found in other computer programming languages such as:

 Loops

 Variables

 Conditional statements

 Functions

 I/O commands

Bash is simple to use and requires little training to get started.

Getting Started with Bash


The following example shows two ways to create a new folder, also known as a directory. You can create a new directory
with Bash commands in the terminal (CLI) or with a couple of clicks in the GUI.

If you would like to use the CLI terminal, the following figure illustrates the path that you will take.

However, if you would like to take a GUI-based approach, refer to the following figures.
To get started with IT engineering, you need to familiarize yourself with the Bash syntax and some basic commands. A Bash command line always starts with the command name (such as mkdir), followed by arguments (such as a folder name). Flags may also be used.

Example:

To view the manual page (how-to) for a specific command, type man <command> to learn more about how to use a given command and its flags.
You can quit the manual by pressing q.

In addition to writing commands directly into the CLI, you can also write a Bash script as a file and then execute the commands from the file itself. To create or edit a script file, you can use vi, the default editor for files in Linux. Create the file by typing vi <filename>. The file extension for shell scripts is .sh.

Every Bash shell script starts with a shebang (#!). The shebang is used in scripts to signify an interpreter for execution in the UNIX-like operating system. After the shebang comes the absolute path to the Bash interpreter (/bin/bash). This line makes sure that Bash will be used to interpret the script even when it is executed under another shell.

Example of a Bash file:
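The following is a small illustrative script; the filename bashfile.sh and its contents are arbitrary:

#!/bin/bash
# Print a greeting and the current working directory
echo "Hello from a Bash script"
pwd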

There are two ways to execute a Bash script file:

 Make the file executable by typing chmod +x <filename>. Note that you need to be the file owner (or have sufficient permissions) to make the file executable. Then execute the file by typing ./<filename>.

 Run the script through the shell directly with sh <filename> (for example, sh bashfile.sh).
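For example, assuming the script above was saved as bashfile.sh, the two approaches look like this:

chmod +x bashfile.sh   # make the script executable (you must own the file)
./bashfile.sh          # run it directly
sh bashfile.sh         # or let sh interpret the file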

File Management with Bash

You can create/make (touch), remove (rm), or copy (cp) files.

Before you move on with any kind of file manipulation, you need to get familiar with file permissions inside the UNIX-like
operating system.

Permissions are based on two factors:

 Permissions that are assigned to a specific user and a group

 Permissions that are assigned to a specific action (read, write, execute)

You can create directories with the mkdir command (short for make directory). To create an empty file, type touch <filename>. The filename extension is a naming convention that indicates the intended contents; for example, filename.txt suggests a text file and filename.py a Python file. Type cp <filename> <newfilename> to copy a file inside the same directory. To remove a file, type rm <filename>.
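A short sequence tying these commands together (the filenames are arbitrary) might look like this:

touch notes.txt                 # create an empty file
cp notes.txt notes_backup.txt   # copy it within the same directory
ls -l                           # list files with their permissions, owner, and group
rm notes_backup.txt             # remove the copy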
Directory Navigation with Bash

A directory is a container or folder that contains files or other directories. A file system is the hierarchy of directories that
are known as the directory tree. The pwd command displays your current working directory, or in other words, the
directory you are working in at a given time.

This figure is a partial file system tree for Ubuntu.

To better navigate through directories and files, you need to understand the hierarchical layout of directories inside the UNIX-like operating system. The root directory sits on top of the whole structure. All other directories and files are kept under the root directory. You can see in the previous figure that the Desktop directory is inside the Cisco directory, just under the home directory.

The cd (change directory) command takes a directory name as an argument. If you start inside the Desktop directory and want to move into a folder that is inside your current directory, type cd foldername to change into that folder.

Another useful command is ls. It lists information about files (the current directory by default) and sorts entries alphabetically if the --sort flag is not specified. With the -l flag, you can view the permissions, the owner of each file, and the group that it belongs to.

There are two ways to specify the path to a directory:

 Relative path

 Absolute path

The absolute path is the address that is relative to the root directory and begins with a slash (/). The relative path is the
address that is relative to the current, or working, directory and does not begin with the slash (/).
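For example, assuming your home directory is /home/student and it contains a Desktop directory, the same location can be reached with either kind of path:

cd /home/student/Desktop   # absolute path (begins with /)
cd ..                      # relative path: move up to the parent directory (/home/student)
cd Desktop                 # relative path: move back into Desktop
pwd                        # display the current working directory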

Environmental Variables with Bash

The Bash environment is made up of VARIABLE=VALUE entries. To inspect the current environment, with all its variables, type env. Without any arguments, the env command simply prints the variables of the current environment; when given a command as an argument, it runs that command in a modified environment without changing the current one.

Output of the env command:

 Type of shell (for example, Bash)

 Current user

 Default paths

By default, any variables that are created in a parent process are not available to the child process. You will need to use
the export command for that to be possible. The export command ensures that exported variables in the parent process
can be used in the child process (or subshells for that matter).
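A quick illustration (the variable name and value are arbitrary):

my_var=hello    # visible only in the current shell
export my_var   # now also available to child processes
bash            # start a child shell
echo $my_var    # prints hello because the variable was exported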
Utilize Bash Commands for Local Development 

Learn some basic Bash commands and syntax, and familiarize yourself with the Linux terminal (CLI). You will learn how to
navigate the file system, manage files, and configure a local development environment with required environmental
variables.

Navigate File System

In this task, you will navigate the file system, create files, and create a directory.

Step 1

Open the terminal with a right-click on the mouse anywhere on your desktop.

Step 2

Use the pwd command to identify your current working directory.

Answer

student@student-workstation:~$ pwd

/home/student

Step 3

Use the env command to view variables of the current environment.

Answer

student@student-workstation:~$ env

<… output omitted …>

LESSCLOSE=/usr/bin/lesspipe %s %s

XDG_MENU_PREFIX=gnome-

LANG=en_US.UTF-8

DISPLAY=:0

GNOME_SHELL_SESSION_MODE=ubuntu
COLORTERM=truecolor

USERNAME=student

JAVA_HOME=/usr/java/jre1.8.0_181

XDG_VTNR=1

SSH_AUTH_SOCK=/run/user/1000/keyring/ssh

MANDATORY_PATH=/usr/share/gconf/ubuntu.mandatory.path

XDG_SESSION_ID=1

USER=student

DESKTOP_SESSION=ubuntu

QT4_IM_MODULE=xim

TEXTDOMAINDIR=/usr/share/locale/

GNOME_TERMINAL_SCREEN=/org/gnome/Terminal/screen/663b45e8_c345_4faa_9974_ec4a9f89640d

DEFAULTS_PATH=/usr/share/gconf/ubuntu.default.path

PWD=/home/student

HOME=/home/student

TEXTDOMAIN=im-config

SSH_AGENT_PID=2188

QT_ACCESSIBILITY=1

XDG_SESSION_TYPE=x11

XDG_DATA_DIRS=/usr/share/ubuntu:/usr/local/share:/usr/share:/var/lib/snapd/desktop

XDG_SESSION_DESKTOP=ubuntu

GTK_MODULES=gail:atk-bridge

WINDOWPATH=1

TERM=xterm-256color

SHELL=/bin/bash

VTE_VERSION=5202

QT_IM_MODULE=ibus

XMODIFIERS=@im=ibus

IM_CONFIG_PHASE=2

XDG_CURRENT_DESKTOP=ubuntu:GNOME

GPG_AGENT_INFO=/run/user/1000/gnupg/S.gpg-agent:0:1

GNOME_TERMINAL_SERVICE=:1.91

XDG_SEAT=seat0

SHLVL=1

GDMSESSION=ubuntu
GNOME_DESKTOP_SESSION_ID=this-is-deprecated

LOGNAME=student

DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus

XDG_RUNTIME_DIR=/run/user/1000

XAUTHORITY=/run/user/1000/gdm/Xauthority

XDG_CONFIG_DIRS=/etc/xdg/xdg-ubuntu:/etc/xdg

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/java/
jre1.8.0_181/bin

<… output omitted …>

Step 4

Use the ls command to list all files inside the current directory.

Answer

student@student-workstation:~$ ls

Desktop Documents Postman Templates

dns-cap3.pcap Downloads Public Videos

dns-cap4.pcap examples.desktop scripts working_directory

dns-test2.pcap Music set_resolution.sh

dns-test.pcap Pictures Student_Files_linux

Step 5

Go to the desktop directory.

Answer

student@student-workstation:~$ cd Desktop/

student@student-workstation:~/Desktop$

Step 6

Create a new directory named Student-folder, and navigate to the Student-folder directory with the help of the && (AND)
operator.

Answer

student@student-workstation:~/Desktop$ mkdir Student-folder && cd Student-folder

student@student-workstation:~/Desktop/Student-folder/$

Step 7

Navigate to your home directory.

Answer

student@student-workstation:~/Desktop/Student-folder/$ cd ../../

student@student-workstation:~$

Note
There are always at least two entries inside a directory: . (dot), which points to the directory itself, and .. (dot dot), which
points to its parent directory—the directory above the current directory in the file system hierarchy. Each directory has a
parent directory except for the root directory, which points to itself with both . (dot) and .. (dot dot) and has no parent
directory.

Manage Files and Variables

In this task, you will manage files, directories, and environmental variables. You will learn how to export newly created
variables and how to prepare your own working environment.

Step 8

Navigate to the Student-folder directory.

Answer

student@student-workstation:~$ cd Desktop/Student-folder/

student@student-workstation:~/Desktop/Student-folder/$

Step 9

Check the full path to the Student-folder with the help of the pwd command.

Answer

student@student-workstation:~/Desktop/Student-folder/$ pwd

/home/student/Desktop/Student-folder

Step 10

Create a new variable named home_pwd, and assign the full Student-folder path to the variable.

Answer

student@student-workstation:~/Desktop/Student-folder/$ home_pwd=/home/student/Desktop/Student-folder/

Step 11

Open a new shell with the bash command.

Answer

student@student-workstation:~/Desktop/Student-folder/$ bash

Step 12

Use the echo command to print the home_pwd variable.

Answer

student@student-workstation:~/Desktop/Student-folder/$ echo $home_pwd

student@student-workstation:~/Desktop/Student-folder/$

Note

Variables that are created in a parent process are not available to the child process (or subshells) by default. For that to be
possible, you will need to export them.

Step 13

Create a new variable named home_pwd, and assign the full path to the Student-folder to the variable.

Answer

student@student-workstation:~/Desktop/Student-folder/$ home_pwd=/home/student/Desktop/Student-folder/
Step 14

Export the home_pwd variable.

Answer

student@student-workstation:~/Desktop/Student-folder/$ export home_pwd

Step 15

Again, open a new shell with the bash command.

Answer

student@student-workstation:~/Desktop/Student-folder/$ bash

Step 16

Print the exported home_pwd variable.

Answer

student@student-workstation:~/Desktop/Student-folder/$ echo $home_pwd

/home/student/Desktop/Student-folder/

Step 17

Use the touch command to create two empty text files named file1.txt and file2.txt inside the current directory.

Answer

student@student-workstation:~/Desktop/Student-folder/$ touch file1.txt file2.txt

Step 18

List all the files inside the Student-folder to check if your two files have been created.

Answer

student@student-workstation:~/Desktop/Student-folder/$ ls

file1.txt file2.txt

Step 19

Once again, navigate to your home directory.

Answer

student@student-workstation:~/Desktop/Student-folder/$ cd ../../

student@student-workstation:~$

Step 20

Copy file1.txt to your current working directory (your home directory) with the help of the home_pwd variable that you created, and name the copy file3.txt.

Answer

student@student-workstation:~$ cp $home_pwd/file1.txt file3.txt

Step 21

List all files inside your home directory.

Answer

student@student-workstation:~$ ls
<… output omitted …>

file3.txt

<… output omitted …>

Step 22

Remove the copied file3.txt.

Answer

student@student-workstation:~$ rm file3.txt

Step 23

Check if the file is removed.

Answer

student@student-workstation:~$ ls

Desktop dns-cap4.pcap dns-test.pcap Downloads lab_solutions Pictures

<… output omitted …>

Step 24

Print only the newly created home_pwd variable inside the env command with the grep command and pipe (|).

Answer

student@student-workstation:~$ env | grep "home_pwd"

home_pwd=/home/student/Desktop/Student-folder/

Note

The operator | is called a pipe. Sending data from one program to another is called piping. Piping feeds the standard
output (STDOUT) from the program on the left side as a standard input (STDIN) to the program on the right side.
The grep command processes text line by line, and outputs any lines that match any given pattern.

Section 12: Automating Infrastructure

Introduction

To effectively automate network infrastructure, you must understand the tools that are available for use and how to utilize
these tools. Ultimately automating your network infrastructure is done through scripting and model-driven
programmability. Important concepts to understand are the building blocks of infrastructure as code, such as Ansible and
continuous integration and continuous deployment (CI/CD) pipelines, and how intent-based networking (IBN) helps
achieve this goal.

Since the very beginning of computer networking, network configuration practices have centered on a device-by-device
manual configuration methodology. In the early years, this did not pose much of a problem, but more recently, this
method for configuring the hundreds if not thousands of devices on a network has been a stumbling block for efficient and
speedy application service delivery. As the scale increases, changes implemented by humans carry a higher chance of misconfiguration, whether from simple typos, applying a new change to the wrong device, or missing a device altogether. Unfortunately, performing repetitive tasks that demand a high degree of consistency always introduces a risk of error. And the number of changes humans are making keeps growing, because the business demands that more applications be deployed at a faster rate than ever before.

The solution lies in automation. The economic forces of automation are manifested in the network domain via network programmability, software-defined networking (SDN), and IBN concepts. Network programmability helps reduce operating expenditures (OPEX), which represent a very significant portion of the overall network costs, and speeds up service delivery by automating tasks that are typically done via the CLI. The CLI simply is not the optimal approach for large-scale automation.

SDN and IBN 

For many years, designing and then building a network started with the network, meaning the devices in the network.
Often, the needs of the application or data that you were putting on the network were never considered until you installed
the application and it did not work as needed. These same networks were decentralized in nature, demanding that
engineers often configure each device, device by device, for their specific needs. Then, this configuration was repeated for
the specific needs of each application, taking days if not weeks or longer to finally deploy an application.

SDN moves away from a decentralized approach to management and to a centralized controller. It also moves away from
configuring exact settings on each device and centrally configures the needs of the application. Those needs or intents are
pushed from a central controller to the devices where the application resides. SDN seeks to program network devices
either through a controller or some other external mechanism. It allows the network to be managed as a whole and
increases the ability to configure the network in a more deterministic and predictable manner. Several common themes in
this trend are disaggregation of the control and data planes of a network device, virtualization of network functionality,
policy-centric networking, and movement toward open protocols. Examples of SDN are Cisco Application Centric
Infrastructure (ACI) in the data center, Cisco Digital Network Architecture (DNA) Center in the campus, and Cisco Network
Services Orchestrator (NSO) for enterprise- and service provider-level administration. SDN is a foundational building block
of IBN.

A trend in the networking industry is to focus on business goals and applications. IBN transforms a hardware-centric,
manual network into a controller-led network that captures business intent and translates it into policies that can be
automated and applied consistently across the network. The goal is for the network to continuously monitor and adjust
network performance to help assure desired business outcomes. IBN builds on SDN principles, transforming from a
hardware-centric and manual approach to designing and operating networks that are software-centric and fully automated
and that add context, learning, and assurance capabilities. IBN captures business intent and uses analytics, machine
learning, and automation to align the network continuously and dynamically to changing business needs. That means
continuously applying and assuring application performance requirements and automating user, security, compliance, and
IT operations policies across the whole network.

An example where SDN and IBN can be used is a modern-day data center. A modern data center is at a large scale and
hardware-dense, comprising multiple different technologies from multiple different vendors. Such a large-scale,
multivendor, and dense environment will not support a common command line but instead needs to be configured
through multiple different clients and GUIs.

Properties of a modern-day data center:

 Large scale

 Densely populated

 Multiple technologies

 Multivendor

Device-by-device management in a modern data center does not scale.

Consider a scenario where you are asked to simply deploy a new host in an enterprise application. This new host may be a
virtual machine (VM). That VM needs to be created and an operating system installed. The newly installed operating
system needs to be customized. The host will need access to storage, be placed on a VLAN, and given an address that is
part of a routable subnet. In that scenario alone, an engineer will need to use the vSphere Client to create a new VM; install and customize Windows Server through the server GUI; configure access to existing logical unit numbers (LUNs) on an EMC storage array using a Unisphere Management session; and, via the CLI, likely extend an existing VLAN on Cisco switches and ensure routed connectivity through Cisco routers. This scenario uses four different vendors across five different technologies and four different client interfaces.

Going through such a decentralized management approach is simply not scalable, quick, consistent, or secure, to name a
few attributes. To scale to the needs of a modern data center, you need a new approach; Cisco SDN and taking an intent-
based approach to network configuration does exactly that.

SDN Characteristics and Components

SDN addresses the needs for:

 Centralized configuration, management, control, and monitoring of network devices (physical or virtual)

 The ability to override traditional forwarding algorithms to suit unique business or technical needs

 Allowing external applications or systems to influence network provisioning and operation

 Rapid and scalable deployment of network services with life-cycle management


SDN characteristics and components are as follows:

 Network devices: Physical and virtual devices that participate in the network, responsible only for forwarding packets based on instructions from the SDN controller. They communicate with the SDN controller through its southbound interface.

 SDN controller: The brains of SDN, responsible for communicating with network devices and applications and
services using application programming interfaces (APIs). The SDN controller has full awareness of the network
state by keeping information in a local database.

 Southbound interface: This interface is a layer of device drivers that the SDN controller uses for interacting with
physical and virtual devices in the network.

 Northbound interface: Representational State Transfer (REST) APIs facing outside the network so applications and
services can interact with the SDN controller and use resources in the network.

 Network management applications and services: Clients accessing resources in the network using REST APIs.
Clients can be automation user applications, automation servers, or software libraries for many programming
languages like Python, Java, Ruby, and others.
Northbound APIs or northbound interfaces are responsible for the communication between the SDN controller and the
services that run over the network. Northbound APIs enable your applications to manage and control the network. So,
rather than adjusting and tweaking your network repeatedly to get a service or application running correctly, you can set
up a framework that allows the application to demand the network setup that it needs. These applications range from
network virtualization and dynamic virtual network provisioning to more granular firewall monitoring, user identity
management, and access policy control. Currently, the REST API is the predominant northbound interface used for communication between the controller and all applications.

SDN controller architectures have evolved to include a southbound abstraction layer. This abstraction layer abstracts the
network away to have one single place where you start writing the applications to and allows application policies to be
translated from an application through the APIs, using whichever southbound protocol is supported and available on the
controller and infrastructure device. This approach allows for the inclusion of both new and existing southbound controller protocols and APIs, including but not limited to:

 OpenFlow: An industry-standard API defined by the Open Networking Foundation (ONF), OpenFlow allows direct
access to and manipulation of the forwarding plane of network devices such as switches and routers, both physical
and virtual (hypervisor-based). The actual configuration of the devices is done by using the Network Configuration Protocol (NETCONF).

 NETCONF: An IETF standardized network management protocol. It provides mechanisms to install, manipulate,
and delete the configuration of network devices via remote procedure call (RPC) mechanisms. The messages are
encoded by using XML. Not all devices support NETCONF; the devices that do support it advertise their capabilities
via the API.

 RESTCONF: In simplest terms, RESTCONF adds a REST API to NETCONF.

 OpFlex: An open-standard protocol that provides a distributed control system that is based on a declarative policy
information model. The big difference between OpFlex and OpenFlow lies with their respective SDN models.
OpenFlow uses an imperative SDN model, where a centralized controller sends detailed and complex instructions
to the control plane of the network elements to implement a new application policy. In contrast, OpFlex uses a
declarative SDN model. The controller, which, in this case, is called by its marketing name Cisco Application Policy
Infrastructure Controller (APIC), sends a more abstract policy to the network elements. The controller trusts the
network elements to implement the required changes using their own control planes.

 REST: The software architectural style of the World Wide Web. REST APIs allow controllers to monitor and manage
infrastructure through the HTTP and HTTPS protocols, with the same HTTP verbs (GET, POST, PUT, DELETE, and so
on) that web browsers use to retrieve web pages.

 SNMP: Simple Network Management Protocol (SNMP) is used to communicate management information between
the network management stations and the agents in the network elements.

 Vendor-specific protocols: Many vendors use their own proprietary solutions that provide REST API to a device.
For example, Cisco uses NX-API for the Cisco Nexus family of data center switches.
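As a simple illustration of the REST style mentioned above, the following sketch uses curl to call a hypothetical REST API over HTTPS; the URL, credentials, and resource paths are placeholders rather than any real controller or device interface:

# Read (GET) a list of resources from a hypothetical REST API
curl -k -u admin:password -H "Accept: application/json" \
  https://controller.example.com/api/v1/devices

# Create (POST) a resource by sending a JSON body
curl -k -u admin:password -X POST -H "Content-Type: application/json" \
  -d '{"name": "branch-vlan", "id": 100}' \
  https://controller.example.com/api/v1/vlans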

SDN Benefits

SDN offers the following benefits:

 Centralized provisioning: Every networking device is provisioned from an SDN controller. There is no need for a
network administrator to connect to any device and configure it using CLI.

 Network security: The SDN controller knows everything about the network and enables easy collection and
analysis of network traffic. This information can be used to automatically respond to suspicious activity. Also, by
using centralized provisioning, policies can be applied consistently through the whole network and applied to all
devices.

 Faster deployments: Applications and services can be deployed faster by using open APIs. Opening a ticket and
waiting for the network team to configure devices and security policies to enable your new application becomes a
thing of the past.
 Programmable: Network infrastructures do not have to be rebuilt to be used for a new purpose. They can be
programmed to change on demand without the need to manually configure a single device. Applications can
consume REST APIs on the northbound interface to request a network change. The SDN controller translates that
request and uses southbound interface APIs to configure network devices for a new purpose.

SDN allows network engineers to provision, manage, and program networks more rapidly, because it greatly simplifies
automation tasks by providing a single point of administration for the programming of the infrastructure. The nature of
controller-based networking makes centralized policy easy to achieve. Networkwide policy can be easily defined and
distributed consistently to the devices connected to the controller. For example, instead of attempting to manage access
control lists (ACLs) across many individual devices, a flow rule can be defined on the central controller and pushed down to
all the forwarding devices as part of the normal operations.

Compared with traditional networking, controller-based networking makes it easy to define special treatment for specific
network traffic. Instead of adding complexity to the network through advanced mechanisms like policy-based routing, a
traffic flow rule can be defined on the controller and pushed down to all the forwarding devices as part of normal
operations. The largest benefit here is that there is a device, a controller, that has a unified view of the network in one
location.

With automated processes, the time to provision a new service or implement a change request is drastically reduced.
What would previously take days or weeks to implement can be automated to run in hours, along with testing and
verification. Another important step in automation is life-cycle management, from Day 0 design and installation of the
infrastructure components to Day 1 service enablement and Day 2 management and operations. Also, after the customer
no longer needs the service, you must deallocate the resources that are used and clean up the configuration on the
devices. Even with proper change management procedures, this process is tedious at best if performed manually. If the
process is fully automated, you can make sure that the same configuration changes that were applied when provisioning
the new service will be removed when it is deprovisioned.

Intent-Based Networking

IBN adds context, learning, and assurance capabilities to SDN by tightly coupling policy with intent. “Intent” enables the
expression of both business purpose and network context through abstractions, which are then translated to achieve the
desired outcome for network management. In contrast, SDN is purposely focused on instantiating change in network
functions.
Three foundational elements of IBN are as follows:

 The translation element enables the operator to focus on "what" they want to accomplish, and not "how" they
want to accomplish it. The translation element takes the desired intent and translates it to associated network
policies and security policies. Before applying these new policies, the system checks if these policies are consistent
with the already deployed policies or if they will cause any inconsistencies.

 Once the new policies are approved, the activation element automatically deploys the new policies across the
network.

 With the assurance element, an intent-based network performs continuous verification that the network is
operating as intended. Any discrepancies are identified; root-cause analysis can recommend fixes to the network
operator. The operator can then "accept" the recommended fixes to be automatically applied, before another
cycle of verification. Assurance does not occur at discrete times in an intent-based network. Continuous
verification is essential because the state of the network is constantly changing. Continuous verification assures
network performance and reliability.

IBN is a form of network administration for automating administrative tasks across the network by allowing only the goal
of a request to be specified. For example: "Make sure that this application can access that server."

It is up to the IBN system to do the following:

 Determine intent

 Find network devices between the application and server

 Find an optimal route between the application and server

 Configure network devices required for the optimal route


In some ways, IBN is similar to SDN:

 They both rely on a centralized controller to manage devices.

 They are both aware of the current network state and state of every device.

Where they differ is how the network is administered. Software-defined networks focus on how a specific set of network
devices should operate, while intent-based networks are focused on what must be done to get to the final goal. This level
of abstraction is a primary component in IBN. You can think of IBN as a next step in the evolution of SDN.

IBN Characteristics

Every intent-based network, together with characteristics of an SDN, incorporates the following features:

 Translation and validation: Every intent is verified that it can be properly executed. Only after successful
validation can the translation of the intent to valid actions begin.

 Automation: Resource allocation and policy enforcement are done automatically after the desired state of an
intent is known.

 State awareness: The current state of the network is always known by gathering and monitoring data from all
network devices.

 Assurance and optimization: By learning from gathered data, IBN can assure that the desired state of a network is
always maintained. This feature is where artificial intelligence and machine learning become an important part of
the network.
IBN Benefits

In addition to the benefits of having a software-defined network, the following are benefits of having IBN:

 Reduces complexity: Management and maintenance of an IBN network becomes much simpler.

 Simplifies deployment: Additional network services deploy faster by abstracting network actions. You do not need
to specify what, where, or how to configure specific devices or services; you only request the final state.

 Strengthens security: Machine-learning algorithms and artificial intelligence can learn and respond to new threats
before they become an issue.

 Improves agility: Having network data in one place helps the network to quickly adapt to changes and have all
services available despite failures.

 Eliminates repetition: Any repetition is prone to errors. Even by using APIs, programmers can make errors.
Automation is essential to eliminating repetitive tasks and keeping manual labor to a minimum.

IBN Architecture

An enterprise network infrastructure may be managed in different domains, separating the operational duties into campus
and branch sites, WAN, data center, and the cloud. Applications hosted in the data center or cloud, as well as clients, may
also have their own operational procedures and therefore be considered domains. In an IBN, one or more domains are
expected to be governed by a controller, which provides a holistic view of the infrastructure and maintains a consistent
state (configurations, software images, and so on).

The intent-based system accommodates this arrangement of network infrastructure into domains. Translation and
orchestration capabilities are applied across domains, allowing for the characterization of networkwide intent-based
policies across the campus and branch sites, WAN, data center, and the cloud. An orchestration function disseminates the
captured policies to the relevant domains, which also enables restriction of the scope by design of some policies.

Automating the translation of the model-based policies into device-specific configurations, and instantiating these into the
network infrastructure, is covered by the domain-specific controllers. IBN assurance functions may apply to a particular
domain to ensure adherence to the expressed intent-based policy. Additionally, assurance functions operate across
domains to check for compliance with the expressed intent networkwide and end to end (from application to application,
regardless of where the apps are hosted).

The following figure illustrates additional functional details of the translation, activation, and assurance building blocks of
IBN and how they relate to different infrastructure domains. The figure also highlights the feedback loop that sends
insights gained by assurance back into the activation functions for ongoing optimization of the network.

IBN is made possible thanks to next-generation, fully programmable ASICs developed and integrated into IBN-capable
routers and switches. Modern router and switch operating systems, such as Cisco IOS-XE or NX-OS systems, provide the
necessary APIs for configuring these ASICs. The operating systems also are available in a virtual form factor (on the server
and in the cloud) and a physical form factor (embedded in a switch and router), making it very versatile.

For example, in the desktop and campus domains, you will find a Cisco Catalyst 9000 Series family of switches, extending
the Catalyst architecture to wireless and wired devices, while the routers provide edge connectivity at the branch and
headend locations. Finally, a controller, such as Cisco DNA Center within the campus domain and vManage for software-
defined WAN (SD-WAN) administration, manages all the status, configuration, and analytics.

SDN and IBN Examples

The network needs to support an increasingly diverse and fast-changing set of users, devices, applications, and services. It
needs to ensure fast and secure access to and between workloads wherever they reside. And for the network to work
optimally, all this needs to be achieved from end to end, between users, devices, apps, and services across each network
domain—campus, branch, WAN, data center, hybrid cloud, and multicloud. This means that organizations need a way to
align to application performance and security requirements across domains of the enterprise network and also service
provider networks, which is provided by different SDN and IBN technologies.

Modern-day examples of SDN and IBN technologies include:

 Cisco ACI

 Cisco DNA Center

 Cisco SD-WAN
 Cisco Software-Defined Access (SD-Access)

 Cisco NSO

Cisco IBN solutions extend across campus and branch access networks with Cisco DNA, across the WAN with Cisco SD-
WAN, and across distributed application environments with Cisco ACI. In the service provider networks, Cisco NSO
supports the process of validating, implementing, and abstracting network configuration and network services, providing
support for the entire transformation into IBN. Applied policies and assurance integration across these domains enables
consistent performance, compliance, and security enforcement that allows IT and business intent to be expressed in one
domain and then exchanged, enforced, and monitored across all of them.

These technologies are built from the ground up with APIs first and programmability in mind. They provide a centralized
point of network configuration and applying the desired configuration, intent, or desired state to the devices where
needed.

For example, Cisco ACI and Cisco SD-Access policy integration maps Cisco ACI application-based microsegmentation in the
data center with Cisco SD-Access user group-based segmentation across the campus and branch. Now, security
administrators can automate and manage end-to-end segmentation seamlessly with uniform access policies—from the
user to the application. With such segmentation, policies can be set that allow Internet of Things (IoT) devices to access
specific applications in the data center or allow only financial executives and auditors to access confidential data. This is
just one example of how Cisco solutions are enabling consistent multidomain policy segmentation and assurance for end-
to-end alignment to business intent.
Infrastructure as Code 

Traditional infrastructure management often looks like this:

 A problem is realized and needs to be resolved.

 Often, a user contacts the helpdesk, and the user or an administrator creates a ticket.

 An engineer opens the ticket and investigates it via the CLI or a GUI on a device-by-device and system-by-system
basis.

 Maybe, the ticket is escalated to the next-level engineer.

 This process is repeated until the original problem can be resolved.

This process is too slow, too decentralized, and too manual to scale to the level of management needed for a modern-day
data center. With infrastructure as code (IaC), you follow a simple mantra: "If you do it more than once, automate it."

IaC allows you to identify the state or outcome, produce instructions to accomplish the desired state, and then reuse,
repeat, and evolve to meet new needs as your environment grows. Here, you no longer focus on infrastructure but instead
on the articulation of the business outcome.

IaC is a way of defining, managing, and interacting with your physical and virtual resources by using machine-readable
configuration files and scripts instead of an interactive GUI or CLI. These files are often part of the application itself and
contain instructions on how to configure, create, and destroy resources in the infrastructure on demand or automatically.

Tools that are involved in provisioning such an infrastructure are text editors, version control systems, and scripts.

Editing a configuration file or a script, applying configuration to infrastructure, and committing changes to a remote code
repository is a new process that replaces the legacy workflow. The legacy workflow was to copy CLI commands, edit them
to reflect the wanted changes, and paste them to select devices. This practice and the tools it uses are the primary
elements of what is known as DevOps (development and operations merged).

The tools used in IaC are always evolving, but their areas of focus are consistent:

 Centralized storage

 Collaboration

 Life-cycle management

 Automation
With IaC, you start by identifying the steps and tasks that you repeat regularly—for example, an administrator who checks
the status of their perimeter devices and critical servers every morning before the users start to file into the office. Those
GUI or CLI steps will then be coded to produce the desired outcome. The code or script can be used to re-create the
desired outcome quickly and identically.

Network as code is the application of IaC—more specifically, to the network infrastructure or what is referred to as the
networking domain.

NetDevOps is the practice of ongoing development of your network infrastructure using DevOps tools and processes to
automate and orchestrate your network operations.

IaC Benefits

When defining your IaC, you may observe the following benefits:

 Capturing your actions "as code" allows you to record the result or desired state configuration.

 The declarative model allows you to focus on the state instead of which actions are needed to get there.

 Life-cycle management helps you with evolution of your code from concept to creation to collaboration and use.

When that desired final state of your infrastructure is defined within code, you can do the following:

 Store your code in a repository for safekeeping as a backup of your network configuration.

 Evolve as your needs change or your network grows, because you are able to update and add to your code to
meet these changes.

 Collaborate with others in your team and around the world as you work toward that final desired state.

 Versioning allows the developers within your team to create a copy of the existing repository or an entirely new
"fork."

 Repeatable; once perfected, this code is your product that can be repeated wherever and whenever needed.

IaC Tools

It is important to understand that IaC is not just about the code you create but also managing, storing, collaborating on,
and controlling the version of your code. Here are a few tools to help you manage the life cycle of your IaC.
Here are a few tools that are essential to the IaC life cycle:

 GitHub and GitLab

 Chef and Puppet

 Ansible

 Cisco NSO

 Terraform

 Integrated development environments (IDEs)

Cisco DevNet and Postman also are invaluable resources.

GitHub is the largest online service offering version control and collaboration services.

 Version control

 Collaboration

 Bug tracking

 Wiki

 Free public repositories

 Founded in 2008

 Acquired by Microsoft in 2018

GitHub is a web-based service that helps developers centrally manage, store, collaborate on, and control versions of code. You have the option of performing all management functions via the web interface or downloading and installing a client on your computer.

GitHub is available at https://www.github.com.

The Cisco Data Center-specific repository on GitHub is available at https://www.github.com/Datacenter.

The Cisco DevNet repository can be found at https://www.github.com/ciscodevnet.

Chef and Puppet are configuration management systems that require an agent present on a managed host.

 Open source

 Configuration management

 Agent

 Pull operations
Chef takes a "recipe" (a single operation) and "cookbooks" (a collection of operations) approach to specifying the steps
needed to get to the desired configuration. This process is a procedure-based approach to configuration management.
Recipes and cookbooks are used to describe the procedure necessary to get your desired state.

Chef is available at https://www.chef.io.

Chef and Puppet have many characteristics in common. One important difference is in their approach. Chef focuses more
on the control of your nodes, while Puppet focuses more on writing the actual configuration files. In other words: Chef is
procedural, while Puppet is declarative. Chef and Puppet both require an agent to be installed on each managed device. The agent is notified that a change has been made; it then pulls the new instructions from your management station and executes them to reach the desired state.

Puppet is available at https://puppet.com.

Ansible is younger than other configuration management systems but the most widely adopted.

 Red Hat

 Open source

 Configuration

 Agentless

 Push operations

Instead of managing each individual node or system, Ansible uses a model-based approach to your infrastructure by
describing how the components and systems are related to one another. Ansible is agentless and uses playbooks to define
the declared changes and final state. Ansible playbooks are written using YAML files that are human and machine
readable. You issue commands from your management station with Ansible installed, and the commands are executed on
the remote system. This approach is known as push.

Ansible is available at https://www.ansible.com.

Cisco NSO is software for automating services across physical and virtual networks:

 Cisco

 Enterprise

 Service provider

 Vendor agnostic

 APIs

Cisco NSO is an enterprise- and service provider-level software automation platform that operates across physical and
virtual devices. Operations can be accomplished through automation, a self-service portal, and manual provisioning. The
latest innovations in Cisco NSO allow the administrator to create their own network element drivers (NEDs), including definitions for non-Cisco vendor devices. You can find out more about Cisco NSO at http://www.cisco.com/go/nso.

Terraform is an open source IaC software tool:

 HashiCorp

 Execution plans

 Cloud agnostic

 APIs

Terraform is a tool to aid the provisioning of your infrastructure. It uses "execution plans" written in code. These execution
plans outline what will happen when you run your code. Terraform builds a graph of your resources and can be used to
automate changes. Terraform is available at https://www.terraform.io.
System Management with Ansible 

The challenge with modern-day data centers lies in their complexity, density, and multivendor solutions spanning across
multiple different technologies. The solution is to programmatically manage a data center full of devices using a scripting
language like PowerShell or a full programming language such as Python. These options bring their own challenges of
complex features, syntactical instructions, and the learning curve that comes with both. For that reason, another piece in
the DevOps puzzle was developed, Ansible. Ansible is an open source provisioning software that allows for centralized
configuration management.

Ansible originally was written by Michael DeHaan of AnsibleWorks, which was acquired by Red Hat in 2015. Red Hat in turn
was acquired by IBM in 2019. Ansible is free and open source and is included as part of Fedora. It is also available for RHEL,
CentOS, Scientific Linux, and other operating systems through their respective package management systems or the
Python package management tool pip.

Ansible can be used for almost any automation task. It has these characteristics:

 Free, open source

 Used for:

1. Provisioning

2. Configuration management

3. Deployment
 Uses its own declarative language

 Agentless

 Serverless

Unlike other management platforms and services, Ansible does not require an agent to be installed on the system that it
manages, nor does Ansible need or use a centralized controller for configuration management. Automation can be
performed from any management system referencing inventories, modules, or playbooks.

A declarative approach means that you only tell Ansible what you want to achieve as a final goal instead of encoding all
instructions to reach it. For example, you do not have to tell Ansible where a specific service resides, how to start it, and
what to do after it starts. You simply say: “I want this service to be started, and then I want another service to be
restarted.”

More information on Ansible is available at https://www.ansible.com.

System Management with Ansible: Components

The components of Ansible come together to make a very powerful management system.

Understanding each component and their relationships is essential to using their power:

 Ansible control station is the management station and launching point for all Ansible activities. Unlike many other
management platforms, Ansible does not require a dedicated server or elaborate hardware to manage an
environment. You could literally manage an enterprise from your personal laptop.

 Ansible modules are a standalone collection of commands that are written in a standard scripting language (for
example, Python) and used to execute the desired state change. An administrator can write their own Ansible
module using any language so long as that language supports JavaScript Object Notation (JSON) as a data format.

 Playbooks are files that also define the desired or final state, but they are used to orchestrate operations across multiple nodes.

 Inventory files contain systems managed by Ansible. Within an inventory file, an administrator groups managed
systems. Inventory files can be dynamically built by contacting a remote API that responds with a valid JSON
response.

 YAML commonly is referred to as a configuration file and a destination for data being stored. Ultimately, YAML is a
data format.

 Communication is transported over Secure Shell (SSH) by default, with PowerShell support for Windows nodes over the WS-Management protocol.

 Ansible Tower is a web service console (GUI) following the REST standard for programmability. It is a licensed
product from Red Hat, based on the open source AWX Project.
System Management with Ansible: Tools

Managing Ansible is a simple process—simple enough that you are largely able to select your terminal program of choice
to connect to your management server running Ansible. You can select your preferred text editor for creating and editing
inventory files, playbooks, and modules, and use your preferred version control service to manage access to code and
control collaboration and editing.

Ansible allows you to pick the tools that you already know:

 Linux Operating System

 Terminal program

 Text editor

 Version control system

Examples of such tools might be PuTTY for SSH access, Visual Studio Code for configuration file management, and GitLab
for storage, collaboration, and version control. All this can be done from any Linux-based operating system with Python
and Ansible installed.

How Ansible Works

Working with Ansible requires only a few quick installation and update steps. Once the installation and update is complete,
you are able to configure Ansible operations and defaults. An inventory file and modules will work together to help you
execute your changes to the specified target systems.

Getting Ansible to run is simple and straightforward. Follow a few simple steps and complete your Ansible installation in a
few moments:

 Installation

 Ansible configuration files

 Inventory files

 Modules

1. Built-in

2. Custom

 Execution

Installing Ansible can be done in multiple ways, but the easiest one is to use Python pip. With your Python virtual
environment active, execute pip install ansible and you are done. You can then execute ansible --version to verify that the installation was successful.
Many operating systems offer prepackaged Ansible in their respective repositories (yum install ansible, apt-get install ansible).

Ansible configuration files are used to configure Ansible operations. The base configuration file is located at
/etc/ansible/ansible.cfg.

Inventory files enable an administrator to define an inventory of systems against which Ansible modules will be executed.
The basic contents of an inventory file are hosts and groups. Host entries point to the Domain Name System (DNS) name
or IP address of a managed end system, while groups are a collection of hosts under a collective label.

How Ansible Works: Push Model

With Ansible installed and upgraded and a list of devices within your network defined in an inventory file, it is time to
manage configuration modules and execute them to make changes to your managed devices.

Ansible modules can be thought of as small programs that are pushed to and run on the managed device to achieve the desired configuration state of that device. Most modules are standalone. Ansible also gives an administrator the ability to write
their own module using standard scripting languages such as Python. With more than 750 built-in modules that are
organized by vendor and technology, Ansible enables administrators to ramp up and manage their environment quickly
and easily.

There are many Cisco-built Ansible modules for various data center technologies in the Cisco Data Center GitHub repository at https://www.github.com/datacenter. More specifically, the Cisco ACI modules for Ansible are at https://github.com/datacenter/aci-ansible.

You can use ansible-doc modulename to quickly view the information and examples on how to use the installed module.
More information can be found at https://docs.ansible.com.

Once your modules are installed, they can be executed from the Ansible host against the systems defined in your inventory file using the ansible-playbook example-playbook.yml command. Ansible will connect to all systems defined in the inventory simultaneously, make the prescribed changes, and display a "PLAY RECAP" status on the terminal screen.

Ansible also supports ad hoc execution of modules using ansible host1,group2 -m modulename -a moduleargs, where host1,group2 are entries that are defined in your inventory file, modulename is the name of the Ansible module you want to execute, and moduleargs are the required arguments for that specific module.

Some modules do not require arguments and can be called with only the module name; for example, the ping module (ansible host1,host2,host3 -m ping) returns success if all hosts are reachable.
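For instance, assuming an inventory file named inventory.cfg that defines these hosts and groups, ad hoc runs could look like the following (the group name and module arguments are purely illustrative):

ansible host1,host2,host3 -i inventory.cfg -m ping
ansible group2 -i inventory.cfg -m command -a "uptime"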
Infrastructure Automation with Ansible Playbooks 

Unlocking the true power of Ansible is done through orchestration, and orchestration is done through Ansible Playbooks.

Now that Ansible is installed and updated, and your managed devices are defined through an inventory file, it is time to
use Ansible on a large scale. For that, you will use Playbooks and other Ansible operations.

Keep in mind that a modern-day data center is made up of multiple different vendors and technologies. The process of
making the needed changes in the required order against the specific technology, following the parameter of that
technology, is known as orchestration. Ansible Playbooks is a powerful method to use Ansible for orchestration.

Ansible Playbooks: Terms

As with any technology, a set of terms and their definitions is important to better understanding that technology. Ansible
uses the following terms for orchestration through Playbooks:
Example:

My inventory file contains server1, server2, and server3 host entries that are part of the internet_servers group, which should have the ssh_server, http_server, and php_server roles applied.

Ansible Playbooks: Components

An Ansible playbook is a simple human-readable text file in YAML format, with keywords that Ansible recognizes as
instructions. Because YAML is a plaintext file, it is ideal for collaboration and makes troubleshooting much easier because
line numbering and version control provide insight into what was changed, when, and by whom.

A playbook consists of an (optional) name, hosts, and tasks that should be performed. There are many optional keywords
that change the way that Ansible interacts with hosts that are defined in your inventory file. For example, gather_facts is
an instruction to Ansible that enables or disables runtime variables available to your tasks. Setting it to no will make
execution faster, but your tasks lose access to variables that Ansible collects before execution (ansible_distribution would
be empty instead of value Ubuntu or RedHat).

A pound sign (#) indicates a comment; everything from here to the end of the line is ignored by Ansible.

The vars section contains user-defined variables that are later referenced in the tasks section. Variable expansion is done
by enclosing the variable name inside double-curly brackets, "{{ variable }}."

The tasks section contains tasks that are executed in the order they appear, and they are executed in linear fashion on
each host (when one task finishes on all hosts, then execution of the next task starts). This can be changed by
setting strategy to free under the top-level section of the playbook.
By default, Ansible runs in parallel on a maximum of five hosts, meaning that you must wait longer for each task to complete if you add more hosts. This behavior can be changed by increasing the forks setting (for example, in the ansible.cfg configuration file or with the -f command-line option).

Each task section starts with an (optional) name, followed by a module name that you want to use (apt and service in this
example). Module parameters belong under the module name and must be indented to indicate that they belong to the
module and not the task. Some module parameters are required; others are optional. Required parameters are marked with an equals (=) sign in the output of ansible-doc apt.
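The following is a minimal sketch of such a playbook, consistent with the description that follows; the hostnames, the packages variable, and parameters such as update_cache and enabled are illustrative. It can be created directly from the shell:

cat > example.yml <<'EOF'
---
- name: Configure web servers
  hosts: server1,server2
  gather_facts: yes
  become: yes
  vars:
    packages:
      - tcpdump
      - apache2
  tasks:
    - name: Install packages
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes

    - name: Start and enable web server
      service:
        name: apache2
        state: started
        enabled: yes
EOF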

This playbook is made of two tasks and does the following:

 Declares a variable named packages

 Connects to server1 and server2

 First task:

1. Updates the apt cache with the latest package information from remote repositories

2. Installs tcpdump and apache2 packages

 Second task:

1. Starts the apache2 web server

2. Enables the apache2 web server to start on boot

Ansible Playbooks: Inventory File

An inventory file is a collection of all your hosts that are managed by Ansible. It is a simple plaintext file where you specify
your hosts, logical groupings, and special variables for Ansible itself.

The example inventory file has the following information:

 A group named servers with two hosts, each with a special variable ansible_port to indicate that sshd is running on a nonstandard port.

 Group switches with two hosts

 Group routers with two hosts

 Group datacenter with groups servers, switches, and routers as members

You can now target any of the hosts or groups in your inventory by changing the hosts keyword in your playbook.
The datacenter group is convenient when you want to execute something on all the desired groups or set a variable for all of them in one place.

There is also a predefined group named all, which you can use to target all hosts in your inventory.
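A minimal sketch of an inventory file matching this description (the hostnames and port value are illustrative) could be created like this:

cat > inventory.ini <<'EOF'
[servers]
server1 ansible_port=2222
server2 ansible_port=2222

[switches]
switch1
switch2

[routers]
router1
router2

[datacenter:children]
servers
switches
routers
EOF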

Ansible: Executing the Playbooks

You can execute the playbook by using the ansible-playbook -u root example.yml command, where -u specifies the username for administrative login and example.yml is the name of your playbook.

If -u is omitted, Ansible connects as the user who is running the command (or as the remote_user or ansible_user configured in the inventory or configuration file).

 "Gathering Facts" runs before any tasks to collect system information and populate variables that your tasks might
need. You can disable this behavior by setting gather_facts to false.

 The "Install packages" task runs first.

 "Start and enable web server" runs second.

 "PLAY RECAP" informs you how many tasks made changes on each host. Notice that Gathering Facts reports "ok,"
because when nothing changes on the destination host, the result status is "ok" instead of "changed."

Ansible also offers loops and templates, which are used to extend the power of Ansible Playbooks, all for scale and
dynamic orchestration. Dynamic inventories are available from cloud providers so that you do not have to constantly
update your inventory file; instead, your inventory information is built every time that you execute a playbook. Ansible
does this by contacting the API specified by your cloud provider, which responds with a properly formatted JSON response.

Construct Infrastructure Automation Workflow 

Automating infrastructure tasks involves using different technologies and vendor APIs to achieve the desired state of your
infrastructure. Ansible offers a plethora of modules to interact with your physical and virtual resources, and there is always
a possibility to write your own code if something is not supported.

In the first part, you will write a Bourne Again Shell (Bash) script and an Ansible playbook that installs packages, configures
a TFTP service, and creates an administrative user on the virtual machine.

In the second part, you will write two Python scripts to provision a router, which will make use of your TFTP service by
saving the configuration each time the router configuration is saved.
Test Ansible Task

You will install Ansible inside a Python virtual environment, create a valid inventory file, and test your Ansible installation
by executing a simple task on your server using the ping module. Your task will make sure that Ansible can connect to your
server and execute commands.

Step 1

Launch Visual Studio Code from your desktop.

Step 2

Click File > Open Folder, then in the upper-left corner click Home, select working_directory, and click OK.

Step 3

Now click the New File icon next to your folder name, and create a file named inventory.cfg.

Step 4

Enter server on a single line. Save and close the file.

Step 5

Click View > Terminal to open a terminal in Visual Studio Code.

Step 6

Type pwd to verify that you are in the /home/student/working_directory folder

Answer

student@student-vm:~/working_directory$ pwd

/home/student/working_directory

Step 7

Install Ansible and the other required packages into a Python virtual environment by executing the pipenv install command.

Answer

student@student-workstation:~/working_directory$ pipenv install

Creating a virtualenv for this project…

Pipfile: /home/student/working_directory/Pipfile

Using /usr/bin/python3.7m (3.7.5) to create virtualenv…

Step 8

Now, execute the pipenv shell command to activate your new environment. Notice that your prompt now shows the
name of your active virtual environment in parentheses (working_directory).

Answer

student@student-workstation:~/working_directory$ pipenv shell

Launching subshell in virtual environment…

student@student-workstation:~/working_directory$ . /home/student/.local/share/virtualenvs/working_directory-
NLDM5Sxr/bin/activate

(working_directory) student@student-workstation:~/working_directory$

Note

For clarity, the (working_directory) part of the prompt will be omitted from output from now on.
Step 9

Create a new SSH key pair so that you will be able to connect to your server without a password. Confirm overwriting the old key, and press Enter to accept the default file location and an empty passphrase.

Answer

student@student-workstation:~/working-directory$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/student/.ssh/id_rsa):

/home/student/.ssh/id_rsa already exists.

Overwrite (y/n)? y

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/student/.ssh/id_rsa.

Your public key has been saved in /home/student/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:qYIDesta0q9gfllOrddwOumU1kiA1t9uNcG7F1R9p1E student@student-workstation

The key's randomart image is:

+---[RSA 2048]----+

| oE|

| o . o +|

| oo o . oo|

| . o .. + . |

|. .oS. + . |

|.o . o.+=.. o . |

|+.* * o=*+ . . |

|o* * +o=.. . |

|.o=.. o.. |

+----[SHA256]-----+

Step 10

Copy the public key to the server.

Answer

student@student-workstation:~/working_directory$ ssh-copy-id root@server

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/student/.ssh/id_rsa.pub"

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

root@server's password:

Number of key(s) added: 1


Now try logging into the machine, with: "ssh 'root@server'"

and check to make sure that only the key(s) you wanted were added.

Step 11

Test that you can connect without a password.

Answer

student@student-workstation:~/working_directory$ ssh root@server

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-60-generic x86_64)

* Documentation: https://help.ubuntu.com

* Management: https://landscape.canonical.com

* Support: https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are

not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

Last login: Mon Oct 28 11:57:37 2019 from 172.20.0.1

root@server:~# exit

logout

Connection to server closed.

Step 12

Execute an ad hoc task to confirm that you are able to execute commands.

Answer

student@student-workstation:~/working_directory$ ansible server -m ping

server | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Create Ansible Playbook

You will now construct a playbook and use it to provision your server. Your playbook will consist of one task, which will
make sure that the SSH service is running and enabled on boot.

Step 13

Create a new file inside your working_directory and name it provision-server.yml.

Step 14

The first line of your playbook is a name.

Answer

- name: Initial system setup

Step 15

The second line indicates hosts that your playbook will provision.
Answer

hosts: server

Step 16

Now, create a task that will make sure that your service is enabled after system restart.

Answer

tasks:
  - name: Make sure ssh service is enabled upon system boot
    service: name=ssh enabled=yes

Step 17

Your playbook should now look like this:

Answer

- name: Initial system setup
  hosts: server
  tasks:
    - name: Make sure ssh service is enabled upon system boot
      service: name=ssh enabled=yes

Step 18

Make sure that there are no syntax errors in your playbook.

Answer

student@student-workstation:~/working_directory$ ansible-playbook --syntax-check provision-server.yml

playbook: provision-server.yml

student@student-workstation:~/working_directory$

Step 19

Execute your playbook and check PLAY RECAP.

Answer

student@student-workstation:~/working_directory$ ansible-playbook provision-server.yml

PLAY [Initial system setup]


********************************************************************************************

TASK [Gathering Facts]


*************************************************************************************************

ok: [server]

TASK [Make sure ssh service is enabled upon system boot]


******************************************************************

ok: [server]

PLAY RECAP
****************************************************************************************************
*********
server : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

student@student-workstation:~/working_directory$

Note

Your play reports that two tasks returned "ok," because the Gathering Facts task does not make any changes, and the SSH
service is already enabled upon system restart.

Perform Initial System Setup

You will now write a Bash script for generating a Linux-compatible password hash, and then you will create a task that will
create a new user admin that uses that password hash.

Step 20

Write a Bash script called make_password.sh, and add logic that will ask the user for a password twice and populate the
pass1 and pass2 variables.

Answer

#!/bin/bash

# force caller to enter password twice

read -s -p 'Enter password: ' pass1

echo

read -s -p 'Repeat password: ' pass2

Step 21

Now, add the part that checks if pass1 and pass2 are the same.

Answer

# make sure passwords match

if [ "$pass1" != "$pass2" ]; then
    echo "passwords do not match"
    exit 2
fi

Step 22

Add the last part, which outputs a password hash.

Answer

# return sha512 hash

echo $pass1 | mkpasswd --method=sha-512 --stdin

The final contents of your script should look like this:

#!/bin/bash

# force caller to enter password twice

read -s -p 'Enter password: ' pass1

echo

read -s -p 'Repeat password: ' pass2


# make sure passwords match

if [ "$pass1" != "$pass2" ]; then
    echo "passwords do not match"
    exit 2
fi

# return sha512 hash

echo $pass1 | mkpasswd --method=sha-512 --stdin

Step 23

Make the script executable.

Answer

student@student-workstation:~/working_directory$ chmod 700 make_password.sh

Step 24

Run the script and provide 1234QWer twice as a password.

Answer

student@student-workstation:~/working_directory$ ./make_password.sh

Enter password:

Repeat password:
$6$EsC9meJyM$eOumIORpN7GEzlt6A.Sev2OPv9wUjiTQniy2WZ2WYQogReCUtL6Zke8EpLsBE5JIqQkr5aKGSdUNS5z/bqDls/

Note

Your password hash will not be the same as the one shown here.

Step 25

In your playbook, create a task that will create an admin group for the admin user.

Answer

- name: Create admin group
  group: name=admin state=present

Step 26

Create a task that will create an admin user, and use the password hash from your make_password.sh script for the
password value.

Answer

- name: Create admin user
  user:
    name: admin
    group: admin
    home: /home/admin
    password: $6$EsC9meJyM$eOumIORpN7GEzlt6A.Sev2OPv9wUjiTQniy2WZ2WYQogReCUtL6Zke8EpLsBE5JIqQkr5aKGSdUNS5z/bqDls/
    shell: /bin/bash
    state: present

Configure Service

You will now install and configure a TFTP service (tftpd-hpa) that allows your routers to save their configuration every time
a change occurs.

You need to make sure that routers can write their configuration inside the /var/lib/tftpboot directory by changing the
configuration file with the lineinfile module.

Step 27

Create a new task in your playbook. This task will install the TFTP server.

Answer

- name: Install TFTP server packages
  package: name=tftpd-hpa state=present

Step 28

Create a task that enables write operations for TFTP clients.

Answer

- name: Configure TFTP server for write access
  lineinfile:
    path: /etc/default/tftpd-hpa
    line: 'TFTP_OPTIONS="--secure --verbose -4 -c"'
    state: present

Step 29

Create a task that sets the correct directory permissions.

Answer

- name: Configure /var/lib/tftpboot permissions
  file: path=/var/lib/tftpboot owner=root group=tftp mode=0775

Step 30

Create a task that starts TFTP service and enables it upon system restart.

Answer

- name: Start tftpd-hpa service and enable it upon system boot
  service: name=tftpd-hpa state=restarted enabled=yes

Step 31

Execute the playbook.

Answer

student@student-workstation:~/working_directory$ ansible-playbook provision-server.yml

<...output omitted...>

server : ok=8 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0


student@student-workstation:~/working_directory$

Configure Device

Your Cisco NSO is not managing your routers yet. You will now write a Python script that interacts with Cisco NSO using the
REST API, and create a new device named csr1 based on information in XML format.

Step 32

Create a new file in your working_directory and name it csr1.xml. This file will contain basic information that informs your
Cisco NSO how to connect to your device.

Step 33

Your file should have the following content:

Answer

<device xmlns="http://tail-f.com/ns/ncs">

<name>csr1</name>

<address>192.168.0.30</address>

<port>22</port>

<authgroup>default</authgroup>

<state>

<admin-state>unlocked</admin-state>

</state>

<device-type>

<cli>

<ned-id xmlns:cisco-ios-cli-3.8="http://tail-f.com/ns/ned-id/cisco-ios-cli-3.8">cisco-ios-cli-3.8:cisco-ios-cli-3.8</ned-id>

</cli>

</device-type>

</device>

Step 34

Now, create a new file in your working_directory and name it nso_device.py. This script will make use of your XML file and
send it to Cisco NSO.

Step 35

First, you need to import the requests module for sending HTTP requests to Cisco NSO.

Answer

import requests

Step 36

Now, add REST API variables for credentials, URL, headers, and the source of XML data for the new device.

Answer

# credentials

API_USER = 'admin'
API_PASS = 'admin'

# nso server address

API_BASE = 'http://nso:8080'

# api headers

API_HEAD = {
    'Accept': 'application/vnd.yang.data+xml'
}

Step 37

Create a session object and set its auth attribute to pass your credentials.

Answer

api_session = requests.Session()

api_session.auth = (API_USER, API_PASS)

Step 38

Here is where you define the URL, read XML data, and send it using the PUT method.

Answer

# create nso device from csr1.xml

api_endpoint = f'{API_BASE}/api/running/devices/device/csr1'

with open('csr1.xml') as xml:
    api_response = api_session.put(api_endpoint, headers=API_HEAD, data=xml)

print(f'-> PUT: {api_endpoint}')

print(f' -> RESPONSE: {api_response.status_code}')

Step 39

Instruct Cisco NSO to fetch SSH host keys from your device.

Answer

# tell nso to fetch ssh keys

api_endpoint = f'{API_BASE}/api/running/devices/device/csr1/ssh/_operations/fetch-host-keys'

api_response = api_session.post(api_endpoint, headers=API_HEAD)

print(f'-> POST: {api_endpoint}')

print(f' -> RESPONSE: {api_response.status_code}')

Step 40

Instruct Cisco NSO to get current configuration from the device and save it. From here on, Cisco NSO is aware of the device
state.

Answer

# tell nso to sync configuration from device

api_endpoint = f'{API_BASE}/api/running/devices/device/csr1/_operations/sync-from'

api_response = api_session.post(api_endpoint, headers=API_HEAD)


print(f'-> POST: {api_endpoint}')

print(f' -> RESPONSE: {api_response.status_code}')

Step 41

Your script should have the following content:

Answer

import requests

# credentials

API_USER = 'admin'

API_PASS = 'admin'

# nso server address

API_BASE = 'http://nso:8080'

# api headers

API_HEAD = {
    'Accept': 'application/vnd.yang.data+xml'
}

api_session = requests.Session()

api_session.auth = (API_USER, API_PASS)

# create nso device from csr1.xml

api_endpoint = f'{API_BASE}/api/running/devices/device/csr1'

with open('csr1.xml') as xml:
    api_response = api_session.put(api_endpoint, headers=API_HEAD, data=xml)

print(f'-> PUT: {api_endpoint}')

print(f' -> RESPONSE: {api_response.status_code}')

# tell nso to fetch ssh keys

api_endpoint = f'{API_BASE}/api/running/devices/device/csr1/ssh/_operations/fetch-host-keys'

api_response = api_session.post(api_endpoint, headers=API_HEAD)

print(f'-> POST: {api_endpoint}')

print(f' -> RESPONSE: {api_response.status_code}')

# tell nso to sync configuration from device

api_endpoint = f'{API_BASE}/api/running/devices/device/csr1/_operations/sync-from'

api_response = api_session.post(api_endpoint, headers=API_HEAD)

print(f'-> POST: {api_endpoint}')

print(f' -> RESPONSE: {api_response.status_code}')

Step 42

Run your script and check the responses. Your device can now be provisioned from Cisco NSO.
Answer

student@student-workstation:~/working_directory$ python nso_device.py

-> PUT: http://nso:8080/api/running/devices/device/csr1

-> RESPONSE: 201

-> POST: http://nso:8080/api/running/devices/device/csr1/ssh/_operations/fetch-host-keys

-> RESPONSE: 200

-> POST: http://nso:8080/api/running/devices/device/csr1/_operations/sync-from

-> RESPONSE: 200

student@student-workstation:~/working_directory$

Configure Archive

Your device is now ready to be configured by Cisco NSO. You will now write a Python script that enables archiving to the
TFTP service running on your virtual machine.

Step 43

Create a new file in your working_directory and name it archive.xml. This file will contain a small piece of XML data that
enables archiving.

Step 44

Your file should have the following content:

Answer

<config>

<archive xmlns="urn:ios">

<path>tftp://192.168.0.20/$h</path>

<write-memory/>

</archive>

</config>

Step 45

Create a new file in your working_directory and name it nso_archive.py. This script will make use of your XML file and
send it to Cisco NSO.

Step 46

The first part of the script is the same as nso_device.py.

Answer

import requests

# credentials

API_USER = 'admin'

API_PASS = 'admin'

# nso server address

API_BASE = 'http://nso:8080'
# api headers

API_HEAD = {
    'Accept': 'application/vnd.yang.data+xml'
}

api_session = requests.Session()

api_session.auth = (API_USER, API_PASS)

Step 47

Here is where you read archive.xml and send it to Cisco NSO using the PATCH method.

Answer

# use archive.xml to configure csr1 using nso

api_endpoint = f'{API_BASE}/api/running/devices/device/csr1/config'

with open('archive.xml') as xml:
    api_response = api_session.patch(api_endpoint, headers=API_HEAD, data=xml)

print(f'-> PATCH: {api_endpoint}')

print(f' -> RESPONSE: {api_response.status_code}')

Note

The PATCH method is used on Cisco NSO objects that already exist; that way, you update a part of the device
configuration.

Step 48

Your script should have the following content:

Answer

import requests

# credentials

API_USER = 'admin'

API_PASS = 'admin'

# nso server address

API_BASE = 'http://nso:8080'

# api headers

API_HEAD = {
    'Accept': 'application/vnd.yang.data+xml'
}

api_session = requests.Session()

api_session.auth = (API_USER, API_PASS)

# use archive.xml to configure csr1 using nso

api_endpoint = f'{API_BASE}/api/running/devices/device/csr1/config'

with open('archive.xml') as xml:
    api_response = api_session.patch(api_endpoint, headers=API_HEAD, data=xml)

print(f'-> PATCH: {api_endpoint}')

print(f' -> RESPONSE: {api_response.status_code}')

Step 49

Run your script and check the responses.

Answer

student@student-workstation:~/working_directory$ python nso_archive.py

-> PATCH: http://nso:8080/api/running/devices/device/csr1/config

-> RESPONSE: 204

student@student-workstation:~/working_directory$

Verify Service Provisioning

You will now verify that your virtual machine and TFTP service are working and that your router is using the service to
archive the configuration every time that the configuration is saved.

Step 50

Use SSH to connect to your router and accept an RSA fingerprint. Use cisco as the username.

Answer

student@student-workstation:~/working_directory$ ssh cisco@csr1

The authenticity of host 'csr1 (192.168.0.30)' can't be established.

RSA key fingerprint is SHA256:XQn/9LIaHRFpukxs49PbAa1wV7+A/Vtfr2ZI4Qnd2/Y.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'csr1' (RSA) to the list of known hosts.

Step 51

Enter cisco as the password.

Answer

Password:

csr1kv1#

Step 52

Run the show archive command and verify that the router successfully wrote the configuration to your TFTP server.

Answer

csr1kv1# show archive

The maximum archive configurations allowed is 10.

The next archive file will be named tftp://192.168.0.20/csr1kv1-<timestamp>-1

Archive # Name
   1        tftp://192.168.0.20/csr1kv1-Nov--4-06-07-04-PST-0 <- Most Recent
<...output omitted...>

csr1kv1#

Step 53

Run the write command to save the configuration and archive it to your server again.

Answer

csr1kv1# write

Building configuration...

[OK]!

csr1kv1#

Step 54

Run the show archive command again and see that the most recent archive is now number 2.

Answer

csr1kv1# show archive

The maximum archive configurations allowed is 10.

The next archive file will be named tftp://192.168.0.20/csr1kv1-<timestamp>-2

Archive # Name
   1        tftp://192.168.0.20/csr1kv1-Nov--4-06-07-04-PST-0
   2        tftp://192.168.0.20/csr1kv1-Nov--4-06-22-29-PST-1 <- Most Recent
<...output omitted...>

csr1kv1#
CI/CD Pipelines for Infrastructure Automation 

Once you see the power of automation, it is easy to imagine its other side: an incorrect configuration value is entered and deployed across your entire enterprise, breaking an application or severing connectivity. For that reason, you need to strike a balance between introducing changes to the automation system and making sure that those changes do not harm the enterprise network. That is where CI/CD comes into play.

CI/CD is the process of testing to make sure that changes do not break an application or the network, and that the change to be deployed is ready for the production environment. In other words, CI/CD is a quality assurance and testing process for the scripted changes that will ultimately be applied to your production environment.

You can imagine that testing your existing infrastructure is not an easy task. On one hand, you need to test how a new feature, application, or service behaves in your existing environment. On the other hand, testing in production is out of the question. For these reasons, you need a separate test environment that resembles your production environment as closely as possible.

A CI/CD server can create a test and staging environment automatically as part of your workflow when you commit changes to your source code repository, or you can have the CI/CD server monitor your repository for changes and execute automated builds for every new commit or periodically.

Code review is also very important in projects where many team members collaborate. Everyone has their own coding style, and they may use different code editors or IDEs to get the work done. You can imagine that, in the end, the code would look very inconsistent. This is where code review becomes a necessity. Your IDE can be configured to follow specific coding rules (variable names, function names, indentation) and automatically correct the code or warn you when those rules are broken.

Because IDE configuration is not a part of the code repository, code review rules must be implemented on the code
repository. This way, code that does not follow rules is found very early, because code review runs before any other tests.

CI/CD Pipelines for Infrastructure Automation: Example

Instead of making changes to code and executing everything from your workstation, a CI/CD pipeline should be
implemented to push that change to production in a consistent and predictable manner. Everyone collaborating on the
project should see what other team members changed and when that change was made.

Tests run in the staging environment only after the source code repository accepts a change, and changes are propagated to the production environment only when the staging environment reaches a stable state.

Pipelines really depend on the complexity of your infrastructure and the tools used. The following example focuses on infrastructure as code (IaC) with Ansible Playbooks.
The following pipeline is very common:

 You commit a change for example.yml to a code repository.

 The repository executes syntax and sanity checks, code review rules, and:

1. Accepts your commit and notifies the CI/CD server to run tests.

2. Rejects your commit based on failed checks.

 CI/CD prepares an environment and runs predefined tests for any Ansible playbook:

1. pip install ansible==2.8.1

2. ansible --version

3. ansible-playbook example.yml --syntax-check

4. ansible-playbook -i staging_inventory.cfg example.yml --check

5. ansible-playbook -i staging_inventory.cfg example.yml -vvvv

 You are notified that you can propagate your changes to the production environment.

A pipeline like this ensures the following:

 There are no syntax errors in your playbook.

 There are no missing or misspelled modules in your playbook.

 The code is compliant with the rules of coding for this project (code review).

 Ansible version 2.8.1 is used for tests.

 Your playbook executed successfully in the staging environment.

 Everyone can see the change and why it failed or succeeded.

CI/CD Pipeline: Components

More than one tool is needed for the CI/CD pipeline to be created and integrated into the workflow:

 A code repository or "repo" to centrally store scripts for safekeeping, availability, version control, forking, and
collaboration by others within your development team.

 A build system to build the automation script. The key here is the tools that you use for creating and testing your
script.
 An integration server/orchestrator to build and run test scripts. This could be as easy as your personal laptop, or it
could be as formalized as a dedicated enterprise server.

 Tools for automatic configuration and deployment would include applications for delivering the final scripts for
testing in your lab environment.

CI/CD Pipeline: Tools

Here, you can see how each of the components just described maps to an actual tool that performs the operations of that component. For example, one component is the code repository, and GitHub performs the operations that are outlined for that component.

CI/CD tools:

 GitHub: A code repository

 Visual Studio Code: Build system (IDE)

 Standard Linux: Build and run scripts

 Ansible, Terraform: Configuration and deployment

 Jenkins: Build server

Another helpful tool in the CI/CD pipeline process is GitLab. GitLab helps you visualize the stages and components that are
involved and the current position as the changes are progressing through the pipeline.
Section 13: Testing and Securing Applications

Introduction

Applications may contain various types of defects. Some affect the usability of the software, preventing it from performing the tasks its users require. Others may be more subtle, not (yet) directly affecting software operation but lying hidden, allowing malicious users to exploit them later. In addition to code reviews, you should thoroughly test your code to avoid such situations. You can also use Docker containers, a popular tool in software testing.

Software Test Types 

The process of developing software has many stages during which the system is subjected to many changes. Adding new
features or modifying existing ones can unexpectedly introduce new bugs to the system. There are many reasons to try to
automate testing. Testing software manually would require the developer (tester) to have a good understanding of the
system and would take a large amount of time to be done every time new bugs show up. Even for smaller software
solutions, manual testing will result in greater cost in time and money. Manual testing also is unreliable when reproducing
test cases.

With automated tests, developers can discover the cause of bugs in code faster, which allows them to spend more time
focusing on developing new features instead of fixing bugs. Another benefit of automated testing is that once you have
written a test case, you do not need to write it again the next time you are running tests.

Larger projects have many components, and it gets hard to manage what needs to be tested and why. An effective
strategy to integrate tests into your workflow is to divide testing into layers. One of the ways to do it is described by the
concept of the testing pyramid. It was first mentioned by Mike Cohn in 2009
(http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid). This concept
proposes the following:

1. Tests from the upper layers are started when tests from the lower layers are finished.

2. There should be more tests executed in the lower layers than in the upper layers.
There are many variations of the testing pyramid concept, and perhaps the most common is separating layers as follows
(from lowest layer to highest):

1. Unit testing: Test the smallest units of software. The unit can be a module, function, class, procedure, and so on.

2. Integration testing: Test how software components work together.

3. System testing: Test the functionality of the entire system.

4. Acceptance testing: Test if the system is ready for delivery.

Unit Testing

Unit testing should be conducted from the beginning of the development process. A unit under test (UUT) is a part of the software that can be tested in isolation from the rest of the system, which means that unit tests should not test interaction between multiple system components. A UUT can be something like a method, function, procedure, or even an entire class. Good unit tests should have the following characteristics:

 Reliable: The unit test should only fail if there is a bug in the UUT. Sometimes, this condition cannot be fulfilled. For example, when you write a unit test for a method of a class and you need to call the constructor, there is a chance that a bug in the constructor will make the test for the method fail.

 Isolated: The UUT might interact with other components, which might have bugs of their own. Tests should be designed to isolate the UUT from those components so that a bug in one of them does not cause the unit test to fail. This isolation is usually achieved using test doubles.

 Fast: Unit tests should cover a great number of test cases for each system unit, which results in a great number of tests. Unit tests are mostly run by developers who need quick feedback on whether the feature they are working on behaves properly. Running a huge number of tests can be time-consuming, so each unit test should execute as fast as possible to reduce the time the developer waits for feedback.

 Readable: Most systems are never truly finished and are constantly evolving. Features change, and so do the required tests. Maintaining tests becomes exponentially more difficult if great care is not given to their readability. One of the most recommended practices for writing unit tests is to follow the "Arrange, Act, Assert" principle:

o Arrange: Initialize variables and objects that are required for the test.

o Act: Run UUT.

o Assert: Check if the result matches what was expected.

Tested units usually have dependencies on other parts of the system. To isolate the UUT from those dependencies, you can use test doubles. A test double is an object that mimics some aspect or functionality of another object, usually an object on which the UUT depends. There are several types of test doubles (a brief example follows the list):

 Fake: An object that implements required interface methods but in a lightweight manner that cannot be used in
production.

 Stub: Implements interface methods, but responses for a given input are known.
 Mock: Used to verify interaction with a mocked component. After the execution of UUT, you can inspect if certain
methods of a mocked object have been called and how many times they have been called.

 Dummy: Object that is passed but is never actually used or called.

 Spy: Acts like stubs but can provide more information on how methods were called.
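As a brief illustration of these ideas, the following sketch uses Python's unittest.mock library. The send_notification() function and its messaging client are hypothetical, and the test follows the "Arrange, Act, Assert" structure while using a mock as the test double:

from unittest.mock import Mock

def send_notification(client, user, text):
    # Hypothetical unit under test: it depends on an external messaging client.
    if not text:
        return False
    client.post_message(user, text)
    return True

def test_send_notification_posts_message():
    client = Mock()                                      # Arrange: replace the dependency with a mock
    result = send_notification(client, "admin", "hi")    # Act: run the unit under test
    assert result is True                                # Assert: check the returned value
    client.post_message.assert_called_once_with("admin", "hi")  # Assert: verify the interaction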

Integration Testing

Testing each component individually can give you great confidence that for a given input, you will get an output that
matches the specification. This fact does not necessarily mean that a component can flawlessly work with other system
components. The consequence of isolating a UUT from its dependencies is that you must replace those dependencies with
test doubles, which requires abstracting some aspects of the required dependencies. After the unit testing is finished, you
need to validate that the component can interact properly with actual system components. Combining and testing
multiple components and their interaction is called integration testing. The component that is tested is called a component
under test (CUT).

When conducting integration testing, you can use one of two approaches:

1. Big bang approach: It requires all components to be finished, and all tests use only actual components.

2. Incremental approach: The component can be tested when it is developed. This approach requires test doubles to
replace some dependencies.

With the big bang approach, you get more reliable testing results. A disadvantage to this approach is that you need to wait
for all components to be finished before you can conduct integration testing. The incremental approach allows testing of a
component against other components when it is developed.

A common system structure would have core components that have no dependencies on other parts of the system and
are considered lower-level components. Some other system components would depend on these core components, other
components would depend on them, and so on. Components further down this dependency chain would be considered
higher-level components. When you are testing with the incremental approach, you can choose to go in one of three directions:

1. Top-down approach: From higher-level components to lower-level components

2. Bottom-up approach: From lower-level components to higher-level components

3. Sandwich approach: Combination of top-down and bottom-up approaches

In case you need to test your component against another component that is not yet developed, you use test doubles to
replace them. There are two types of test doubles used with integration testing:

1. Driver: When you are using the incremental bottom-up approach, it is expected that lower-level components are
already developed and tested, but higher-level components are still not developed. A driver replaces missing
higher-level components and provides the functionality of initiating and maintaining required interaction with the
CUT.

2. Stub: A stub is used in the top-down approach. The stub replaces the unfinished lower-level components and
provides basic functionality to enable required interaction for the CUT.
System Testing

The next step after integration testing is system testing. This layer of testing probably is the hardest part to understand;
there are many different expert views on what should be covered by this layer. The purpose of this testing layer is
validating that the system or product works as a whole. A lot of aspects of a system could not be tested at lower testing
layers and need to be tested before shipping the product to the client. Inputs that you will get for system testing include
available documentation and system requirements. Tests are executed in a testing environment, which differs from the production environment only in that it is restricted to developers and testers; end users do not have access to it.

The list of types of system tests that can be conducted is long and includes:

 Functionality testing: Tests system functions. The focus is on defining system functions and validating that for a
given input to the system, you get the desired output.
 Installation testing: Tests installation procedures and shows if there may be some problems with setting up the
system or product on a certain platform, operating system, or device.

 Usability testing: Tests if the system satisfies requirements for the targeted user base—for example, if the system
can be used by visual- or hearing-impaired users (mobile apps with haptic feedback).

 Security testing: Tests the system for possible security breaches.

 Performance testing: Tests system performance such as response times, average time to handle request, and
number of requests sent in a certain amount of time.

 Load testing: A subtype of performance testing. The goal is to validate if the system can handle the required load,
such as the number of users who are connected at the same time or the minimum number of requests in the
queue.

 Stress testing: A subtype of performance testing. The goal is to validate the stability of a system in extreme
conditions such as increased load above the maximum load that the system can handle.

 Regression testing: Checks that a new version of a system does not break functionality that worked in older versions.

 Storage testing: Tests cover usage of Solid State Drives (SSDs) or hard disks—for example, disk storage capacity.

 Configuration testing: A given system can have different running configurations. An example would be a cloud-based app that uses a configuration file to manage endpoints for the services it uses, which depend on the region and location of the server where the app is deployed. Testing the system against different configurations can help expose flaws in configuration handling.

 Compatibility testing: Checks if the system is compatible with different environments, operating systems, mobile
devices, and so on.

 Reliability testing: Tests whether the system gives the same output for the same given input. An example of a case where you
might get a different result with the same input is a system that delegates the execution of some task dynamically
to multiple service instances (microservices) to decrease the load on a single service. When the task is run multiple
times, it might be delegated to different services, each of which would return a different result, which would show
that this part of the system is not reliable and needs to be fixed.

 Recovery testing: Tests the system capability of recovery if there is a system failure.

 Procedure testing: Tests the defined system procedures—for example, the procedure for migrating a database.

The types of tests that you will use can depend on the product. For example, if you are developing a product that does not
store any data and does not interact with the file system, there is no need to do storage testing, because no storage
capability is required by your product.

Acceptance Testing

Once developers are finished with developing and testing the product, it is ready for delivery. Customers can push the
delivered product directly to the production environment, or they can do some tests of their own. This phase of testing
belongs to acceptance testing, and it serves the purpose of validating that the delivered product matches the
requirements of the client. Acceptance testing also can be executed by developers or by the end user to validate that the
product matches the needs of users. If the product does not satisfy some requirements, it can be returned to developers
to fix or improve the product.
There are a few types of acceptance testing:

 Alpha testing: Done by developers in a development environment. Developers assume the user role and test the
usability of the product.

 Beta testing: A selected group of users gains access to the product before its release. They use the product and
provide feedback, which helps to find missed bugs and increase product quality.

 Contract testing: Validates that the product satisfies requirements that are requested by a given contract.

 Regulation testing: Validates that the product satisfies legal and government regulations.

 Operational testing: Validates that the product is ready for an operational environment. This testing includes
validating workflows for client or business operations.

Verifying Code Behavior with Unit Tests 

The goal of software development is to bring a requirement or an idea to life by using various technologies and
programming languages. The goal of software testing is to make sure that the implementation of that idea matches the original intention. Software testing is an integral part of any modern software solution. There are many different types of
testing strategies that can be used on a system under test (SUT). The smallest piece of software that you can write to
determine whether an application behaves as expected is called a unit test.

Unit tests are automated tests that individually focus on a small portion of the application code that you want to test. You
typically define a bunch of unit tests to cover a higher percentage of functionality of the SUT, and they can be executed
whenever you need to check the validity of the application code. Because a single unit test covers a small portion (unit) of
the code in isolation, you can quickly see which parts of the system behave unexpectedly. The size of a unit under test is
not strictly defined, and neither is the definition of what a unit is. The team should decide what makes sense to be a unit
for the purpose of understanding the system and testing it. Usually, a unit is a single function, method, class, or any
statement inside these constructs that needs verification, but a unit also can span a tightly coupled collection of any of
these constructs, as the next figure suggests.

A unit test in practice is a piece of code, typically a method or function, that invokes any part of the application code that
you want to test. Inside a unit test, you define for which parts of the application code you want to check the correctness,
and what is the expected response from the application code when doing so. When a unit test runs, it either passes, or it
fails if the response is not as expected.

Unit tests should be automated, and the intentions of the test should be easy to understand. When a test fails, it should be
a clear indication of what needs to be done in the application code to fix the issue. A set of unit tests should run quickly;
this is possible because these kinds of tests are usually isolated from the dependencies of the SUT, so their actions do not
affect any other systems and their result is consistent and easily controlled. When tests use the real database with proper
data sets, or the real file system and other system properties, they step into the field of integration tests, which are slower
and mostly used to ensure that the changes can be integrated into the real environment without any problems.

Individual unit tests have a small scope, and they can run frequently when developing application code. Whenever your
code is updated with some changes, you should be able to run the tests immediately to find out whether you introduced a
bug into the features that the tests are covering at the moment. Unit tests also are a common part of continuous
integration pipelines where they determine if a build is ready for deployment. Pipelines enable test automation by running
the tests whenever there is a new change that is pushed to the repository. You do not have to run all unit tests because of
a small change in the code. You should be able to run a subset of unit tests that cover that part of the application code that
you worked on. How fast your test suite should finish executing the tests does not really matter, as long as it does not
discourage you from running the tests frequently.

So what kind of errors do you typically catch with unit tests? Syntax errors that can be considered unintentional misuses of
the specific programming language structure and rules are mostly caught by the integrated development environment
(IDE) or the compiler and interpreter before you even run your tests. Semantic (logical) errors cannot be determined at
compile time; they must be evaluated at runtime by using different testing practices, and unit testing is widely used for
that. As an example, observe the following Python code.
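The original listing is not reproduced here; the following minimal sketch reconstructs the function from the description that follows:

def is_greater(a, b):
    # Returns True unless the first number is smaller than the second.
    # Flaw described below: two equal numbers also return True.
    if a < b:
        return False
    return True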
The function "is_greater(a, b)" expects two numbers and returns a Boolean value of True if the first number is greater than
the second. At first glance, it is a simple function that should not produce a faulty result, but it has its flaws. If you pass to
the function two identical numbers, it will return a Boolean True, which is not what you expect. Because the function only
checks whether the first number is smaller, and if not, falls back to the default assumption that the number must be
greater than, it produces a wrong result. Therefore, the function is semantically incorrect. Not only that, the function
allows you to pass into it anything, whether it is a number, string, or any kind of object that you might not be able to
compare. For example, if you pass an integer number and a string of characters, this function will produce a TypeError.

One way of unit testing this small program is to call the specific function and manually inspect the output that it generates.
This approach could still be considered unit testing, although it is quite a primitive and inefficient way of testing the code.

A better way of testing the code from the example is to write another program, a test method, that enables test
automation, easy additions of new tests, and a clear report for each test. Examine the next implementation of a unit test
written in a test.py module.
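The test.py listing is not shown in this text; the following sketch reconstructs it from the description below. The publish_result decorator and the test_false_when_equal test name come from the text, while the report format and the import location of is_greater are assumptions:

# test.py -- a minimal custom unit test module (sketch)
from tools import is_greater   # assumed location of the function under test

def publish_result(test):
    # Decorator that runs a test function and prints a simple report.
    def wrapper():
        actual, expected = test()
        status = 'PASSED' if actual == expected else 'FAILED'
        print(f'{test.__name__}: {status} (got {actual}, expected {expected})')
    return wrapper

@publish_result
def test_false_when_equal():
    # First element: a call to the function under test; second element: the expected result.
    return [is_greater(5, 5), False]

test_false_when_equal()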
In the example, you can see a custom unit test implementation. The "publish_result(test)" function is a Python decorator, which prints a report for each individual unit test that is defined in this test program. When you want to write a unit test, you define a new function and annotate it with the decorator as shown in the example. The test function stores its result in a list whose first element is a call to the function that you want to test and whose second element is the expected result of that call. The result is examined and printed in a nicely formatted way to the user output, as defined in the "publish_result" decorator function. When you have formally defined unit tests, you can use them in test automation, running them in scripts or in continuous integration pipelines.
If you know the expected behavior of a function, method, or class, you could also write the unit tests first and then proceed to the actual implementation of the program. When the implementation is finished, the previously written unit tests must pass. This practice also is known as test-driven development (TDD).

Note

Decorators in Python are a simple way of wrapping functions and modifying their behavior at run time. In the previous example, instead of using the decorator with the @ symbol above a test function, you could achieve the same result by calling the publish_result function with the specific test function as the argument—for example, test_execution = publish_result(test_false_when_equal).

The approach of building your own custom functions for unit testing might be sufficient for simple use cases, but when you
are working on a more complex application, your tests also will require a little bit more flexibility and functionality.

Unit Testing Frameworks

As seen in the previous example of implementing your own unit test framework, you can quickly spend more time writing the code that tests your application than writing the actual application code. Usually, for unit testing, you will use an existing framework so that you do not have to write any code of your own to enable testing features. You will still need to learn how to use it, but
once you do that, the only task you will have to do is to write tests that are tailored for your specific use case using the
framework syntax.

The original paper on testing frameworks, Simple Smalltalk Testing: With Patterns, written by Kent Beck, is the basis for
most of the practices used in modern unit testing frameworks. Some of the important concepts that were introduced are
support for test automation, shared setup and teardown code for tests, aggregation of tests into collections, and
independence of the tests from the reporting framework.

These concepts can be defined with the following terminology:

 Test fixture: Create a common state of the environment that is needed by the tests and return to the original state
(cleanup) afterward—for example, starting and stopping a server process, defining a temporary database, and so
on.

 Test case: The elemental component of a test; an individual unit of testing. It is used to check for a specific response to a set of inputs.

 Test suite: An aggregation of test cases and other test suites that should be executed together using the same
fixture. The order of tests inside a suite does not matter.

 Test runner: A component for orchestrating the execution of tests and providing a graphical or textual output of
the testing results.

The first testing framework based on these concepts was the SUnit developed for the Smalltalk programming language.
JUnit is the Java programming language equivalent that took inspiration from the Smalltalk implementation. There are
many unit test frameworks in various programming languages that base their work on these concepts. A collective name
for these types of frameworks that are following the same basic component architecture is an xUnit framework.

Common to this testing framework is the concept of assertions. An assertion is a function or a macro that verifies the
behavior or state of the UUT. Typically, an assertion expresses a logical condition that is true in correctly running SUT. If
the logical condition is not satisfied, the execution of tests throws an exception, stopping the current test progression.
There are several unit testing frameworks written for the Python programming language. You will go through examples of the two most commonly used frameworks—unittest and pytest.

The unittest framework is included in the Python standard library, meaning that you do not need to install it separately
with the pip package manager. Observe the following code of a basic unit test in the Python unittest framework.
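The listing itself is not shown here; the following sketch illustrates the structure described below. The TestTools class name and the setUp()/tearDown() fixtures come from the text, while the tools module, its load_classes() helper, and the specific assertions are assumptions used for illustration:

import unittest
import tools   # application module under test (assumed name)

class TestTools(unittest.TestCase):

    def setUp(self):
        # Fixture: runs before each test method.
        self.list_of_classes = tools.load_classes()   # hypothetical helper

    def tearDown(self):
        # Fixture: runs after each test method.
        self.list_of_classes = None

    def test_class_exists(self):
        self.assertIn('CS101', self.list_of_classes)

    def test_false_when_equal(self):
        self.assertFalse(tools.is_greater(5, 5))

if __name__ == '__main__':
    unittest.main()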

The testing module imports the unittest library. The test cases in the unittest framework are defined as instances of the
TestCase class from the unittest library. The TestCase is a base class. All the specifics to your test implementation are
defined in your concrete subclasses; in this example, it will be in the TestTools class. The instances of TestCase class
provide a couple of methods for running and checking the conditions of the tests. The two fixture methods that you can
use for setting up and tearing down the environment are the setUp()  and tearDown() methods. These two methods are
run before and after each of the test methods. In this example, the setup and teardown methods would be called two
times because there are two test methods. Instead of setUp() and tearDown(), you can use setUpClass() and the corresponding tearDownClass() method, which are called only once per TestCase class. These fixture methods can be used for setting up a
mocked database, creating new objects from the application code on which the tests are performed, and similar tasks. The
fixture methods are optional.

The test methods, which use assertions to validate the logic of a UUT, must be prefixed with the test string so that they can be picked up automatically by the test runner. The methods in the TestCase subclass can use several different assertion methods, which simplify the process of defining test logic. In the background, each assertion checks a native Python expression, as you can see in the next list.

The assertion methods are:

 assertEqual(a, b): Checks  a == b

 assertNotEqual(a, b): Checks a != b

 assertTrue(x): Checks bool(x) is True

 assertFalse(x): Checks bool(x) is False

 assertIs(a, b): Checks a is b

 assertIsNot(a, b): Checks a is not b

 assertIsNone(x): Checks x is None

 assertIsNotNone(x): Checks x is not None

 assertIn(a, b): Checks a in b
 assertNotIn(a, b): Checks a not in b

 assertIsInstance(a, b): Checks isinstance(a, b)

 assertNotIsInstance(a, b): Checks not isinstance(a, b)

A test fails if it does not satisfy the assertion prediction; for example, you may check whether a specific object exists in a
list by using assertIn('CS101', self.list_of_classes). This test passes if the 'CS101' string exists in the provided list and fails if
it does not. You can also test the opposite: assertNotIn('CS101', self.list_of_classes). In this case, the test fails if the string
'CS101' exists in the list and succeeds if it does not.

The framework also enables you to skip tests if they do not meet some system requirements. For example, using a
decorator @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows")  on top of a test method will only
run a test if it is executed on a Windows platform. If you want to skip a specific test in a test case based on the module
version, you can also use the decorator in this fashion: @unittest.skipIf(tools.__version__ < 4.2, "not executing in this
version"). There are plenty of other options to tailor your test suite to your demands. More of the examples can be found
in Python documentation (https://docs.python.org/3/library/unittest.html).

To run the test module, you can simply execute it as a script. Observe the following example of test execution and a
generated report.
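The exact report depends on your tests; assuming the TestTools sketch above is saved as test_tools.py, a run with one failing test might look roughly like this:

student@student-workstation:~/working_directory$ python test_tools.py
.F
======================================================================
FAIL: test_false_when_equal (__main__.TestTools)
----------------------------------------------------------------------
Traceback (most recent call last):
  <...traceback omitted...>
AssertionError: True is not false
----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)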

The first line after calling the module shows a brief summary of the executed tests. The dots (..) represent successful test runs, and the "F" letter marks a failed test. After that, the report shows which of the tests failed and why. You can also run unittest in deployment pipelines: if a certain test fails, the pipeline fails as well, and the code changes are rejected.

The framework also supports simple test discovery. If you have test cases in multiple Python modules located in a folder,
you can navigate to that folder and run the command python -m unittest. By default, the command uses a pattern
"test*.py" to find all the unittest compatible files. You can change this pattern by using the --pattern (-p) flag. The start
directory can also be changed using the --start-directory(-s) flag.

The pytest framework is a more mature third-party framework and uses a lighter-weight syntax for writing tests. It can be
installed on your system using the command pip install -U pytest. With pytest, you can build simple and expressive tests
with no boilerplate code required. The previous example could be translated to the pytest framework as follows:
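A sketch of the translated test module, based on the description below; the tools_lib fixture name comes from the text, while the tools module and its attributes are assumptions:

import pytest
import tools   # application module under test (assumed name)

@pytest.fixture
def tools_lib():
    # Fixture: prepare and return the object that the tests need.
    return tools.Tools()   # hypothetical setup

def test_class_exists(tools_lib):
    # The fixture is injected as an argument (dependency injection).
    assert 'CS101' in tools_lib.list_of_classes

def test_false_when_equal(tools_lib):
    assert tools_lib.is_greater(5, 5) is False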
Notice that there is no boilerplate like in the unittest framework. You do not extend any TestCase class. If you want, you can still group your tests in classes, but it is not necessary. The pytest test runner picks up functions and methods whose names start with "test", located in files whose names match test_*.py or *_test.py.

It also does not use specific setup and teardown methods. Any method can be used for setting up and tearing down a
testing environment by annotating it with a decorator @pytest.fixture. In the example, the  tools_lib() function is used as a
fixture. Whenever you want to use any kind of setup that this function provides, you inject the function as an argument to
your test method, and you get a prepared object that can be used in your test methods. This is an example of the so-called
dependency injection practice. In this pytest example, teardown is not included. To implement a simple setup and
teardown using the same approach, you may code your fixture method in this way:

By using the yield keyword instead of return, you pass the yielded object (generator) to the test method, and when the
test method finishes with the work, the process continues at the method that yielded the object. After the yield keyword,
you can write your teardown logic, which will be executed right after the test ends. You are also able to write standard
xUnit fixture methods for setup and teardown. More examples can be found in the pytest documentation
(https://docs.pytest.org/en/latest/fixture.html).

Another difference is in how assertions are done. The classic assertion methods, such as assertTrue, are replaced by a single assert keyword followed by a Python logical expression. The assert statement is a standard Python statement that also exists in many other languages. The statement checks the condition and raises an error if the condition is False. Assertions can be customized to show a different message or to raise a specific error when a condition fails. More examples can be found in the pytest documentation (https://docs.pytest.org/en/latest/assert.html).

Similar to the unittest framework, pytest can discover the tests in a folder when the modules are prefixed with the test string. A specific test module can be executed by providing its name to the pytest command—for example, pytest test_file.py. Observe the output that the pytest framework generates when you run the tests.
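The exact formatting, timings, and line numbers vary; assuming the pytest sketch above is saved as test_tools.py, the output might look roughly like this:

student@student-workstation:~/working_directory$ pytest test_tools.py
========================= test session starts =========================
collected 2 items

test_tools.py .F                                                 [100%]

============================== FAILURES ===============================
________________________ test_false_when_equal ________________________

    def test_false_when_equal(tools_lib):
>       assert tools_lib.is_greater(5, 5) is False
E       assert True is False

test_tools.py:14: AssertionError
===================== 1 failed, 1 passed in 0.05s =====================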
The pytest framework color-codes the result output and shows both the value that the unit under test returned and the value that the test expected in order to pass.

Practicing Unit Testing

To write a valuable unit test, you need to know the requirements and use cases of the SUT. This way, you can base your tests on how the application is used so that the tests are beneficial to you. Generally, there are two scenarios that you should consider when writing tests—the happy path scenario, also known as the sunny day scenario, and the error path scenario, also known as the rainy day scenario.

The happy path scenario is a default use case where a test uses known input and executes successfully without any
exceptions; the goal of the request is achieved. For example, a user enters a valid email address and a password for logging
in to a website; both are in the proper format and are accepted by the system. You should write happy path tests based on
the different use cases of the normal usage of the application that you are testing. As an example of a happy path test,
observe the following simple unit test.
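A sketch of such a happy path test, assuming a hypothetical login() function that returns the string "success" for valid credentials:

def test_login_happy_path():
    # Known-good credentials; the actual credential check can be mocked.
    result = login('user@example.com', 'ValidPassword1')
    assert result == 'success'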

The test uses expected values for the username and password; the actual check of whether the credentials are correct can be mocked instead of using a proper staging or production database. This request is expected to succeed, so you compare the output of the login call with the keyword success, because this is the string that the application returns on successful login attempts.

Another use case is when a user enters an email address and a password, but because of a wrong set of characters used in
the input, the application does not accept the request. This is the so-called error path scenario, where the input is not in
the expected form. You should also cover such boundary cases in your tests, because doing so can help you identify code
smells and bugs early in the development of your application. You do not want your program to crash because someone
sent an empty string for a username, so it is better to test for such cases beforehand and fix any bugs you find in your
application code. Observe the next example of an error path unit test.
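A matching error path sketch, reusing the hypothetical login() helper from the previous example:

def test_login_empty_username():
    # Malformed input: an empty username must not crash the application
    assert login("", "CorrectPassword1") == "failure"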

Another process you can use to improve your testing objectives is code coverage. Code coverage is a measurement of how
many statements in your application code are being tested by the test cases that you run. It can help you identify parts of
your application code that you still have to test. The code coverage tools provide a percentage of how many statements
are covered. Look at the next example of an application login code.
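A hedged reconstruction of the kind of login code this discussion refers to; the branches and helper functions are assumptions used only to illustrate partial coverage:

def check_credentials(username, password):
    return username == "user@example.com" and password == "CorrectPassword1"

def account_is_locked(username):
    return username.endswith("@locked.example")

def login(username, password):
    if not username or not password:           # exercised by the error path test
        return "failure"
    if account_is_locked(username):            # branch not covered by any test yet
        return "account locked"
    if check_credentials(username, password):  # exercised by the happy path test
        return "success"
    return "failure"                           # branch not covered by any test yet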
From the previous examples of the happy path and error path test procedures, those two tests cover roughly 50
percent of the code inside the login method. The other parts are still not covered by any
test cases, so a code coverage tool would report that. Code coverage tools usually check the entire application code
against your unit tests and report how much of the code is being tested. You should strive to achieve as high a percentage
of code coverage as possible, but keep in mind that a very high code coverage does not mean that your tests are
meaningful and are showing you the real state of the application.

If there is a good amount of test coverage, with meaningful tests, you can have confidence that a code change did not
affect other parts of the application. Code coverage can be calculated by dividing the number of lines of code executed
by the tests by the total number of lines of code. Coverage tools also use more sophisticated methods that measure how
many statements and conditions, functions and function calls, lines of code, and so on are tested. In Python, the coverage
module can be used for measuring the code coverage of your programs. It can be installed using the command pip install coverage.

To run the coverage tool on your test_login module, which contains tests for the login module, you first need to execute
the command coverage run --source login -m pytest test_login.py. After the command finishes, you can see the report by
issuing the command coverage report. The report looks like this:
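An illustrative report of the form the coverage tool prints; the statement counts here are assumptions chosen only to match the 73 percent mentioned below:

Name       Stmts   Miss  Cover
------------------------------
login.py      15      4    73%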

You can see that your current tests for the login module cover 73 percent of the statements. Besides statement coverage,
branch coverage can also be calculated. The difference between statement and branch coverage is that for statement
coverage, you only need to execute a statement once, no matter whether it is a decision statement or not. For example, with a
statement if a: # do something, and a test that exercises it only with a set to True, you achieve 100 percent statement coverage,
but only 50 percent branch coverage, because you did not test the other possibility, where a is False.

This is how you run the branch coverage:
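Assuming the same module names as in the earlier example, the commands would be:

coverage run --branch --source login -m pytest test_login.py
coverage report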

The report now includes the branch coverage calculation for the login application module, and it is slightly lower than
with statement-only coverage.

Unit tests can be used for test-driven development (TDD). TDD promotes the idea that you write the tests before you write your application logic.
With TDD, test cases serve as a contract with the application that specifies how a certain unit under test (function,
method, class, and so on) should behave. Observe the following process of TDD.

When you develop with TDD, each new feature begins with writing a test (1). The test is written based on the application
features and requirements. The next step is to run the test and make sure that it fails (2). A failed test indicates that the
required behavior does not exist and needs to be implemented. The new test should fail for the expected reason. After the
test fails, you should write some basic functionality of the application that makes the test succeed (3). The code does not
have to be perfect at this stage, because it is expected that the refactoring will be done in the later stage.

After the test succeeds, you should run the entire test suite (4) to make sure that the new requirement did not break any
existing functionality. If the test suite does fail, you should adjust your code until the tests pass. When the codebase grows
during TDD, it should be cleaned up regularly. The refactoring process (5) adjusts the code so that it follows proper
conventions, keeps up with the modularity of the program, and prevents the duplication of code. Design patterns are used
to organize your code and ensure that proper techniques are used for implementing the functionalities. After the
refactoring process is done, the test suite is executed again to make sure that you did not break functionality by
refactoring the code.

The cycle is repeated with tests to implement new use cases (6) for the application.
Construct a Python Unit Test 

Unit tests individually focus on a small portion of the application code, testing a specific piece of functionality of your
program. You received skeleton code for a program that stores information about students, the subjects they are
enrolled in, and their grades for each subject. Your goal is to use the TDD approach to implement the application use cases. You
also received skeleton test files, where you will first implement the tests, which create a contract with the application
code. The customer decided that they want to use the pytest framework for this project. The application code must
implement the logic based on the tests that are written beforehand. You will define tests for both happy path and error
path scenarios and implement the application code based on them. At the end, you will run the tests again to make sure
that they pass. You will also check the code coverage, which will tell you how many lines of code your tests are covering.

Write Happy Path Tests

First, you will write the happy path scenario where the tests will use known inputs and execute successfully without any
exceptions. This will be the basic use case for the student application that will set the normal behavior of the program.
Before you start writing the tests, you will create a Python virtual environment, install the dependencies, and examine the
application code that you are dealing with. At the end, you will run the tests, which should fail. Then, you will proceed to
the student module code, where you will implement the requirements set by the unit tests.

Step 1

Open a terminal window and navigate to ~/working_directory.

Answer

student@student-workstation:~$ cd working_directory

Step 2

Install the pytest library inside the Python virtual environment, using pipenv.

Answer

student@student-workstation:~/working_directory$ pipenv install pytest

Installing pytest…

Adding pytest to Pipfile's [packages]…

Installation Succeeded

<… output omitted …>

Step 3

Install the coverage library inside the Python virtual environment, using pipenv.

Answer

student@student-workstation:~/working_directory$ pipenv install coverage

Installing coverage…

Adding coverage to Pipfile's [packages]…

Installation Succeeded

Installing dependencies from Pipfile.lock (dc663e)…

▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 12/12 — 00:00:04

To activate this project's virtualenv, run pipenv shell.


Alternatively, run a command inside the virtualenv with pipenv run.

Step 4

Activate the virtual environment, using pipenv.

Answer

student@student-workstation:~/working_directory$ pipenv shell

Step 5

From the desktop, open Visual Studio Code and open the working_directory.

Answer

Step 6

Click the student.py module in the left sidebar, and examine the current state of the code.

Answer

The student module first imports the subject module, which has the simple implementation of how a subject behaves. The
current module continues to implement the Student class with a constructor, setting the properties when a Student object
is created. The other methods inside the class are left empty; you will implement them later.

import subject

class Student:

    def __init__(self, name):
        self.name = name
        self.email = None
        self.subjects = []
        self.grades = []

    def add_subject(self, subject):
        pass

    def set_grade(self, subject, grade):
        pass

    def get_grades_for_subject(self, subject):
        pass

Step 7
From the left sidebar, open the test_grades_happypath module.

Answer

This testing module already implements the fixtures needed by the tests. You will need to write the test cases inside the
class TestHappyPath.

import pytest


@pytest.fixture
def student():
    from student import Student
    stud = Student('Luka Zauber')
    yield stud
    del stud


@pytest.fixture
def subject():
    from subject import Subject
    s = Subject('Unit Testing 101')
    yield s
    del s


@pytest.mark.usefixtures('student', 'subject')
class TestHappyPath:
    pass

Step 8

Write the first test inside the test_grades_happypath module, for adding a subject to the student. When a subject is
successfully added, the method should return Boolean True. Use the provided fixtures.

Answer

class TestHappyPath:

    def test_add_subject_to_student(self, student, subject):
        assert student.add_subject(subject) is True

Step 9

Open the terminal and navigate to /home/student/working_directory. Use the command pytest test_grades_happypath.py to run the tests for the specific module.

Answer

The test fails as expected. You will implement the functionality that will make the test pass later.
Step 10

Write a test inside the test_grades_happypath module, for adding a grade to the student subject. When a grade is
successfully added to the subject, the method should return Boolean True. Use the provided fixtures.

Answer

def test_add_grade_to_subject(self, student, subject):
    student.add_subject(subject)
    assert student.set_grade(subject, 8) is True

Step 11

Write a test inside the test_grades_happypath module, for retrieving the grades of a subject. When getting the grades of a
subject, the method returns a list of all grades. Use the provided fixtures.

Answer

def test_get_subject_grade(self, student, subject):
    student.add_subject(subject)
    student.set_grade(subject, 8)
    student.set_grade(subject, 9)
    assert student.get_grades_for_subject(subject) == [8, 9]

Step 12

Examine the test_grades_happypath file.

Answer

The final test file should look like this:

import pytest


@pytest.fixture
def student():
    from student import Student
    stud = Student('Luka Zauber')
    yield stud
    del stud


@pytest.fixture
def subject():
    from subject import Subject
    s = Subject('Unit Testing 101')
    yield s
    del s


@pytest.mark.usefixtures('student', 'subject')
class TestHappyPath:

    def test_add_subject_to_student(self, student, subject):
        assert student.add_subject(subject) is True

    def test_add_grade_to_subject(self, student, subject):
        student.add_subject(subject)
        assert student.set_grade(subject, 8) is True

    def test_get_subject_grade(self, student, subject):
        student.add_subject(subject)
        student.set_grade(subject, 8)
        student.set_grade(subject, 9)
        assert student.get_grades_for_subject(subject) == [8, 9]

Step 13

Finally, run the pytest on the test_grades_happypath testing module inside the ~/working_directory folder. Use the pytest
test_grades_happypath.py command.

Answer

student@student-workstation:~/working_directory$ pytest test_grades_happypath.py

<… output omitted …>

> assert student.get_grades_for_subject(subject) == [8, 9]

E assert None == [8, 9]

E + where None = <bound method Student.get_grades_for_subject of <student.Student object at 0x7f47ba36f9e8>>(<subject.Subject object at 0x7f47ba36f518>)

E + where <bound method Student.get_grades_for_subject of <student.Student object at 0x7f47ba36f9e8>> = <student.Student object at 0x7f47ba36f9e8>.get_grades_for_subject

test_grades_happypath.py:35: AssertionError

======================= 3 failed in 0.04s =======================

Make Tests Pass

You will implement the application code inside the student module, based on the previously written happy path unit tests.
Your goal is to make the tests pass.

Step 14

Open the student.py module from the ~/working_directory folder in Visual Studio Code, and implement
the add_subject() method based on the previously written unit test.

Answer

def add_subject(self, subject):
    self.subjects.append(subject)
    return True

Step 15

Implement the set_grade() method based on the previously written unit test.

Answer

def set_grade(self, subject, grade):
    self.grades.append((subject, grade))
    return True

Step 16

Implement the get_grades_for_subject() method based on the previously written unit test.

Answer

def get_grades_for_subject(self, subject):
    grades = []
    for grade in self.grades:
        grades.append(grade[1])
    return grades

Step 17

Run the happy path tests again and examine if the tests now succeed. Inside the ~/working_directory folder, run
the pytest test_grades_happypath.py command.

Answer

student@student-workstation:~/working_directory$ pytest test_grades_happypath.py

===================== test session starts =====================

platform linux -- Python 3.7.3, pytest-5.3.0, py-1.8.0, pluggy-0.13.1

rootdir: /home/student/working_directory

collected 3 items

test_grades_happypath.py ... [100%]

===================== 3 passed in 0.01s =====================

Write Error Path Tests

Now, you will write more detailed requirements. The error path tests will define more edge cases for user input to the
methods in the student module. You should check for less expected input now, and you will catch the errors later, when
you implement the application logic.

Step 18

In Visual Studio Code, open the test_grades_errorpath.py module from the ~/working_directory folder and examine the
current content.

Answer
This testing module includes the same fixtures as the previous one, with the addition of another fixture, subjects, which
returns multiple Subject objects. There will be a couple of tests for each method in the student module, so the tests will
be grouped into classes for better readability.

import pytest


@pytest.fixture
def student():
    from student import Student
    stud = Student('Luka Zauber')
    yield stud
    del stud


@pytest.fixture
def subject():
    from subject import Subject
    s = Subject('Unit Testing 101')
    yield s
    del s


@pytest.fixture
def subjects():
    from subject import Subject
    s = [Subject('Unit Testing 101'), Subject('CS500')]
    yield s
    del s


@pytest.mark.usefixtures('student', 'subject')
class TestErrorPathSubject:
    pass


@pytest.mark.usefixtures('student', 'subject')
class TestErrorPathAddGrade:
    pass


@pytest.mark.usefixtures('student', 'subject', 'subjects')
class TestErrorPathGetGrade:
    pass

Step 19

Implement the test in the TestErrorPathSubject class, for testing the Boolean False response when the subject passed to
the student object is empty.

Answer

def test_add_empty_subject_to_student(self, student, subject):
    assert student.add_subject(None) is False

Step 20

Implement the test in the TestErrorPathSubject class, for testing the Boolean False response when the subject passed to
the student object already exists in the list.

Answer

def test_add_existing_subject_to_student(self, student, subject):
    student.add_subject(subject)
    assert student.add_subject(subject) is False

Step 21

Implement the test in the TestErrorPathAddGrade class, for testing the ValueError response when the grade is higher than
the limit (10).

Answer

def test_add_higher_grade_to_subject(self, student, subject):
    student.add_subject(subject)
    with pytest.raises(ValueError):
        student.set_grade(subject, 11)

Step 22

Implement the test in the TestErrorPathAddGrade class, for testing the ValueError response when the grade is lower than
the limit (1).

Answer

def test_add_lower_grade_to_subject(self, student, subject):
    student.add_subject(subject)
    with pytest.raises(ValueError):
        student.set_grade(subject, -1)

Step 23

Implement the test in the TestErrorPathAddGrade class, for testing the ValueError response when the grade is the same as
the lower limit. This is an edge case scenario, which is special because Python treats 0 as Boolean False. It is good practice
to check edge cases such as this one when testing your applications.

Answer

def test_add_zero_grade_to_subject(self, student, subject):
    student.add_subject(subject)
    with pytest.raises(ValueError):
        student.set_grade(subject, 0)

Step 24

Implement the test in the TestErrorPathGetGrade class, for testing the Boolean False response when the subject passed to
the student object is empty.

Answer

def test_get_none_subject_grade(self, student, subject):
    student.add_subject(subject)
    student.set_grade(subject, 8)
    assert student.get_grades_for_subject(None) is False

Step 25

Implement the test in the TestErrorPathGetGrade class, for testing the emptiness of the list of grades when the existing
subject does not hold any grades yet but there are other subjects that have them.

Answer

def test_get_missing_subject_grade(self, student, subjects):
    student.add_subject(subjects[0])
    student.add_subject(subjects[1])
    student.set_grade(subjects[0], 8)
    assert student.get_grades_for_subject(subjects[1]) == []

Step 26

Run the error path tests and confirm that all tests fail. Inside the ~/working_directory folder, run
the pytest test_grades_errorpath.py command.

Answer

student@student-workstation:~/working_directory$ pytest test_grades_errorpath.py

<… output omitted …>

> assert student.get_grades_for_subject(subjects[1]) == []

E assert [8] == []

E Left contains one more item: 8

E Use -v to get the full diff

test_grades_errorpath.py:70: AssertionError

====================7 failed in 0.07s ====================

Refine Code to Catch Errors

After the detailed error path tests are defined, it is time to implement the required behavior in the student module. At the end, you
can check the code coverage to inspect whether your tests cover most of the application code.

Step 27
Open the student.py module from the ~/working_directory folder in Visual Studio Code, and implement
the add_subject() method based on the two unit tests written in the TestErrorPathSubject class of the
test_grades_errorpath module. The subject argument should be validated for its correctness, and duplication should be
avoided by returning Boolean False if the subject already exists.

Answer

def add_subject(self, subject):
    if subject:
        if subject not in self.subjects:
            self.subjects.append(subject)
            return True
        else:
            return False
    else:
        return False

Step 28

Implement the set_grade() method based on the unit tests written in the TestErrorPathAddGrade class. The subject and
grade arguments should be validated for existence. There is a special case with the grade parameter; per the tests, sending
grade 0 must raise a ValueError instead of being treated as Boolean False, which is how Python interprets 0 by default. The tests also
define the upper and lower boundaries for the grade, so be sure to implement them.

Answer

def set_grade(self, subject, grade):
    if (subject and grade) or (subject and grade == 0):
        if subject in self.subjects:
            if grade < 1 or grade > 10:
                raise ValueError('grade out of bound', grade)
            else:
                self.grades.append((subject, grade))
                return True
        else:
            raise ValueError('no subject or grade', subject)
    else:
        return False

Step 29

Implement the get_grades_for_subject() method based on the unit tests written in the TestErrorPathGetGrade class. The
subject argument should be checked for validity, and the method should return only the grades from the requested
subject, as the tests specify.

Answer
def get_grades_for_subject(self, subject):
    if subject:
        grades = []
        for grade in self.grades:
            if grade[0] is subject:
                grades.append(grade[1])
    else:
        return False
    return grades

Step 30

Run the error path tests again and confirm that all tests succeed now. Inside the ~/working_directory folder, run
the pytest test_grades_errorpath.py command

Answer

student@student-workstation:~/working_directory$ pytest test_grades_errorpath.py

==========================test session starts ==========================

platform linux -- Python 3.7.3, pytest-5.3.0, py-1.8.0, pluggy-0.13.1

rootdir: /home/student/working_directory

collected 7 items

test_grades_errorpath.py ....... [100%]

==========================7 passed in 0.02s ==========================

Step 31

All the tests are now passing, meaning that the student module implements the requirements from the test successfully.
Run code coverage for the line statements in the student module to inspect whether your tests covered most of the
application code. Inside the ~/working_directory folder, run the coverage run --source student -m pytest
test_grades_errorpath.py command.

Answer

student@student-workstation:~/working_directory$ coverage run --source student -m pytest test_grades_errorpath.py

==========================test session starts ==========================

platform linux -- Python 3.7.3, pytest-5.3.0, py-1.8.0, pluggy-0.13.1

rootdir: /home/student/working_directory

collected 7 items

test_grades_errorpath.py ....... [100%]

==========================7 passed in 0.03s ==========================

Step 32
Run the coverage report command inside the ~/working_directory directory to inspect the results of the code coverage
execution from the previous step.

Answer

student@student-workstation:~/working_directory$ coverage report

Name Stmts Miss Cover

--------------------------------

student.py 31 3 90%

There are 31 statements in the student module, and only three were not covered by the tests, resulting in 90 percent code
coverage. This is a good result, and you should strive for high code coverage, but also keep in mind that unit tests
might cover the code statements without testing the code meaningfully. The --branch flag in the coverage
run command would give you different results, because it also checks the coverage of branches in the conditional
statements.

Dockerfile Composition 

A Docker image provides everything needed for the application in the container to run. The Docker image is defined by a
set of instructions placed in a file, called Dockerfile, that Docker uses to build the image. When Docker is building the
image from the Dockerfile, it is executing the instructions in order.

The directory where the Dockerfile is located is called the build context and is used by the docker build command when
building the Docker image. All the files and folders in the context can be used during this process for copying or executing
by the instructions in the Dockerfile. The context can also be specified explicitly as a PATH or URL argument to the docker build command.

The instructions in the Dockerfile are not case-sensitive, but convention dictates that they be uppercase for
clearer readability. A Dockerfile must begin with the FROM instruction, which specifies the parent image; the one exception is
the ARG instruction, which can be placed before FROM when arguments are used in the FROM instruction.

Lines that start with the "#" sign are treated as comments.

Process of creating a container:

1. Write the Dockerfile.

2. Add files to the build's context.

3. Build the image using the docker build command.

4. Start the container with the new image.

Dockerfile properties:
 Convention dictates the instructions to be uppercase.

 Starts with the instruction FROM.

 Lines starting with "#" are comments.

The basic instructions used in Docker files are:

 FROM: Specifies the parent (base) image to be used for the following instructions in the Dockerfile. The Dockerfile
might have multiple FROM instructions to build multiple images. Usage:

1. The image built with the following FROM instruction will have the latest Ubuntu release as the base
image:

FROM ubuntu:latest

 COPY: Used to copy files or directories from the build's context into the container. The destination can be an
absolute or a relative path in the container file system. Relative paths are relative to the working directory. Usage:

1. The following instruction copies the users.txt file from the build's context to /home/cisco/user_abs.txt.
The destination is set with an absolute path:

COPY users.txt /home/cisco/users_abs.txt

2. The following instruction copies the users.txt file from the build's context to WORKDIR/user_rel.txt. The
destination is set with a relative path:

COPY users.txt users_rel.txt

 ENV: Creates a new environment variable or sets a value of an existing variable inside the container. There are two
possible ways of defining the environment variables, with or without the "=" sign. By using "=", multiple variables
can be set in the same instruction. Usage:

1. Without the "=" sign:

ENV APP_VERSION 6.3

2. With the "=" sign:

ENV app_name="Device Manager" app_maintainer=John\ Smith app_directory=/opt/app

The double quotes are used when the value of the variable contains spaces; another option is to use the "\" as the escape
character.

 RUN: Used to run a single or multiple commands in a shell in the container. There are two forms of writing and
running the commands:

1. The shell form runs the command in the shell with /bin/sh -c:

RUN apt update

2. Running the commands in the exec form requires the definition of the shell to be used and the following
commands as separate elements:

RUN ["/bin/bash", "-c", "echo $HOME"]

 VOLUME: Creates a mounting point for persisting data that is consumed by the Docker containers. Volumes are
managed by Docker and do not get deleted when the container stops running. Usage:

1. The following instruction creates a mount point for the directory /opt/app:

VOLUME /opt/app
 EXPOSE: Exposes a TCP or UDP port on which the application running in the container is accessible. The instruction
serves more as documentation for the person running the container, so that they publish the correct ports to the outside
network when running the container. Usage:

1. The following instruction specifies the port on which the application in the container listens:

EXPOSE 8080

Dockerfile Example

The following Dockerfile defines an image that is based on the latest release of Ubuntu. The install.sh file is copied into
the /app directory, and then a mount point for the directory is created. The image is updated, and software is
installed. The install script is run with the /bin/sh shell, and port 8080 is exposed.
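A possible reconstruction of the described Dockerfile; the script name install.sh comes from the description above, while the exact packages installed are an assumption:

# Use the latest Ubuntu release as the parent image
FROM ubuntu:latest

# Copy the installation script from the build context into /app
COPY install.sh /app/install.sh

# Create a mount point for the /app directory
VOLUME /app

# Update the image and install additional software
RUN apt update && apt install -y curl

# Run the install script with the /bin/sh shell
RUN ["/bin/sh", "/app/install.sh"]

# The application in the container listens on port 8080
EXPOSE 8080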

Interpret a Dockerfile 

Dockerfiles are used for building Docker images. A base image from a repository is used as the base, and on top of it,
additional files are uploaded, packages are installed, and system preferences are configured. You will containerize a simple Python
application with a built-in web server, together with the necessary configuration and files to run it.

Examine Provided Dockerfile

In this task, you will open and examine the prepared Dockerfile skeleton.

Step 1

From the Desktop, open Visual Studio Code.

Answer
Step 2

Open your working directory folder, which is located at /home/working_directory.

Step 3

From the Explorer, open the file named Dockerfile.

Step 4

Examine the file. The file is a skeleton Dockerfile, which needs to be filled in.

Add Code to Image

Dockerfile consists of multiple predefined instructions to manage the image and customize it to accommodate the running
application. You will add the needed instructions for the application to run.

Step 5

For the base image, use python with the tag alpine3.7 with the FROM instruction.

Answer

# Use python:alpine3.7 for the base image

FROM python:alpine3.7

Step 6

Write the instruction to copy the Python files from the ./app directory to /app directory with the COPY instruction.

Answer

# Copy the .py files in ./app from build's context to /app

COPY ./app/*.py /app/

Step 7

Write the instruction to copy the Pipfile from the ./app directory to /app directory with the COPY instruction.

Answer

# Copy the Pipfile in ./app from build's context to /app

COPY ./app/Pipfile /app/

Step 8

Change the working directory to /app with the WORKDIR instruction.

Answer

# Change the working directory to /app

WORKDIR /app

Step 9

Expose the port 8080 using the EXPOSE instruction.

Answer

# Expose port 8080

EXPOSE 8080

Install Missing Dependencies


Additional software is installed during the build of a Docker image. The RUN instruction runs a command on the system
and is usually used to install packages. You will add some tools and configure the Python environment for the application.

Step 10

Troubleshooting Domain Name System (DNS)-related issues on Linux machines running web applications is one of the
tasks of DevOps engineers. A powerful tool that might help you is the dig command, which comes in the bind-tools
package. Use the RUN instruction to install the package using the apk add bind-tools command.

Answer

# Install bind-tools

RUN apk add bind-tools

Step 11

The base image comes with Python preinstalled, but to run the application, pipenv is required. Use the RUN instruction to
install pipenv with pip install.

Answer

# Install pipenv

RUN pip install pipenv

Step 12

Run the pipenv install command to install the pip environment using the RUN instruction.

Answer

The modules required by the application are defined in the Pipfile, and the pipenv install command installs them
automatically in the environment.

# Install the environment

RUN pipenv install

Configure Container Startup

Dockerfile is the base for building the Docker images. Besides adding files and packages, it defines the process that will be
running in the container. You will set the entry point, build the Docker image, start the container, and test if the
application inside it is running as expected.

Step 13

Use the CMD instruction to start the application with the pipenv run python3.7 myapp.py command.

Answer

# Start the python application

CMD pipenv run python3.7 myapp.py
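Put together, the instructions from Steps 5 through 13 result in a Dockerfile like this:

# Use python:alpine3.7 for the base image
FROM python:alpine3.7

# Copy the .py files and the Pipfile in ./app from build's context to /app
COPY ./app/*.py /app/
COPY ./app/Pipfile /app/

# Change the working directory to /app
WORKDIR /app

# Expose port 8080
EXPOSE 8080

# Install bind-tools and pipenv, then install the environment
RUN apk add bind-tools
RUN pip install pipenv
RUN pipenv install

# Start the python application
CMD pipenv run python3.7 myapp.py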

Step 14

Save the changes you have made to the Dockerfile.

Answer

Use Control+s or File > Save.

Step 15

From the terminal on your workstation, build the image using the docker build command. Use the option -t and set it
to my-python-app:1.0. Include the trailing period at the end of the command for building the image based on the
Dockerfile located in the current directory. The building process will take a few minutes to complete.
Answer

student@student-vm:~/working_directory$ docker build -t my-python-app:1.0 .

Step 16

In the terminal, list the Docker images on your workstation using the docker images command. Identify the image you
have created.

Answer

The image python:alpine3.7 is used as the base image for the my-python-app:1.0 image.

student@student-vm:~/working_directory$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

my-python-app 1.0 6f742115fe80 29 minutes ago 135MB

python alpine3.7 00be2573e9f7 9 months ago 81.3MB

Step 17

Run a container in detached mode, bind the port 8080 to 8080, use the image my-python-app:1.0, and name it myapp.

Answer

The -d option starts the container in detached mode, -p option binds the ports (host to container), and the --name option
sets the name of the container.

student@student-vm:~/working_directory$ docker run -d -p 8080:8080 --name myapp my-python-app:1.0

4b9e96b0a8af14ad8b2b332c3b9713ca0733b61fe2a192d3484f7e559b120611

student@student-vm:~/working_directory$

Step 18

Open a browser window, and in the URL field, type localhost. For the port, use 8080.

Answer

The URL is localhost:8080. The content of the page is:

Server working!

Using Docker in a Local Developer Environment 

Containerization technology makes it possible for developers to replicate the production environment on the local
computer. As a developer, you can manage the containers via the terminal. Some basic operations consist of pulling an
image from a repository, starting, stopping, and entering a container. To access the container via the network, you can
configure the Docker network so that it fits the requirements of the application, which is being developed.

Similarly to Git repositories, where the code is stored in a repository, Docker images are stored in Docker repositories. An
image might have multiple versions, each with its own unique tag. An image and its versions are stored in a repository;
multiple repositories make a registry. Docker Hub is a registry with publicly available and private repositories of Docker
images. To transfer an image from a repository to your local Docker installation, use the docker
pull image_name:tag command. For example, to pull the latest Ubuntu image, use the following command:

cisco@workstation:~$ docker pull ubuntu:latest


Images that are created locally with Dockerfile or pulled from a repository are templates for running instances called
containers. To list the images on the host, use the docker images command. The command displays the properties of the
images available on the host, the repository, tag, image ID, when it was created, and its size.

cisco@workstation:~$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

ubuntu latest 775349758637 3 weeks ago 64.2MB

To run a container from an image, Docker uses the repository and the tag or just the image ID to uniquely identify an
image. To run a container from an image, the docker run image command is used. To run an Ubuntu container from the
image using the repository name and the tag, use the following command:

cisco@workstation:~$ docker run ubuntu:latest

To run the container using the image ID, use the following command:

cisco@workstation:~$ docker run 775349758637

Additional options can be specified with the command. For example, the previous commands to run the Ubuntu container
would exit immediately, because no process inside the container is running; Docker containers are in running state only if a
process inside them is running. To start a container in interactive mode with the shell displayed in the terminal, options -
i and -t should be added to the docker run command.

cisco@workstation:~$ docker run -i -t ubuntu:latest

root@60530976efe7:/#

To manage the Docker containers, you first need to get their name or ID. The docker ps command shows the currently
running containers: their ID, the image from which they were created, the command executed, the time since creation,
the status, the published ports (if any), and their names.

cisco@workstation:~$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

7059eef1a2ae ubuntu:latest "/bin/bash" 12 seconds ago Up 11 seconds compassionate_satoshi

The -a option, appended to the previous docker ps command, displays all existing containers on the host, including the
nonrunning containers.

cisco@workstation:~$ docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

7059eef1a2ae ubuntu:latest "/bin/bash" 13 minutes ago Up 13 minutes compassionate_satoshi

c396aa7e5cfb ubuntu:latest "/bin/bash" 15 minutes ago Exited (0) 15 minutes ago clever_hugle

To stop a running container, use the docker stop command. For specifying the container to stop, use either the name of
the container or its ID.

cisco@workstation:~$ docker stop 7059eef1a2ae

7059eef1a2ae

cisco@workstation:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

7059eef1a2ae ubuntu:latest "/bin/bash" 18 minutes ago Exited (0) 13 seconds ago compassionate_satoshi

c396aa7e5cfb ubuntu:latest "/bin/bash" 15 minutes ago Exited (0) 15 minutes ago clever_hugle

To start a container, use the docker start command and specify the container with its name or ID.

cisco@workstation:~$ docker start 7059eef1a2ae

7059eef1a2ae

cisco@workstation:~$ docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

7059eef1a2ae ubuntu:latest "/bin/bash" 20 minutes ago Up 4 seconds compassionate_satoshi

c396aa7e5cfb ubuntu:latest "/bin/bash" 22 minutes ago Exited (0) 22 minutes ago clever_hugle

When you need to remove a container, use the docker rm command and specify the container with its name or ID. Only
nonrunning containers can be deleted.

cisco@workstation:~$ docker rm 7059eef1a2ae

Error response from daemon: You cannot remove a running container 7059eef1a2ae00910a64682cf278ce9cd8aa356683d9936679ac2bbba147bf7a. Stop the container before attempting removal or force remove

cisco@workstation:~$ docker rm c396aa7e5cfb

c396aa7e5cfb

cisco@workstation:~$ docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

7059eef1a2ae ubuntu:latest "/bin/bash" 20 minutes ago Up 4 seconds compassionate_satoshi

Docker Network

For connecting a container to another container or to a machine on the host network or outside it, Docker uses network
drivers. Different types of drivers are available, the default being the bridge driver, which is installed automatically during
Docker installation. It connects the containers in the same bridge network and isolates the containers in different bridge
networks. When starting a container, if no network driver is specified, the default bridge network is used.

For example, if an application in a container runs on port 8080, and you would like to reach it via port 80 from the host,
you need to publish the port when starting the container using the -p or --publish option. The following example runs a
container and publishes a port:

cisco@workstation:~$ docker run -p 80:8080 ubuntu:latest

cisco@workstation:~$ docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

19c353471bea ubuntu:latest "/bin/bash" 7 seconds ago Up 5 seconds 0.0.0.0:80->8080/tcp romantic_joliot

From the host, the application is accessible on port 80, and the default bridge network driver is used. The PORTS column
specifies the port bindings, and in this case, all traffic coming from any IP on port 80 is directed to port 8080 of the
container.

Docker keeps track of the ports being used by the containers, and if you publish a port that is already in use, an error appears,
saying the port is already allocated to a running container.

cisco@workstation:~$ docker run -it -p 8080:81 ubuntu:latest

docker: Error response from daemon: driver failed programming external connectivity on endpoint hopeful_chandrasekhar
(45251c13f6c722e711421172790f22128850d9273849712f81974e9d9548d89b): Bind for 0.0.0.0:8080 failed: port is
already allocated

Utilize Docker Commands to Manage Local Developer Environment 

Using the terminal on your workstation, you will manage the Docker images, containers, and the Docker network. You will
pull an image from a repository, start multiple containers, and resolve a network issue by fixing the port bindings.

Check Container Repositories

In this activity, you will check the repositories and pull images from them.

Step 1

Finding a Docker image through a registry is possible on the website and via the CLI. Open the terminal and enter
the docker search alpine command to find the images of a lightweight Linux image named Alpine.

Answer

student@student-vm:~/working_directory$ docker search alpine

NAME DESCRIPTION STARS OFFICIAL AUTOMATED

alpine A minimal Docker image based on Alpine Linux… 5861 [OK]

mhart/alpine-node Minimal Node.js built on Alpine Linux 445

anapsix/alpine-java Oracle Java 8 (and 7) with GLIBC 2.28 over A… 430 [OK]

frolvlad/alpine-glibc Alpine Docker image with glibc (~12MB) 220 [OK]

<...output omitted...>

Step 2

Pull the Alpine image using the docker pull command.

Answer

The image with the latest tag is downloaded.

student@student-vm:~/working_directory$ docker pull alpine


Using default tag: latest

latest: Pulling from library/alpine

89d9c30c1d48: Pull complete

Digest: sha256:c19173c5ada610a5989151111163d28a67368362762534d8a8121ce95cf2bd5a

Status: Downloaded newer image for alpine:latest

docker.io/library/alpine:latest

Step 3

Use the docker images command to show the images on the host.

Answer

student@student-vm:~/working_directory$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

alpine latest 965ea09ff2eb 4 weeks ago 5.55MB

Run Containers

You can create the containers from images that have been pulled from a repository or created locally. In this activity, you
will start the containers from the pulled images.

Step 4

Start a container using the Alpine image and publish port 8080 inside the container to port 80 on the host. To create a
container, enter the docker run -dit -p hostPort:containerPort --name name image:tag command.

Answer

The -dit options keep the container running after it has been created, instead of exiting immediately.

student@student-vm:~/working_directory$ docker run -dit -p 80:8080 --name alpine alpine:latest

80420802802d53460540ac8623f7c4984b984ca95802d11ec72b8f681b20570c

Step 5

Use the docker exec command to enter the container. The command syntax is docker exec -it containerName shell. For
the shell, use /bin/sh.

Answer

student@student-vm:~/working_directory$ docker exec -it alpine /bin/sh

/#

/#

/ # ls

bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var

Step 6

Display the running processes in the container.

Answer

There are two shells running; one is the current shell, and the second was started when the container was created.

/ # ps -a
PID USER TIME COMMAND

1 root 0:00 /bin/sh

13 root 0:00 /bin/sh

19 root 0:00 ps -a

/#

Step 7

Exit from the shell by typing exit.

Answer

/ # exit

student@student-vm:~/working_directory$

Verify Port Bindings

When using the bridge network driver, binding the same host port to two different containers causes a binding error. In this
activity, you will troubleshoot and fix it.

Step 8

Start a container using the Ubuntu image, and publish port 8080 in the container to port 80 on the host.

Answer

The Ubuntu image does not exist yet locally. Before the container is started, the run command automatically pulls the
missing image.

student@student-vm:~/working_directory$ docker run -dit -p 80:8080 --name ubuntu ubuntu:latest

28fbdbffae149a46b27a89b4c5bf99ca71b9094488b44c5c0dcfb8e2947e4afb

docker: Error response from daemon: driver failed programming external connectivity on endpoint ubuntu
(29a3ce4100b8b8b54e3cffef57568f6e0c55e45e1226a0771ea205fff6ee68c7): Bind for 0.0.0.0:80 failed: port is already
allocated.

The error shows that a running container has the same port 80 already bound.

Step 9

Verify the existing containers with the docker ps -a command.

Answer

student@student-vm:~/working_directory$ docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

d99fb8b8d549 ubuntu:latest "/bin/bash" 39 seconds ago Created ubuntu

6703cc98c3d8 alpine:latest "/bin/sh" 18 minutes ago Up 18 minutes 0.0.0.0:80->8080/tcp alpine

Step 10

Stop the container named Alpine. Use the docker stop containerName command.

Answer

student@student-vm:~/working_directory$ docker stop alpine

alpine

Step 11
Remove the two containers. Use the docker rm containerName command and verify that there are no running containers.

Answer

student@student-vm:~/working_directory$ docker rm alpine

alpine

student@student-vm:~/working_directory$ docker rm ubuntu

ubuntu

student@student-vm:~/working_directory$ docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Step 12

There are no running containers and no possible conflicts with the port bindings. Try again to run the Ubuntu container.

Answer

student@student-vm:~/working_directory$ docker run -dit -p 80:8080 --name ubuntu ubuntu:latest

f14c655b73e2f3bcef2d15c3836891a395a5cfb157b05bb543651c6015f27257

Step 13

Verify if the Ubuntu container is up and running.

Answer

student@student-vm:~/working_directory$ docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

f14c655b73e2 ubuntu:latest "/bin/bash" 2 minutes ago Up 2 minutes 0.0.0.0:80->8080/tcp ubuntu

Application Security 

Modern applications are accessed across different networks and clouds, which makes them more vulnerable
to various threats and attackers. Application security includes procedures that help prevent data leaks from the application
itself. It is a crucial part of the application development process (testing security features) and of the protection measures
applied after the application is developed. From an IT business point of view, data leaks are a serious issue that can badly hurt
both the company and the users of the web application. It is within your power to prevent that from happening.

The security measures can be done on various TCP/IP layers:

The Internet layer:

 Using a router to prevent an attacker from viewing a computer IP address

The application layer:

 Using a web application firewall to prevent a malicious unsecure request from being accepted
Web application security relies on a couple of components. The same-origin policy, for example, is one of the basic approaches
to identification assurance. Once the content of a website (such as https://example.com) gets permission to access
resources on a web client, content from any URL with the same origin shares these permissions. Two URLs have the same
origin if the following components are the same:

 URI scheme

 Hostname

 Port number

If any of these three components are different from the trusted URL, then the permission is no longer granted.
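A small sketch of this comparison in Python, using only the standard library; the URLs are arbitrary examples (note that a browser also treats a scheme's default port as equal, which this simplified check does not):

from urllib.parse import urlsplit

def same_origin(url_a, url_b):
    a, b = urlsplit(url_a), urlsplit(url_b)
    # Same origin only if the URI scheme, hostname, and port number all match
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

print(same_origin("https://example.com/app", "https://example.com/api"))  # True
print(same_origin("https://example.com", "http://example.com"))           # False: scheme differs
print(same_origin("https://example.com", "https://example.com:8443"))     # False: port differs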

Data Handling

Application data comes in various forms, either as data in motion or as data at rest. Data in motion is data that transits
from device to device, or from location A (a web server or application) to a different location B, via a private network or the
Internet. Data at rest is data that does not move from one place to another and is stored on a computer, hard drive, flash
drive, server, and so on. While data in motion can be assumed to be less secure because of the transit, data at rest is usually
more valuable and more targeted because it consists of more sensitive information. Protection of both types of data is
important and needs to be addressed aggressively, because both can be exposed to risks.

There are many ways to secure the data in both states. One of the most used tools is encryption. Sensitive data in transit is
encrypted, sent over encrypted connections such as HTTPS and Transport Layer Security (TLS), or both, to preserve its
content. Data at rest can also be encrypted before it is stored, or the whole storage drive itself can be encrypted.
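As one illustration of encrypting data at rest, a short sketch using the third-party cryptography package; the package choice, the sample record, and the file name are assumptions:

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, store the key in a secrets manager
fernet = Fernet(key)

# Encrypt the sensitive content before writing it to disk (data at rest)
ciphertext = fernet.encrypt(b"credit card: 1234 5678 9012 3456")
with open("record.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt it again only when the application needs the plaintext
with open("record.enc", "rb") as f:
    print(fernet.decrypt(f.read()))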

It is also important that the HTTPS (with a certificate) connection is established throughout every step of the (data) route:

 User

 Browser

 Router, firewall, load balancer

 End server

Data Handling with Top OWASP Threats: XSS

The Open Web Application Security Project (OWASP) Foundation is a nonprofit charitable organization that came online in
late 2001. It is an open community that enables organizations to develop, invent, handle, and maintain applications that
are trustworthy. Everything that OWASP provides, from tools to documentation and forums, is free and open to anyone.
The organization has no commercial interest, so it can provide fair and free information about application security and its
issues.

Cross-site scripting (XSS) occurs when an attacker executes malicious scripts in the web browser of a victim. It exploits
known vulnerabilities in web applications, web application servers, and their plug-in systems. The attacker injects malicious
code, mostly JavaScript, into a legitimate web page or application. When the victim visits the compromised web page, the
script is executed. Because it comes from a trusted source, the web browser does not check the content for malicious
scripts. This way, the attacker gains access to cookies, sensitive content, and other session information handled
by the web browser on behalf of the user. The most common XSS attacks occur on web pages that allow user comments and on web
forums.

There are many types of XSS attacks, but the most common are as follows:

 Stored XSS attacks occur when the injected malicious code is stored on the targeted web application server (such
as the database). Once the request for the stored information is made, the victim unknowingly executes the
malicious script.

 Reflected XSS attacks occur when the injected malicious code is reflected in a response from the web server
that includes the input sent to the web server. The attack is delivered to the victim in a different way, usually in
an email message. Once the victim clicks the link, the request carrying the malicious code is sent to the web page,
which reflects the attack back to the victim's web browser. The browser then executes the malicious script, because the script
came from an already trusted web server.

 Other XSS attacks (such as Document Object Model [DOM]-based XSS).

The consequence of an XSS attack is the same no matter the type. The severity of the attack can range from modified
content presentation to hijacking the user's bank or other accounts.

To prevent such attacks, it is important to turn off HTTP TRACE on the web application server, deny all
untrusted data in the web page HTML document, escape HTML, JavaScript, and cascading style sheet (CSS) tags, and
sanitize any user input.

The following is an example of a stored XSS attack.

<html>
<!-- The injected script builds a URL that sends the victim's cookies to the attacker's hack.php script -->
<script type="text/javascript">var attack='../hack.php?cookie_data='+escape(document.cookie);</script>
</html>

Data Handling with Top OWASP Threats: SQL Injection

Another common web application threat is the Structured Query Language (SQL) injection attack. A SQL injection consists
of inserting a SQL query through the input data sent from the attacker to the web application. With a SQL injection, the
attacker can read or modify (insert or delete, for example) sensitive data in the database. The attacker can, in some cases,
execute operating system commands, recover file content, or issue administration operations on the database
management system. A successful SQL injection attack inserts metacharacters into the data input, which then places SQL
commands in the control plane. Note that SQL treats the data plane and the control plane almost the same.
Any platform that has an interaction with a SQL database is at risk, but websites are among the most common targets.

SQL injection attacks can be separated into three different types:

 In-band SQL injection is the most common and the easiest to exploit. It happens when the attacker can use the
same communication channel to launch the attack and collect the desired data. Error-based and union-based injections
are the two most common subtypes of in-band SQL injection.

 Inferential or blind SQL injection takes longer to exploit compared with in-band, but is just as malicious. No actual
data is sent to the attacker, and the attacker cannot see the result right away. However, the attack can change the
structure of the database. Boolean-based and time-based are among the most common subtypes of the inferential
SQL injections.

 Out-of-band SQL injection happens when the attacker cannot use the same communication channel to launch
the attack and collect the results. Because it relies on features being enabled on the database server used by the
attacked web application, it is not commonly used. This technique depends on the capacity of the database server
to issue DNS or HTTP requests to deliver the desired data to the attacker.

To prevent SQL injection attacks, you will need to:

 Sanitize any input data that you get from the user.

 Use prepared statements and not dynamic SQL.

 Specify the output data so that you do not leak any sensitive data that is not supposed to be seen.

The following is an example of a simple SQL injection:
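A hedged sketch in Python of both the vulnerable pattern and its fix with a prepared statement; the in-memory database, table contents, and input value are assumptions chosen for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (name TEXT, password TEXT)")
conn.execute("INSERT INTO Users VALUES ('alice', 'secret')")

user_input = "anything' OR 1=1 --"   # attacker-controlled value

# Vulnerable: the input is concatenated directly into the SQL statement, so the
# final query becomes ... WHERE name = 'anything' OR 1=1 --'
query = "SELECT * FROM Users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())                                   # returns every row

# Safer: a prepared statement treats the input strictly as data
print(conn.execute("SELECT * FROM Users WHERE name = ?", (user_input,)).fetchall())  # returns no rows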

Because OR 1=1 is TRUE, the SQL query will return all rows from the "Users" table. If the Users table contains names and
passwords, then this simple attack can be very harmful.

Data Handling with Top OWASP Threats: CSRF and SSRF

A cross-site request forgery (CSRF) attack occurs when an attacker forces a victim to issue undesirable actions on a web
application in which the victim is authenticated. The attack is not executed to collect data but to target state-changing requests
(such as a money transfer), because the attacker cannot see the response to the forged request. CSRF is done through social
engineering, where a victim clicks a link (inside an email, for example) and unknowingly submits a forged request. The CSRF
attack can also be stored inside an image tag, a hidden form, or a JavaScript XMLHttpRequest object, which makes it harder
for the victim to detect the attack, because it is done on a legitimate web page and not on some random page. The attack
exploits the web application's trust in the user's browser, as opposed to XSS, where the user's trust in the web application is
exploited.
Prevention of a CSRF attack can be done with antiforgery tokens. You need to introduce a unique and secret token with
every HTTP response. The antiforgery tokens are usually random values that are stored inside a cookie and also kept on the
web server. With every HTTP request, the server validates the tokens, and if the token in the cookie matches the token on
the server, the request is accepted.
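A minimal, framework-free sketch of the token comparison described above; how the token is transported in the cookie and the form is simplified, and the function names are hypothetical:

import hmac
import secrets

def issue_token():
    # Generated per session: kept on the server and sent to the client in a cookie
    return secrets.token_hex(32)

def request_is_accepted(token_from_cookie, token_on_server):
    # Constant-time comparison of the token echoed back by the client
    # against the copy kept on the server
    return hmac.compare_digest(token_from_cookie, token_on_server)

server_side = issue_token()
print(request_is_accepted(server_side, server_side))             # True: request accepted
print(request_is_accepted(secrets.token_hex(32), server_side))   # False: forged request rejected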

Server-side request forgery (SSRF) vulnerabilities allow the attacker to make the web server send a forged request on the
attacker's behalf. In this type of attack, the targets are usually internal systems behind firewalls. The web
application sometimes retrieves external information from a third-party resource (such as updates), and the
attacker can modify or control such requests. The attacker can also:

 Read internal resources and files from the vulnerable server.

 Scan local or external networks.

To prevent this kind of attack, you need to use a whitelist of allowed domains and protocols from which the server can
retrieve external resources.
Securing and Scaling Application Ingress Traffic 

Applications need to be available 24 hours a day. A successful web application should be able to handle ingress traffic
even when the number of users rises drastically and be able to support any amount of traffic. For example, if your web
page loads in a couple of seconds with 100,000 users a month, it should be able to load within the same time even with
double or triple the number of users. But just scaling the traffic is not enough; the ingress traffic needs to be secure as
well. Many businesses deal with traffic security and scaling only when it is already too late; you need to be prepared and
use the right prevention tools and methods.

One of those methods is Secure Sockets Layer (SSL) offloading, for example, with a load balancer on the edge of your
network.

SSL offloading has two different approaches:

 SSL termination

 SSL bridging

SSL termination happens when a load balancer (or proxy server) used for SSL offloading decrypts the HTTPS connection
from the user to the web application. The connection from the load balancer to the application server later goes through
HTTP. When a user connects to the web application, the connection used by the user browser is still through HTTPS (and
encrypted), and just the communication after the load balancer is changed.

SSL bridging, on the other hand, just changes the data encryption. Instead of sending the traffic requests forward via HTTP,
SSL bridging usually re-encrypts the traffic with Rivest, Shamir, and Adleman (RSA) keys.
SSL termination and SSL bridging both allow traffic analysis and help considerably with handling an immense amount of
traffic on your network. Because data encryption is a CPU-intensive task, SSL offloading can help you with scaling
the web application traffic. However, data encryption is still a desired and best-practice procedure.

It is also possible to change the public SSL/TLS certificates once they reach your local network to private certificates.

An SSL/TLS certificate joins together:

 Hostname, server name, or domain name

 Identity and the location of an organization

 A public key of the web application or page

Public-facing web pages use public SSL/TLS certificates. A public certificate is used to secure user-to-server or server-to-
server requests and communication. Because of the regular public policy updates, the public certificate needs to be
changed more frequently, while a private certificate does not.

Like a public certificate, a private SSL/TLS certificate needs to be approved by a certificate authority (CA), typically an
internal, private CA. The main difference is that a private SSL/TLS certificate can be used only for server-to-server
communication and for nonregistered private network domains.

Web Application Firewall

A device or software that can block, filter, and monitor HTTP requests to and from a web application is called a web
application firewall (WAF). The difference between a regular network firewall and a WAF is that the network firewall
operates at Layers 3 and 4, while the WAF operates at Layers 3 through 7, up to the application layer.

Network firewalls do not ensure the safety of web application traffic and do not provide application protocol coverage, SSL
traffic inspection, or other kinds of threat detection. A WAF, however, provides all of these, with a focus on web application
transactions. Attacks such as SQL injection or distributed denial of service (DDoS) can be prevented by the WAF.

WAFs usually are provisioned between the web application and the user.

Reverse Proxy Scrubbing

A reverse proxy server, sometimes called a surrogate, looks to clients like an ordinary web server. It forwards requests to a
traditional web server (or multiple servers) that then processes the requests. The server responds back to the reverse proxy
server, which handles the response and returns it to the user. The user does not know that the response came through
the proxy server. A reverse proxy server can handle application ingress traffic in many ways.

SSL encryption usually is done on the reverse proxy, with the right SSL acceleration hardware, and not by the web
application server itself. A reverse proxy server can remove the need for separate SSL certificates on each web
application server behind it and can provide SSL encryption for multiple servers at once. A reverse
proxy also can operate as a load balancer by distributing the load across different web application servers. It can take the
burden off the web application servers by caching static content, which improves the web page loading time.
A security layer is added with the reverse proxy server, protecting against attacks on the operating system and the web
server. Nonetheless, the proxy does not protect against attacks aimed at the web application itself.

Load Balancing

A load balancer distributes the workload and ingress traffic over multiple web servers. It optimizes the use of resources,
minimizes the response time, and helps avoid overload.

A load-balanced system can consist of:

 Databases

 Web pages

 FTP sites

 DNS servers

If you have a simple local network without a designated load balancer provisioned in front of your web application server, all
user requests go straight to the web server. As the number of users grows, so does the number of requests that need
to be handled by the web server. This can slow down your web page or application. In a worst-case scenario, the web
server can go down, and users will lose access to the web page itself.

The most straightforward way to balance the traffic to a server farm of multiple servers is to implement load balancing at
the fourth layer. The load balancer distributes user traffic based on IP address ranges and ports. When a user request targets
https://example.com, the load balancer redirects the request to the back-end servers (or database) of the web application
on port 80.

A different and more sophisticated approach is to implement load balancing at the seventh layer. Layer 7 load balancing
allows you to differentiate requests based on their content, letting you operate several web servers with
the same domain name and port number. When a user request targets, for example, https://example.com/shop, the load
balancer redirects the request to the group of back-end servers that run the shop back end. Requests for
videos on the web server, for example, are redirected to a different back end, which can run another application even when both back
ends use the same database.

A load balancer can use different approaches or algorithms, as sketched in the example after this list:

 Round Robin is the default algorithm, which selects servers in turns.

 Least Connections selects a server with the least number of connections.

 Source algorithm computes a hash of the user's IP address, making sure that the same user connects to the same server.

 Sticky sessions ensure that the same user connects to the same web server when the web application requires it.
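A minimal sketch of these selection strategies in Python, assuming a small pool of back-end servers (the addresses and connection counts are made up for the example):

import hashlib
from itertools import cycle

servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

# Round Robin: hand out servers in turns.
round_robin = cycle(servers)
next_server = next(round_robin)

# Least Connections: pick the server with the fewest active connections
# (the connection counts here are invented for the example).
active_connections = {"10.0.0.11": 42, "10.0.0.12": 17, "10.0.0.13": 30}
least_loaded = min(active_connections, key=active_connections.get)

# Source hash: hash the client IP so the same user lands on the same server,
# which is also the idea behind sticky sessions.
def pick_by_source(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(next_server, least_loaded, pick_by_source("203.0.113.7"))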

The advantage of sticky sessions is that the user session does not move from one server to another. This can lead to more
efficient performance because only one server creates the session object, and there is no need for a common data layer
to synchronize sessions across multiple servers if the application requires sessions. However, nonsticky load balancing might be
preferable in most cases because it makes it harder to overload a single machine. If a node is lost, the sticky sessions on it are
lost as well, which can result in inconsistent behavior for the user.
A health check also can be done by the load balancer, which prevents a single point of failure. If a server does not respond
(for example, it becomes unhealthy), it is disabled, and traffic is not redirected to the unhealthy server
until it becomes responsive again. This is also known as a high-availability infrastructure.

Securing and Scaling DNS

The DNS is the second-most attacked protocol after HTTP, which is why securing your DNS servers is very important.

The DNS is an essential component of the networks that your devices connect to. It is a store that holds
mappings between names and numbers, similar to a phone book. In a typical user network, domain names are resolved
to IP addresses using a public DNS. You also may use a private DNS to resolve names that you do not intend to expose to
the Internet. DNS queries and responses can be a rich source of security-related information regarding activity on your
network. DNS records provide a wealth of information about the infrastructure of an organization, to both legitimate users
and potential attackers. Making sure that the DNS infrastructure is resilient is critical for the security and operation of
internal and external network applications.

Publicly available DNS servers need to be trustworthy and should not operate recursively. Attackers use DNS
recursion to gain knowledge about your internal network. If your domain names have to be resolved by the public, then
only the dedicated public-facing DNS servers should perform those resolutions. All other DNS servers need to be secured and
used only for your internal network.

All DNS servers need to be part of a high-availability cluster. If one DNS server goes down, the others will accept
the load. This can also be achieved with high-availability pairs of servers.

Primary name servers should only serve information to secondary DNS servers inside the organization, which is why they need
to be hidden from end users and not used for queries. They should be accessible only to the organization's IT maintenance staff
to protect the integrity of the DNS information. Even when public DNS servers are available, the DNS servers in the internal
network must be behind a network firewall.

A web application can reference tens or hundreds of external (linked) resources, and each resource may need a DNS lookup for
the web application to function properly. If the DNS servers are not provisioned properly, this can slow down the application.
That is why it is important for organizations to have on-site DNS servers at every branch and remote site instead of just at
headquarters.

DNS servers that provide authoritative information should not also be used as recursive servers. Zone transfers
between DNS servers need to be secured with access control lists (ACLs), which helps prevent abuse such as DDoS attacks.
Internal secondary DNS servers must refuse all zone transfer requests.
To provide secure, trustworthy DNS queries, Domain Name System Security Extensions (DNSSEC) need to be deployed. With
DNSSEC, DNS information is digitally signed, which ensures that end users connect to the legitimate web page or service
for a given domain name. This process is done via a public key infrastructure (PKI). A chain of trust is created from the
root server's digital certificate at the head of the DNS tree down to the name servers at the bottom end nodes.
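As a hedged illustration, assuming the third-party dnspython package is installed, you can ask the configured resolver for DNSSEC-protected data and check whether the response carries the Authenticated Data (AD) flag:

import dns.flags
import dns.resolver

# Query an A record and set the DNSSEC OK (DO) bit via EDNS0.
resolver = dns.resolver.Resolver()
resolver.use_edns(0, dns.flags.DO, 1232)

answer = resolver.resolve("example.com", "A")
# A validating resolver sets the AD flag on answers it has verified with DNSSEC.
print("DNSSEC validated:", bool(answer.response.flags & dns.flags.AD))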

Exploit Insufficient Parameter Sanitization 

You will inspect Python program code and call an API endpoint with a filename; the endpoint reads the file and returns its
contents. Next, you will verify the insecure operation by sending a request that reads sensitive information
from the server. Finally, you will sanitize the user input string by implementing input validation.

Inspect Program Code

You will install the Flask module inside the Python virtual environment. Then, you will inspect the Python program
code and the function that receives GET requests.

Step 1

From the desktop, open a terminal, go to the working_directory, and install Flask using pipenv.

Answer

student@student-workstation:~$ cd ~/working_directory/

student@student-workstation:~/working_directory$ pipenv install flask

Installing flask…

Adding flask to Pipfile's [packages]…

✔ Installation Succeeded

Pipfile.lock (35806a) out of date, updating to (e0f86e)…

Locking [dev-packages] dependencies…

Locking [packages] dependencies…

✔ Success!

Step 2

Open Visual Studio Code and open the folder /home/working_directory. Open the flask_app.py file and review the code it
contains.

Answer
from flask import Flask, request

app = Flask(__name__)

def cat(filename):
    with open(filename) as file:
        data = file.read()
    return data

@app.route('/get_file', methods=['GET'])
def get_file():
    filename = request.args['filename']
    return '''Content of the file {} is...\n\n {}'''.format(filename, cat(filename))

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=int("8080"), debug=True)

Note

Flask is a Python micro web framework. Micro framework means that it does not need additional libraries or tools. The full
documentation for the Flask web framework is available at https://flask.palletsprojects.com/en/1.1.x/.

Step 3

Run flask_app.py inside pipenv with the python command.

Answer

student@student-workstation:~/working_directory$ pipenv run python flask_app.py

* Serving Flask app "flask_app" (lazy loading)

* Environment: production

WARNING: This is a development server. Do not use it in a production deployment.

Use a production WSGI server instead.

* Debug mode: on

* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)

* Restarting with stat

* Debugger is active!

* Debugger PIN: 124-189-315

Note

As you can see, the Flask app is running on the localhost address 127.0.0.1 port 8080.

Step 4

Take a closer look at the get_file function to see how the Flask app handles the get_file request.
Answer

@app.route('/get_file', methods=['GET'])
def get_file():
    filename = request.args['filename']
    return '''Content of the file {} is...\n\n {}'''.format(filename, cat(filename))

Note

As you can see, the get_file function is triggered when a request is received with the API resource /get_file and takes the
value of the filename key as an argument.

Step 5

Now, open Postman and send a GET request to the API resource /get_file with welcome.txt as the value of
the filename key to get the contents of the desired file.

Answer
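The answer in the lab is shown as a Postman screenshot. As an equivalent illustration, assuming the requests library is installed, the same GET request could be sent from Python:

import requests

# Ask the local Flask app for the contents of welcome.txt.
response = requests.get(
    "http://127.0.0.1:8080/get_file",
    params={"filename": "welcome.txt"},
)
print(response.status_code)
print(response.text)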

Step 6

Take a closer look at the response.

Answer

Note

The welcome.txt file does not hold any sensitive information about the server, but other files on the web application
server probably do.

Verify Insecure Operation

You will inspect the code a bit further and try to find a possible vulnerability. Then, you will send a request to the Flask app
to get hold of a file that contains plaintext passwords.

Step 7

Open Visual Studio Code and open the folder /home/working_directory. Open the flask_app.py file, and take a closer look
at the get_file function to try to find out if it uses another function to read a file.

Answer

@app.route('/get_file', methods=['GET'])
def get_file():
    filename = request.args['filename']
    return '''<h1>Content of the file {} is... \n\n {}'''.format(filename, cat(filename))

Note

As you can see, the get_file function uses another function named "cat" to read files.

Step 8

Next, look at the cat function and try to find the command-line vulnerability.

Answer
def cat(filename):
    with open(filename) as file:
        data = file.read()
    return data

Note

As you can see, the filename (user input) is never checked. This presents a serious flaw in the security of the code and
the Flask app itself.

Step 9

The password.txt file is in the directory above the directory where the Flask app searches for files. This file contains
sensitive data and should not be accessible to end users through the API. Open Postman and send a GET request to the API
resource /get_file with passwords.txt as the value of the filename key to get the contents of the desired file.

Answer
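The answer is again shown as a Postman screenshot. As an illustration, assuming the requests library is installed, the request could look like the following; the exact filename value used in the lab is the one shown in the screenshot:

import requests

# Attempt to read the sensitive file through the unvalidated filename parameter.
response = requests.get(
    "http://127.0.0.1:8080/get_file",
    params={"filename": "passwords.txt"},  # value as instructed in this step
)
print(response.status_code)
print(response.text)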

Step 10

Take a closer look at the response.

Answer

Note

The input is never checked or sanitized. An experienced attacker can turn this to their advantage and read a lot of sensitive
information.

Implement Input Validation

In this procedure, you will update the Python script and sanitize the end user input.

Step 11

Open Visual Studio Code and open the folder /home/working_directory. Open the flask_app.py file again.

Answer

from flask import Flask, request

app = Flask(__name__)

def cat(filename):
    with open(filename) as file:
        data = file.read()
    return data

@app.route('/get_file', methods=['GET'])
def get_file():
    filename = request.args['filename']
    return '''Content of the file {} is...\n\n {}'''.format(filename, cat(filename))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int("8080"), debug=True)

Step 12

First, import the regular expressions module re at the top of the Python script.

Answer

import re

from flask import Flask, request

Step 13

Write a function named sanitize_string above the cat function. The sanitize_string function will take the filename as an
input.

Answer

def sanitize_string(filename):

Step 14

Next, write an if check with the help of the imported re module and its search function. The search function takes a regular
expression (regex) pattern and a string as input. You need to write a regex that checks for accepted characters within
the user input. The accepted characters, anchored from the beginning to the end of the string, are letters, digits, the
underscore, the hyphen (-), and the dot (.). Any other character represents a potential command injection attack and should
not be used in filenames.

Answer

def sanitize_string(filename):
    if re.search('^[\w\-\.]+$', filename):

Note

The full documentation for the re module and search function is available at: https://docs.python.org/3/library/re.html#.
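For illustration, checking the pattern against a few sample filenames (hypothetical values) shows which inputs are accepted:

import re

pattern = '^[\w\-\.]+$'
print(bool(re.search(pattern, 'welcome.txt')))         # True: letters and a dot only
print(bool(re.search(pattern, 'report-2020.txt')))     # True: the hyphen is allowed
print(bool(re.search(pattern, '../passwords.txt')))    # False: the slash is rejected
print(bool(re.search(pattern, 'a; cat /etc/passwd')))  # False: ';', space, and '/' are rejected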

Step 15

Only if the user-supplied filename matches the accepted regex can it be passed on to the cat function. Write
a pass statement inside your if check.

Answer

def sanitize_string(filename):
    if re.search('^[\w\-\.]+$', filename):
        pass

Step 16

Now, you will need to make sure that if the accepted regex is not matched, the sanitize_string function raises a ValueError
with 'Can not use special characters' as the error message.

Answer

def sanitize_string(filename):
    if re.search('^[\w\-\.]+$', filename):
        pass
    else:
        raise ValueError('Can not use special characters')

Step 17

Because the cat function reads the file, you will need to make sure that the user input is checked before the file is read.
Call the input validation function inside the cat function, before the file is actually read.

Answer

def cat(filename):
    sanitize_string(filename)
    with open(filename) as file:
        data = file.read()
    return data

Step 18

Save the file and open Postman. First, test that the GET request indeed accepts the desired characters with the
welcome.txt file.

Answer

Step 19

Finally, test the updated Flask app with the malicious GET request, and once again, try to obtain the password.txt file,
which is in a folder above the folder where the Flask app searches for files.

Answer

Step 20

You can see the response codes and errors in the terminal where the Flask app is running.

Answer

File "/home/student/working_directory/flask_app_complete.py", line 13, in cat

sanitize_string(filename)

File "/home/student/working_directory/flask_app_complete.py", line 10, in sanitize_string

raise ValueError('Can not use special characters')

ValueError: Can not use special characters


Network Simulation and Test Tools 

The challenge with modern-day data centers lies in their complexity, density, and multivendor solutions spanning
multiple different technologies. The solution is to programmatically manage a data center full of devices, using a scripting
language like PowerShell or a full programming language such as Python. These options bring their own challenges of
complex features, syntactical rules, and the learning curve that comes with both. For that reason, another piece of
the DevOps puzzle was developed: Ansible. Ansible is open source provisioning software that allows for centralized
configuration management.

When creating a test DevOps environment, it is best to re-create your production environment, or at least include as many
of your real-world devices as possible in a test network. Using all physical devices will be prohibitive for many reasons—
cost, space, convenience, power, and noise being just a few of them. For that reason, instantiating as many of your
production devices as virtual machines is an ideal way to re-create your production environment in a test lab. Today, that
process is easy, because so many of your physical devices also can run as virtual machines.

Here is a short list of Cisco network devices in a virtual machine form factor:

 Cisco NX-OSv: The NX-OS operating system found on your Cisco Nexus switches, running as a virtual machine.

 Cisco UCS-PE: Cisco Unified Computing System (UCS) Manager Platform Emulator. A virtual machine emulating
Cisco UCS Manager running in Fabric Interconnects. A download of Cisco UCS-PE is available from the Cisco
Community pages at https://community.cisco.com.

 Cisco ASAv: The Cisco ASA adaptive security appliance firewall as a virtual machine; it supports the programmability
feature natively and only needs the API services enabled.

 Cisco CSR1Kv: Cisco Cloud Services Router, also known as the CSR 1000v, running as a virtual machine, extends an
enterprise into the public cloud.

 Cisco ACI Simulator: The Cisco Application Centric Infrastructure Simulator runs the real, fully featured Cisco Application
Policy Infrastructure Controller (APIC) software along with a simulated fabric, running as a virtual machine.

Note

Using emulated and virtual form factors, it is fully possible to create this topology in a portable and virtual lab
environment.

Network Simulation and Test Services

Some administrators may not have the time, the know-how, or the desire to create their own test environment. For that
reason, there are several cloud-based services available that provide a ready-made lab environment with flexible options
that are quick and easy to use.
Network simulation tools are as follows:

 Cisco VIRL: Cisco VIRL offers fast and easy-to-deploy network modeling and environment simulations, including
connections between the simulation and a physical environment. VIRL enables administrators to build highly accurate models of
existing networks using authentic versions of Cisco network operating systems for Layer 2 and Layer 3 devices,
including Cisco NX-OSv, CSR 1000v, ASAv, and IOS XRv solutions. Cisco VIRL is available at http://virl.cisco.com/.

 Cisco pyATS: Cisco pyATS is a Python framework for creating automated tests and validations; a minimal test sketch
follows this list. Everything from device to network or even web GUI features can be tested. It enables developers to
construct small test cases that can later scale with the infrastructure. Cisco pyATS is available at https://developer.cisco.com/pyats/.

 GNS3: GNS3 is one of the oldest network simulators around. It can run Cisco IOS images, QEMU/Kernel-based Virtual
Machine (KVM) and VirtualBox virtual machines, Docker containers, and more. GNS3 is available at https://www.gns3.com/.
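As a minimal, hedged sketch of a pyATS test script (assuming the pyats package is installed; the test content is illustrative only and does not connect to any device):

from pyats import aetest

class VerifyBasics(aetest.Testcase):
    """Illustrative test case; real tests would use a testbed of devices."""

    @aetest.test
    def check_arithmetic(self):
        # A trivial assertion standing in for a real device or API check.
        assert 1 + 1 == 2

if __name__ == "__main__":
    aetest.main()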

Cisco DevNet, available at https://developer.cisco.com, is the premier home for learning how to use developer tools and
languages to automate network changes programmatically. DevNet resources include learning videos, sample code, and
sandboxes for testing your new skills or planned changes to your production network.

To access the DevNet Sandbox, log in with your Cisco DevNet account and click Sandbox under the list of options that
DevNet has to offer. From there, you will be able to explore the list of available technologies.
Cisco dCloud offers a large and growing catalog of demos, training, and sandboxes for a wide range of Cisco architectures.
The environment is fully scripted and customizable.

Once you are logged in to Cisco dCloud, navigate to find content and pick your area of interest from the catalog. Once you
have selected the actual lab to work on, click Schedule.
