Cloud Computing Students Handbook - v1.3 - 115444
Cloud Computing
It can be supported by a cloud provider that sets up a platform that includes the OS,
Apache, a MySQL™ database, Perl, Python, and PHP with the ability to scale
automatically in response to changing workloads.
Cloud computing can be the ability to use applications on the Internet that store and
protect data while providing a service — anything from email and sales force automation
to tax preparation. It can be using a storage cloud to hold application, business, and
personal data. And it can be the ability to use a handful of Web services to integrate
photos, maps, and GPS information to create a mashup in customer Web browsers.
1.1.2 Definition
Cloud computing is the delivery of computing services—including servers, storage,
databases, networking, software, analytics, and intelligence—over the Internet (“the
cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically
pay only for cloud services you use, helping lower your operating costs, run your
infrastructure more efficiently and scale as your business needs change.
In brief, the cloud is essentially a bunch of commodity computers networked together in
the same or different geographical locations, operating together to serve a number of
customers with different needs and workloads on an on-demand basis with the help of
virtualization.
These utility services are generally described as XaaS (X as a Service), where X can be
Software, Platform, Infrastructure, etc. Cloud users consume these services provided by
the cloud providers, build their applications on the Internet, and deliver them to their end
users. So, the cloud users don't have to worry about installing and maintaining the
hardware and software needed. They can also afford these services, as they pay only for
as much as they use. So, cloud users can reduce their IT expenditure and effort by using
cloud services instead of establishing IT infrastructure themselves.
The cloud is essentially provided by large distributed data centers. These data centers
are often organized as a grid, and the cloud is built on top of the grid services. Cloud
users are provided with virtual images of the physical machines in the data centers. This
virtualization is one of the key concepts of cloud computing, as it builds the abstraction
over the physical system. Many cloud applications are gaining popularity day by day for
their availability, reliability, scalability, and utility model. These applications make
distributed computing easy, as the critical aspects are handled by the cloud provider
itself.
Why is cloud computing a buzzword today? In other words, what benefits do providers
and users get from the cloud? Though the idea dates back to the 1990s, what
circumstances have made it indispensable today? How is a cloud built? What
differentiates it from similar terms like grid computing and utility computing? What
different services are provided by cloud providers? And though cloud computing
nowadays is discussed mostly for business enterprises rather than non-profit
organizations, how can this new paradigm be used in services like e-governance and in
social development issues of rural India?
Virtualization is provided on top of these physical machines, and the resulting virtual
machines are provided to the cloud users. Different cloud providers offer cloud services
at different levels of abstraction. So, the cloud services are divided into types such as
Software as a Service, Platform as a Service, and Infrastructure as a Service. These
services are available over the Internet across the whole world, where the cloud acts as
the single point of access for serving all customers. Cloud computing architecture
addresses the difficulties of large-scale data processing.
2. Public Cloud – In this type, an organization rents cloud services from a cloud provider
on an on-demand basis. Services are provided to the users using the utility computing
model.
3. Hybrid Cloud – This type of cloud is composed of multiple internal or external clouds.
This is the scenario when an organization moves to the public cloud computing domain
from its internal private cloud.
1. SaaS (Software as a service) – Delivers a single application through the web browser
to thousands of customers using a multitenant architecture. On the customer side, it
means no upfront investment in servers or software licensing; on the provider side,
with just one application to maintain, cost is low compared to conventional hosting.
Under SaaS, the software publisher (seller) runs and maintains all necessary
hardware and software. The customer of SaaS accesses the applications through the
Internet.
For example, Salesforce.com, with yearly revenues of over $300M, offers on-demand
Customer Relationship Management software solutions. This application runs on
Salesforce.com's own infrastructure and is delivered directly to the users over the
Internet.
Salesforce does not sell perpetual licenses; instead it charges a monthly subscription fee
starting at $65/user/month. Google Docs is also a very nice example of SaaS, where
users can create, edit, delete, and share their documents, spreadsheets, or
presentations, whereas Google has the responsibility to maintain the software and
hardware. E.g. – Google Apps, Zoho Office
2. PaaS (Platform as a Service) – Delivers a platform (operating system, runtime,
databases, web servers) on which cloud users develop and run their own applications
without managing the underlying infrastructure. E.g. – Google App Engine, Azure App
Service.
3. IaaS (Infrastructure as a Service) – IaaS provides the users of the cloud greater
flexibility at a lower level than the other services. It even gives developers control of CPU
allocation along with OS-level control. E.g. – Azure VMs and Azure Blob storage.
Cloud Services
(a) Most of the data centers today are under-utilized; they are mostly only about 15%
utilized. These data centers need spare capacity just to cope with the huge spikes that
sometimes occur in server usage. Large companies that own such data centres can
easily rent that computing power to other organizations, profit from it, and also make
proper use of the resources (like power) needed to run the data centre.
(b) Companies with large data centers have already deployed the resources, so to
provide cloud services they would need very little additional investment, and the cost
would be incremental.
(a) Cloud users need not take care of the hardware and software they use, and they
also don't have to worry about maintenance. The users are no longer tied to a
traditional system.
(b) Virtualization technology gives users the illusion that they have all the resources
available.
(c) Cloud users can use the resources on an on-demand basis and pay for only as
much as they use. So, users can plan to reduce their usage to minimize their
expenditure.
(d) Scalability is one of the major advantages for cloud users. Scalability is provided
dynamically: users get as many resources as they need. Thus, this model perfectly fits
the management of rare spikes in demand.
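Points (c) and (d) can be made concrete with a small sketch. The hourly price and the per-instance capacity below are hypothetical figures chosen only for illustration; the point is that a pay-as-you-go bill tracks actual usage, while fixed provisioning must pay for the rare peak around the clock:

```python
import math

RATE_PER_INSTANCE_HOUR = 0.10   # hypothetical price, for illustration only
INSTANCE_CAPACITY = 100         # requests/sec one instance serves (assumed)

def instances_needed(load):
    """Instances required to serve `load` requests/sec."""
    return max(1, math.ceil(load / INSTANCE_CAPACITY))

def pay_as_you_go_bill(hourly_loads):
    """Bill when capacity tracks demand hour by hour."""
    return sum(instances_needed(l) for l in hourly_loads) * RATE_PER_INSTANCE_HOUR

def fixed_provisioning_bill(hourly_loads):
    """Bill when capacity is provisioned for the peak the whole time."""
    peak = max(instances_needed(l) for l in hourly_loads)
    return peak * len(hourly_loads) * RATE_PER_INSTANCE_HOUR

# A day with one rare spike in demand:
loads = [50] * 23 + [1000]
print(round(pay_as_you_go_bill(loads), 2))       # bill for 33 instance-hours
print(round(fixed_provisioning_bill(loads), 2))  # 10 peak instances for all 24 hours
```

The gap between the two bills grows with how rare and how tall the spikes are, which is exactly the case the text describes.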
Another category of virtual machine, called a process virtual machine, acts as an
abstract layer between the operating system and applications. Virtualization can very
roughly be described as software that translates the hardware instructions generated by
conventional software into a format understandable by the physical hardware.
Virtualization also includes the mapping of virtual resources, like registers and memory,
to real hardware resources. The underlying platform in virtualization is generally referred
to as the host, and the software that runs in the VM environment is called the guest. The
figure shows the very basics of virtualization.
Here the virtualization layer covers the physical hardware. The operating system
accesses the physical hardware through the virtualization layer. Applications can issue
instructions using the OS interface as well as directly using the virtualization layer
interface. This design enables users to run applications that are not compatible with the
operating system.
Virtualization enables the migration of a virtual image from one physical machine to
another. This feature is useful for the cloud, as data locality makes many optimizations
possible, and it also helps in taking backups at different locations. It further enables the
provider to shut down some of the data center's physical machines to reduce power
consumption.
Products like Microsoft Azure, Google App Engine, and Amazon EC2 are capturing the
market with their ease of use, availability, and utility computing model. Users don't have
to worry about the intricacies of distributed programming, as these are taken care of by
the cloud providers. They can devote more time to their own domain work rather than to
these administrative tasks.
Business organizations are also showing increasing interest in using cloud services.
There are many open research issues in this domain, like security aspects in the cloud,
virtual machine migration, dealing with large data for analysis purposes, etc. In
developing countries like India, cloud computing can be applied in e-governance and
rural development with great success, although, as we have seen, some crucial issues
must be solved to successfully deploy cloud computing for these social purposes.
The cloud services market was valued at $264.80 billion in 2019 and is projected to
reach $927.51 billion by 2027, growing at a CAGR of 16.4% from 2020 to 2027. Cloud
computing refers to the model or network where programs or applications run such that
they can be accessed by many devices or servers at a time.
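The growth figures above can be sanity-checked with the compound annual growth rate formula, CAGR = (end/start)^(1/years) − 1. Computing it from the 2019 base over the eight years to 2027 gives a rate close to, though not exactly, the quoted 16.4%; the small difference is expected, since the report states its CAGR for the 2020 to 2027 window, from a base year not quoted in the text:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# Market size figures quoted in the text, in billions of dollars:
rate = cagr(264.80, 927.51, 2027 - 2019)
print(f"{rate:.1%}")  # roughly 17%, close to the quoted 16.4% CAGR
```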
Cloud computing technology is a shift in the tradition of computing, which has given
newer and faster methods to provide computing solutions and infrastructure solutions.
This report also gives a comparative analysis of cloud computing technology against
conventional technology and describes how cloud computing scores an upper hand over
it. Cloud computing technology comprises both the hardware and the software through
which the services are delivered. This report covers only the services category and
excludes the hardware.
Hence, the report also focuses on the cloud services market opportunities. There lies
great potential in the cloud computing services market due to several benefits such as
broad network access, on-demand service, pay-as-you-go pricing, resource pooling,
business agility, rapid elasticity, cost cutting, and others. The global adoption of cloud
computing services in various sectors, such as medical & healthcare, banking, financial
services & insurance, and the educational sector, with the help of various deployment
models, determines the scope of further growth in the global cloud computing services
market.
Follow the instructions given below to create an Azure free account and log in to the
Azure Portal.
Azure Dashboard
• Fault Tolerance- Ability to withstand a certain amount of failure and still remain
functional.
• Scalability- Ability to easily grow in size, capacity, and/or scope when required.
Growth is usually based on demand.
• Elasticity- Ability to grow or scale when required and reduce in size when
resources are no longer needed.
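The scalability and elasticity properties above can be sketched as a toy autoscaler: it grows the instance count when demand rises and shrinks it when resources are no longer needed. The per-instance capacity is an assumed figure for illustration, and real autoscalers add cooldowns and smoothing that this sketch omits:

```python
import math

CAPACITY_PER_INSTANCE = 500  # requests/sec each instance handles (assumed)

def scale(current_instances, observed_load):
    """Return the new instance count: grow on demand, shrink when idle."""
    needed = max(1, math.ceil(observed_load / CAPACITY_PER_INSTANCE))
    if needed > current_instances:
        return needed                 # scale out to meet demand
    if needed < current_instances:
        return current_instances - 1  # scale in gradually
    return current_instances

n = 2
for load in [800, 3000, 3000, 400, 400]:
    n = scale(n, load)
    print(load, "->", n, "instances")
```

Scaling out jumps straight to the needed size (elastic growth), while scaling in steps down one instance at a time, a common precaution against load that rebounds immediately.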
Cloud Advantages
Scalability
Get the specific amount of power you need, when you need it, enabling you
to increase and decrease levels to suit your business's demands.
Cost savings
Pay only for the resources you actually use, avoiding upfront investment in
hardware and software licensing.
Resilience
Protecting your business against any potential IT failures that could cause
downtime or disruption, fully backed up to provide a complete disaster
recovery solution.
Business Focus
When you rely on the cloud, you can apply your time and money towards
your business priorities, rather than worrying about your IT infrastructure.
Virtualization makes it possible to create multiple virtual machines, each with their own
operating system (OS) and applications, on a single physical machine. A VM cannot
interact directly with a physical computer. Instead, it needs a lightweight software layer
called a hypervisor to coordinate between it and the underlying physical hardware. The
hypervisor allocates physical computing resources—such as processors, memory, and
storage—to each VM. It keeps each VM separate from others so they don’t interfere with
each other.
When a hypervisor is used on a physical computer or server (also known as a bare
metal server), it allows the physical computer to separate its operating system and
applications from its hardware. It can then divide itself into several independent "virtual
machines." Each of these new virtual machines can run its own operating system and
applications independently while still sharing the original resources of the bare metal
server, which the hypervisor manages. Those resources include processors, memory
(RAM), storage, etc. The hypervisor acts like a traffic cop of sorts, directing and
allocating the bare metal's resources to each of the various new virtual machines,
ensuring they don't disrupt each other.
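The hypervisor's bookkeeping role described above can be sketched as a toy resource allocator: each VM gets a dedicated slice of the host's CPUs and memory, and a request that would oversubscribe the host is refused. This is a deliberate simplification; real hypervisors also support overcommitting resources:

```python
class ToyHypervisor:
    """Minimal sketch of hypervisor-style resource allocation."""

    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.vms = {}

    def create_vm(self, name, cpus, memory_gb):
        """Carve a dedicated slice out of the host; refuse if oversubscribed."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError(f"not enough free resources for {name}")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.vms[name] = {"cpus": cpus, "memory_gb": memory_gb}

# A 16-core, 64 GB bare metal host shared by independent VMs:
host = ToyHypervisor(cpus=16, memory_gb=64)
host.create_vm("web", cpus=4, memory_gb=8)
host.create_vm("db", cpus=8, memory_gb=32)
print(host.free_cpus, host.free_memory_gb)  # 4 24
```

Because every VM only ever sees its own slice of the bookkeeping, the VMs cannot interfere with each other's allocations, which is the isolation property the text describes.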
Type 1 hypervisors run directly on the physical hardware (usually a server), taking the
place of the OS. Typically, you use a separate software product to create and manipulate
VMs on the hypervisor. Some management tools, like VMware’s vSphere, let you select
a guest OS to install in the VM.
You can use one VM as a template for others, duplicating it to create new ones.
Depending on your needs, you might create multiple VM templates for different purposes,
such as software testing, production databases, and development environments.
Type 2 hypervisors run as an application within a host OS and usually target single-user
desktop or notebook platforms. With a Type 2 hypervisor, you manually create a VM and
then install a guest OS in it. You can use the hypervisor to allocate physical resources to
your VM, manually setting the number of processor cores and the amount of memory it
can use.
Resource utilization and improved ROI: Because multiple VMs run on a single
physical computer, customers don’t have to buy a new server every time they want
to run another OS, and they can get more return from each piece of hardware they
already own.
Scale: With cloud computing, it’s easy to deploy multiple copies of the same virtual
machine to better serve increases in load.
Portability: VMs can be relocated as needed among the physical computers in a
network. This makes it possible to allocate workloads to servers that have spare
computing power. VMs can even move between on-premises and cloud
environments, making them useful for hybrid cloud scenarios in which you share
computing resources between your data center and a cloud service provider.
Flexibility: Creating a VM is faster and easier than installing an OS on a physical
server because you can clone a VM with the OS already installed. Developers and
software testers can create new environments on demand to handle new tasks as
they arise.
Security: VMs improve security in several ways when compared to operating
systems running directly on hardware. A VM is a file that can be scanned for
malicious software by an external program. You can create an entire snapshot of
the VM at any point in time and then restore it to that state if it becomes infected
with malware, effectively taking the VM back in time. The fast, easy creation of
VMs also makes it possible to completely delete a compromised VM and then
recreate it quickly, hastening recovery from malware infections.
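The snapshot-and-restore workflow described above can be modelled in a few lines: if the VM's state is treated as plain data, a snapshot is a deep copy taken at a point in time, and restoring replaces the state wholesale. This is a toy model of what hypervisors do with disk and memory images, using hypothetical file names:

```python
import copy

class ToyVM:
    """A VM modelled as plain state, to illustrate snapshot/restore."""

    def __init__(self):
        self.state = {"files": ["app.bin"], "infected": False}
        self._snapshots = []

    def snapshot(self):
        """Capture the full state at a point in time."""
        self._snapshots.append(copy.deepcopy(self.state))

    def restore_last(self):
        """Roll the VM back to the most recent snapshot."""
        self.state = self._snapshots.pop()

vm = ToyVM()
vm.snapshot()                           # clean checkpoint
vm.state["files"].append("malware.bin") # infection happens...
vm.state["infected"] = True
vm.restore_last()                       # ...take the VM "back in time"
print(vm.state)  # {'files': ['app.bin'], 'infected': False}
```

The deep copy matters: a shallow copy would share the file list between the snapshot and the live state, so the infection would survive the rollback.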
VMs have several uses, both for enterprise IT administrators and users. Here are a few
options:
Cloud computing: For the last 10+ years, VMs have been the fundamental unit
of compute in the cloud, enabling dozens of different types of applications and
workloads to run and scale successfully.
Support DevOps: VMs are a great way to support enterprise developers, who can
configure VM templates with the settings for their software development and
testing processes. They can create VMs for specific tasks such as static software
tests, including these steps in an automated development workflow. This all helps
streamline the DevOps toolchain.
System and network administrators use this protocol the most, as well as anyone who
needs to manage a computer remotely in a highly secure manner.
How Does SSH Work?
In order to establish an SSH connection, you need two components: a client and the
corresponding server-side component. An SSH client is an application you install on the
computer which you will use to connect to another computer or a server. The client uses
the provided remote host information to initiate the connection and if the credentials are
verified, establishes the encrypted connection.
On the server’s side, there is a component called an SSH daemon that is constantly
listening to a specific TCP/IP port for possible client connection requests. Once a client
initiates a connection, the SSH daemon will respond with the software and the protocol
versions it supports and the two will exchange their identification data. If the provided
credentials are correct, SSH creates a new session for the appropriate environment.
The default SSH protocol version for SSH server and SSH client communication is
version 2.
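During the version exchange mentioned above, each side sends an identification string of the form `SSH-protoversion-softwareversion`, as defined by the SSH transport protocol (RFC 4253). A small parser makes the structure visible; the sample banner is a typical OpenSSH one, used here only as an example:

```python
def parse_ssh_banner(banner):
    """Split an SSH identification string into protocol and software parts."""
    prefix, proto, software = banner.strip().split("-", 2)
    if prefix != "SSH":
        raise ValueError("not an SSH identification string")
    return proto, software

proto, software = parse_ssh_banner("SSH-2.0-OpenSSH_8.9p1")
print(proto)     # 2.0  (protocol version 2, the default today)
print(software)  # OpenSSH_8.9p1
```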
How to Enable an SSH Connection
Since creating an SSH connection requires both a client and a server component, you
need to make sure they are installed on the local and the remote machine, respectively.
An open source SSH tool—widely used for Linux distributions— is OpenSSH. Installing
OpenSSH is relatively easy. It requires access to the terminal on the server and the
computer that you use for connecting. Note that Ubuntu does not have SSH server
installed by default.
How to Install an OpenSSH Client
Before you proceed with installing an SSH client, make sure it is not already installed.
Many Linux distributions already have an SSH client. For Windows machines, you can
install PuTTY or any other client of your choice to gain access to a server.
To check if the client is available on your Linux-based system, you will need to:
1. Load an SSH terminal. You can either search for “terminal” or
press CTRL + ALT + T on your keyboard.
2. Type in ssh and press Enter in the terminal.
3. If the client is installed, you will receive a response that looks like this:
username@host:~$ ssh
[-J [user@]host[:port]] [-L address] [-l login_name] [-m mac_spec] [-O ctl_cmd]
[-o option] [-p port] [-Q query_option] [-R address] [-S ctl_path]
[-W host:port] [-w local_tun[:remote_tun]]
[user@]hostname [command]
username@host:~$
This means that you are ready to remotely connect to a physical or virtual machine.
Otherwise, you will have to install the OpenSSH client:
1. Run the following command to install the OpenSSH client on your
computer: sudo apt-get install openssh-client
2. Type in your superuser password when asked.
3. Hit Enter to complete the installation.
You are now able to SSH into any machine with the server-side application on it, provided
that you have the necessary privileges to gain access, as well as the hostname or IP
address.
How to Install an OpenSSH Server
In order to accept SSH connections, a machine needs to have the server-side part of the
SSH software toolkit.
If you first want to check if OpenSSH server is available on the Ubuntu system of the
remote computer that needs to accept SSH connections, you can try to connect to the
local host:
1. Open the terminal on the server machine. You can either search for “terminal”
or press CTRL + ALT + T on your keyboard.
2. Type in ssh localhost and hit enter.
3. For systems without the SSH server installed, the response will look similar
to this:

ssh: connect to host localhost port 22: Connection refused

In that case, install the server-side component (for example, with sudo apt-get install
openssh-server on Ubuntu, typing your superuser password when asked).

The response in the terminal should look similar to this if the SSH service is now running
properly:

Active: active (running) since Fr 2018-03-12 10:53:44 CET; 1min 22s ago
Process: 1174 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
Another way to test if the OpenSSH server is installed properly and will accept
connections is to try running the ssh localhost command again in your terminal prompt.
The response will look similar to this when you run the command for the first time:

The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:9jqmhko9Yo1EQAS1QeNy9xKceHFG5F8W6kp7EX9U3Rs.
Are you sure you want to continue connecting (yes/no)? yes
Congratulations! You have set up your server to accept SSH connection requests from a
different computer using an SSH client.
TIP
You can now edit the SSH daemon configuration file, for example, to change the default
port for SSH connections. In the terminal prompt, run this command:

sudo nano /etc/ssh/sshd_config

The configuration file will open in the editor of your choice; in this case, we used Nano.
Please note that you need to restart the SSH service every time you make any changes
to the sshd_config file, by running this command:

sudo systemctl restart ssh

To connect to a remote machine via SSH, use the following command:
ssh your_username@host_ip_address
If the username on your local machine matches the one on the server you are
trying to connect to, you can just type:
ssh host_ip_address
Here is an example of a connection request using the OpenSSH client, specifying the
port number as well:

Warning: Permanently added '185.52.53.222' (ECDSA) to the list of known hosts.
username@host:~$
You are now able to manage and control a remote machine using your terminal. If you
have trouble connecting to a remote server, make sure that:
• The IP address of the remote machine is correct.
• The port the SSH daemon is listening on is not blocked by a firewall or forwarded
incorrectly.
• Your username and password are correct.
• The SSH software is installed properly.
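The firewall/port item in the checklist above can be verified programmatically: try to open a TCP connection to the port the SSH daemon should be listening on (22 by default). This only tells you the port is reachable, not that SSH will authenticate you:

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

# Example: check whether something answers on the default SSH port locally
print(port_reachable("127.0.0.1", 22))
```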
1.4.5 DevOps
Deliver innovation faster with simple, reliable tools for continuous delivery
Azure Artifacts
Create, host and share packages with your team
Azure Boards
Plan, track and discuss work across your teams
Azure DevOps
Services for teams to share code, track work and ship software
Azure DevTest Labs
Quickly create environments using reusable templates and artifacts
Azure Monitor
Full observability into your applications, infrastructure and network
The Linux OS was developed by Linus Torvalds in 1991; it sprouted from an idea to
improve the UNIX OS. He suggested improvements, but they were rejected by the UNIX
designers. Therefore, he thought of launching an OS designed in a way that could be
modified by its users.
1) Linux Kernel
The Linux kernel is the core part of the operating system. It establishes communication
between devices and software. Moreover, it manages system resources. It has four
responsibilities:
• Memory management
• Process management
• Device drivers
• System calls and security
2) System Libraries
System libraries are special programs that help in accessing the kernel's features. A
kernel has to be triggered to perform a task, and this triggering is done by the applications.
But applications must know how to place a system call, because each kernel has a
different set of system calls. Programmers have developed a standard library of
procedures to communicate with the kernel. Each operating system supports these
standards, and these calls are then translated into the system calls for that operating
system. The most well-known system library for Linux is Glibc (the GNU C library).
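The role of the system library is visible even from Python: the same getpid system call is reachable both through the language runtime and by calling the C library directly. The sketch below loads the C library already linked into the process (Glibc on most Linux systems) via ctypes, and assumes a Unix-like system:

```python
import ctypes
import os

# Load the C library linked into this process (Glibc on most Linux systems)
libc = ctypes.CDLL(None)

# Both paths end in the same getpid() system call handled by the kernel:
print(os.getpid())    # via the Python runtime
print(libc.getpid())  # via the C library wrapper, same value
```

Either way, the library places the actual system call on the application's behalf, which is exactly the "triggering" the text describes.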
3) System Tools
Linux OS has a set of utility tools, which are usually simple commands. They are
software that the GNU project has written and published under its open-source license,
so that the software is freely available to everyone.
With the help of commands, you can access your files, edit and manipulate data in your
directories or files, change the location of files, and more.
4) Development Tools
With the above three components, your OS is running and working. But to build and
update your system, you have additional tools and libraries, written by programmers and
collectively called a toolchain. A toolchain is a vital development tool used by developers
to produce a working application.
These end tools make a system unique for a user. End tools are not required for the
operating system but are necessary for a user.
Some examples of end tools are graphic design tools, office suites, browsers, multimedia
players, etc.
This is one of the most asked questions about Linux systems. Why do we use a different
and slightly more complex operating system if we have a simple one like Windows?
There are various features of Linux systems that make them completely different and
among the most used operating systems. Linux may be a perfect operating system if you
want to get rid of viruses, malware, slowdowns, crashes, costly repairs, and more.
Most operating systems come in a compiled format, meaning the main source code has
been run through a program called a compiler that translates the source code into a
language that is known to the computer.
On the other hand, open source is completely different. The source code is included with
the compiled version and allows modification by anyone with some knowledge. It gives
us the freedom to run the program, freedom to change the code according to our use,
freedom to redistribute copies, and freedom to distribute copies modified by us.
In short, Linux is an operating system that is "for the people, by the people." And we
can dive into Linux without paying any cost, installing it on as many machines as we
want.
It is secure
Linux supports various security options that will save you from viruses, malware,
slowdowns, and crashes. Further, it will keep your data protected. Its security feature is
the main reason that it is the most favourable option for developers. It is not completely
safe, but it is less vulnerable than others. Each application needs to be authorized by the
admin user. The virus cannot be executed until the administrator provides the access
password. Linux systems do not require any antivirus program.
Linux is suitable for developers, as it supports almost all of the most used programming
languages, such as C/C++, Java, Python, Ruby, and more. Further, it provides a vast
range of useful applications for development.
Developers find the Linux terminal much better than the Windows command line, so
they prefer the terminal. The package manager on a Linux system helps programmers
understand how things are done. Bash scripting is also a useful feature for
programmers. Also, the SSH support helps in managing servers quickly.
Linux is a flexible OS, as it can be used for desktop applications, embedded systems,
and server applications. It can be used from wristwatches to supercomputers. It is
everywhere: in our phones, laptops, PCs, cars, and even in refrigerators. Further, it
supports various customization options.
Linux Distributions
Many organizations have modified the Linux operating system to make their own Linux
distributions. There are many Linux distributions available in the market. Each provides
a different flavor of the Linux operating system to the users, and we can choose any
distribution according to our needs. Some popular distros are Ubuntu, Fedora, Debian,
Linux Mint, Arch Linux, and many more.
For beginners, Ubuntu and Linux Mint are considered useful, and for proficient
developers, Debian and Fedora would be good choices. To get a list of distributions,
visit Linux Distributions.
Linux is a UNIX-like operating system, but it supports a range of hardware devices from
phones to supercomputers. Every Linux-based operating system has the Linux kernel
and set of software packages to manage hardware resources.
Also, Linux OS includes some core GNU tools to provide a way to manage the kernel
resources, install software, and configure the security setting and performance, and many
more. All these tools are packaged together to make a functional operating system.
We can use Linux through an interactive user interface as well as from the terminal
(Command Line Interface). Different distributions have a slightly different user interface
but almost all the commands will have the same behavior for all the distributions. To run
Linux from the terminal, press the "CTRL+ALT+T" keys. And, to explore its functionality,
press the application button given on the left down corner of your desktop.
Linux is an open-source operating system like Windows and MacOS. It is not just limited
to the operating system, but nowadays, it is also used as a platform to run desktops,
servers, and embedded systems. It provides various distributions and variations as it is
open source and has a modular design. The kernel is a core part of the Linux system.
1. Open Source
2. Security
The Linux security feature is the main reason that it is the most favorable option for
developers. It is not completely safe, but it is less vulnerable than others. Each
application needs to be authorized by the admin user. The virus is not executed until the
administrator provides the access password. Linux systems do not require any antivirus
program.
3. Free
Certainly, the biggest advantage of the Linux system is that it is free to use. We can easily
download it, and there is no need to buy the license for it. It is distributed under GNU GPL
(General Public License). Comparatively, we have to pay a huge amount for the licenses
of other operating systems.
4. Lightweight
Linux is lightweight. The requirements for running Linux are much lower than for other
operating systems. In Linux, the memory footprint and disk space requirements are also
lower. Generally, most Linux distributions require as little as 128MB of RAM, and around
the same amount of disk space.
5. Stability
Linux is more stable than other operating systems. Linux does not require a reboot to
maintain performance levels. It rarely hangs up or slows down, and it has long uptimes.
6. Performance
The Linux system provides high performance over different networks and is capable of
handling a large number of users simultaneously.
7. Flexibility
The Linux operating system is very flexible. It can be used for desktop applications,
embedded systems, and server applications too. It also provides various restriction
options for specific computers: we can install only the necessary components for a
system.
8. Software Updates
In Linux, software updates are in the user's control. We can select the required updates,
and a large number of system updates are available. These updates are much faster
than on other operating systems, so system updates can be installed easily without
facing any issues.
9. Distributions/ Distros
There are many Linux distributions available in the market, providing various options
and flavours of Linux to the users. We can choose any distro according to our needs.
Some popular distros are Ubuntu, Fedora, Debian, Linux Mint, Arch Linux, and many
more.
For the beginners, Ubuntu and Linux Mint would be useful and, Debian and Fedora would
be good choices for proficient programmers.
Almost all Linux distributions have a Live CD/USB option. It allows us to try or run the
Linux operating system without installing it.
Programmers prefer the Linux terminal over the Windows command line. The package
manager on a Linux system helps programmers understand how things are done. Bash
scripting is also a useful feature for programmers. Linux also provides support for SSH,
which helps in managing servers quickly.
Linux provides large community support. We can find support from various sources:
there are many forums available on the web to assist users, and developers from the
various open-source communities are ready to help us.
14. Privacy
Linux always takes care of user privacy as it never takes much private data from the user.
Comparatively, other operating systems ask for the user's private data.
15. Networking
Linux provides powerful support for networking. Client-server systems can be easily set
up on a Linux system. It provides various command-line tools, such as ssh, ip, mail, and
telnet, for connectivity with other systems and servers. Tasks such as network backup
are much faster than on other systems.
16. Compatibility
Linux is compatible with a large number of file formats as it supports almost all file formats.
17. Installation
The Linux installation process takes less time than that of other operating systems such as Windows.
Further, its installation is easy, as it requires little user input. It does not
demand much system configuration and can even be installed on old machines
with modest hardware.
A Linux system provides multiple desktop environments for its enhanced use. The
desktop environment can be selected during installation. We can select any
desktop environment, such as GNOME (GNU Network Object Model
Environment) or KDE (K Desktop Environment), as each offers its own specific
environment.
19. Multitasking
2. Requirements
You’ll need to consider the following before starting the installation:
Depending on your computer’s configuration, you may instead see an alternative boot
menu showing a large language selection pane. Use your mouse or cursor keys to select
a language and you’ll be presented with a simple menu.
Select the second option, ‘Install Ubuntu’, and press return to launch the desktop installer
automatically. Alternatively, select the first option, ‘Try Ubuntu without installing’, to test
Ubuntu (as before, you can also install Ubuntu from this mode).
A few moments later, after the desktop has loaded, you’ll see the welcome window. From
here, you can select your language from a list on the left and choose between either
installing Ubuntu directly, or trying the desktop first.
If your computer doesn’t automatically boot from USB, try holding F12 when your
computer first starts. With most machines, this will allow you to select the USB device
from a system-specific boot menu.
After selecting Continue, you will be asked what apps you would like to install to start
with. The two options are ‘Normal installation’ and ‘Minimal installation’. The first is the
equivalent of the old default bundle of utilities, applications, games and media players,
a great launchpad for any Linux installation. The second takes considerably less storage
space and allows you to install only what you need.
Beneath the installation-type question are two checkboxes; one to enable updates while
installing and another to enable third-party software.
Image: Options related to side-by-side installation or erasing a previous installation are only offered when pre-existing
installations are detected.
7. Begin installation
After configuring storage, click the ‘Install Now’ button. A small pane will appear with
an overview of the storage options you’ve chosen, with the chance to go back if the details
are incorrect. Click Continue to fix those changes in place and start the installation
process.
If you’re unsure of your time zone, type the name of a local town or city or use the map
to select your location.
Image: If you’re having problems connecting to the Internet, use the menu in the top-right-hand corner to select a
network.
9. Login details
Enter your name and the installer will automatically suggest a computer name and
username. These can easily be changed if you prefer. The computer name is how your
computer will appear on the network, while your username will be your login and account
name.
Next, enter a strong password. The installer will let you know if it’s too weak. You can
also choose to enable automatic login and home folder encryption. If your machine is
portable, we recommend keeping automatic login disabled and enabling encryption. This
should stop people accessing your personal files if the machine is lost or stolen.
Congratulations! You have successfully installed the world’s most popular Linux operating
system!
1. /bin : Contains the executable binary programs required during booting and
repairing, files required to run in single-user mode, and other important
basic commands viz., cat, du, df, tar, rpm, wc, history, etc.
2. /boot : Holds important files during boot-up process, including Linux
Kernel.
3. /dev : Contains device files for all the hardware devices on the machine,
e.g., cdrom, cpu, etc.
4. /etc : Contains applications’ configuration
files and the startup, shutdown, start, and stop scripts for every individual program.
5. /home : Home directory of the users. Every time a new user is created, a
directory with the user’s name is created within the home directory, which
contains other directories like Desktop, Downloads, Documents, etc.
6. /lib : The Lib directory contains kernel modules and shared
library images required to boot the system and run commands in root file
system.
7. /lost+found : This directory is created during the installation of Linux and is useful
for recovering files which may be broken due to an unexpected shutdown.
8. /media : Temporary mount directory is created for removable devices
viz., media/cdrom.
9. /mnt : Temporary mount directory for mounting file system.
10. /opt : Optional is abbreviated as opt. Contains third party application
software. Viz., Java, etc.
11. /proc : A virtual and pseudo file-system which contains information
about running process with a particular Process-id aka pid.
12. /root : This is the home directory of root user and should never be confused
with ‘/‘
13. /run : This directory is the only clean solution for early-runtime-
dir problem.
14. /sbin : Contains binary executable programs, required by System
Administrator, for Maintenance. Viz., iptables, fdisk, ifconfig, swapon,
reboot, etc.
15. /srv : Service is abbreviated as ‘srv‘. This directory contains server specific
and service related files.
16. /sys : Modern Linux distributions include a /sys directory as a virtual
filesystem, which stores and allows modification of the devices connected
to the system.
17. /tmp : System’s temporary directory, accessible by users and root. Stores
temporary files for users and the system until the next boot.
1. pwd Command
The pwd command is used to display the location of the current working directory.
Syntax:
~$: pwd
2. mkdir Command
The mkdir command is used to create a new directory under any directory.
Syntax:
~$: mkdir <directory name>
3. rmdir Command
The rmdir command is used to remove an empty directory.
Syntax:
~$: rmdir <directory name>
4. ls Command
The ls command is used to list the contents of a directory.
Syntax:
~$: ls
5. cd Command
The cd command is used to change the current working directory.
Syntax:
~$: cd <directory name>
6. touch Command
The touch command is used to create empty files. We can create multiple empty files by
executing it once.
Syntax:
~$: touch <file name1> <file name2>
7. cat Command
The cat command is a multi-purpose utility in the Linux system. It can be used to create
a file, display content of the file, copy the content of one file to another file, and more.
Syntax:
~$: cat > <file name>
Press "CTRL + D" to save the file. To display the content of the file, execute it as
follows:
~$: cat <file name>
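A minimal sketch of these uses (notes.txt and copy.txt are example names; a here-document stands in for typing text and pressing CTRL+D):

```shell
# Create a file by redirecting standard input into cat
# (interactively, you would type the text and press CTRL+D)
cat > notes.txt <<'EOF'
first line
second line
EOF

# Display the content of the file
cat notes.txt

# Copy the content of one file into another
cat notes.txt > copy.txt
```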
8. rm Command
The rm command is used to remove a file.
Syntax:
~$: rm <file name>
9. cp Command
The cp command is used to copy a file or directory from one location to another.
Syntax:
~$: cp <source> <destination>
10. mv Command
The mv command is used to move a file or a directory form one location to another
location.
Syntax:
~$: mv <source> <destination>
11. rename Command
The rename command is used to rename files. It is useful for renaming a large group of
files.
Syntax:
For example, to change the extension of all the text files from .txt to .pdf, execute the below command:
12. head Command
The head command is used to display the beginning of a file. By default, it displays the
first 10 lines.
Syntax:
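As a quick sketch of the default behaviour and the -n option (numbers.txt is just an example file):

```shell
# Create a 15-line example file
seq 1 15 > numbers.txt

# With no options, head prints the first 10 lines
head numbers.txt

# -n limits the output to a given number of lines
head -n 3 numbers.txt
```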
13. tail Command
The tail command is the counterpart of the head command. By default, it displays the
last 10 lines of a file.
Syntax:
14. tac Command
The tac command is the reverse of the cat command, as its name suggests. It displays the
file content in reverse order (starting from the last line).
Syntax:
15. more Command
The more command is quite similar to the cat command; it is used to display file content
in the same way. The only difference is that, for larger files, the more command displays
the output one screenful at a time.
In the more command, the following keys are used to scroll the page:
Syntax:
16. less Command
The less command is similar to the more command, but it also allows backward
movement in the file.
Syntax:
17. su Command
The su command is used to switch to another user account from the current shell. Run
without arguments, it switches to the superuser (root) account.
Syntax:
18. id Command
The id command is used to display the user ID (UID) and group ID (GID).
Syntax:
~$: id
Syntax:
The passwd command is used to create and change the password for a user.
Syntax:
Syntax:
The cat command is also used as a filter. To filter a file, it is used inside pipes.
Syntax:
The cut command is used to select a specific column of a file. The '-d' option is used as
a delimiter, and it can be a space (' '), a slash (/), a hyphen (-), or anything else. And, the
'-f' option is used to specify a column number.
Syntax:
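A small sketch of -d and -f (the /etc/passwd-style record below is an example value):

```shell
# A /etc/passwd-style record: fields separated by ':'
line='alice:x:1000:1000:Alice:/home/alice:/bin/bash'

# -d sets the delimiter, -f selects the column number
echo "$line" | cut -d ':' -f 1   # first field: the username
echo "$line" | cut -d ':' -f 6   # sixth field: the home directory
```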
The grep is the most powerful and widely used filter in a Linux system. 'grep' stands for
"global regular expression print." It is useful for searching content in a file.
Generally, it is used with a pipe.
Syntax:
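A minimal sketch, both on a file and inside a pipe (fruits.txt is an example file):

```shell
# Example file
printf 'apple\nbanana\ncherry\n' > fruits.txt

# Print the lines that contain "an"
grep 'an' fruits.txt

# Used with a pipe; -c counts the matching lines instead of printing them
cat fruits.txt | grep -c 'an'
```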
The 'comm' command is used to compare two sorted files or streams. By default, it displays three
columns: the first shows lines unique to the first file, the second shows lines unique to
the second file, and the third shows lines common to both
files.
Syntax:
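The three columns described above can be sketched with two small example files (one.txt and two.txt are example names; comm expects sorted input):

```shell
printf 'a\nb\nc\n' > one.txt
printf 'b\nc\nd\n' > two.txt

# Three columns: unique to one.txt, unique to two.txt, common to both
comm one.txt two.txt

# -12 suppresses columns 1 and 2, leaving only the common lines
comm -12 one.txt two.txt
```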
The sed command is also known as the stream editor. It is used to edit files using regular
expressions. It does not edit files permanently; instead, the edited content appears only
on the display, and the actual file is unaffected.
Syntax:
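A sketch of the non-destructive behaviour described above (data.txt is an example file):

```shell
printf 'old value\nkeep this\n' > data.txt

# Substitute "old" with "new" on each line of the output
sed 's/old/new/' data.txt

# The file itself is unchanged; sed wrote only to the display
cat data.txt
```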
The tee command is quite similar to the cat command. The only difference between both
filters is that, besides passing standard input through to standard output, tee also writes
the input to a file.
Syntax:
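A minimal sketch (log.txt is an example file name):

```shell
# tee passes its input to standard output AND writes it to log.txt
echo 'hello from tee' | tee log.txt

# The file now holds a copy of what was displayed
cat log.txt
```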
28. tr Command
The tr command is used to translate the file content, for example from lower case to upper case.
Syntax:
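The lower-to-upper translation mentioned above can be sketched as:

```shell
# Translate lower-case letters to upper case
echo 'hello world' | tr 'a-z' 'A-Z'
```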
The uniq command removes adjacent duplicate lines; applied to sorted input, it produces a list in which every line occurs only once.
Syntax:
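Because uniq only collapses adjacent duplicates, it is usually paired with sort, as in this sketch:

```shell
# Sort first so that duplicates become adjacent, then remove them
printf 'pear\napple\npear\napple\n' | sort | uniq
```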
30. wc Command
The wc command is used to count the lines, words, and characters in a file.
Syntax:
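A quick sketch of the three counts (sample.txt is an example file):

```shell
printf 'one two\nthree\n' > sample.txt

# Default output: lines, words, characters, file name
wc sample.txt

# Individual counts
wc -l sample.txt   # lines
wc -w sample.txt   # words
```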
Syntax:
Syntax:
The gzip command is used to reduce the file size. It is a compression tool. It replaces
the original file with a compressed file having the '.gz' extension.
Syntax:
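The replace-in-place behaviour can be sketched as follows (big.txt is an example file; gunzip reverses the compression):

```shell
printf 'some text to compress\n' > big.txt

# gzip replaces big.txt with big.txt.gz
gzip big.txt
ls big.txt.gz

# gunzip restores the original file
gunzip big.txt.gz
cat big.txt
```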
Syntax:
The find command is used to find a particular file within a directory. It also supports
various options to find a file, such as by name, by type, by date, and more.
Syntax:
The locate command is used to search for a file by name. It is quite similar to the find
command; the difference is that locate searches a prebuilt database, whereas the find
command searches the file system, which makes locate faster. To find files with the
locate command, keep its database updated.
Syntax:
The date command is used to display date, time, time zone, and more.
Syntax:
~$: date
The cal command is used to display the current month's calendar with the current date
highlighted.
Syntax:
~$: cal
The sleep command is used to pause the terminal for the specified amount of time. By
default, it takes the time in seconds.
Syntax:
~$: sleep <seconds>
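A small sketch that measures the pause using timestamps:

```shell
# Pause for one second between two timestamps
start=$(date +%s)
sleep 1
end=$(date +%s)
echo "waited $((end - start)) second(s)"
```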
The time command is used to display the time taken to execute a command.
Syntax:
~$: time <command>
Syntax:
42. df Command
The df command is used to display the disk space used in the file system. It displays the
output as in the number of used blocks, available blocks, and the mounted directory.
Syntax:
~$: df
The mount command is used to connect an external device file system to the system's
file system.
Syntax:
The Linux exit command is used to exit from the current shell. It takes a number as a
parameter and exits the shell with that number as the return status.
Syntax:
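The return-status behaviour can be sketched by running a subshell that exits with a chosen status and reading it back:

```shell
# The subshell calls 'exit 3'; '|| status=$?' captures its exit status
# without aborting this script
bash -c 'exit 3' || status=$?
echo "exit status was $status"
```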
45. clear Command
The clear command is used to clear the terminal screen.
Syntax:
~$: clear
After pressing the ENTER key, it will clear the terminal screen.
46. ip Command
The ip command is used to display and manage network interfaces, IP addresses, and routes.
Syntax:
~$: ip a or ip addr
Linux ssh command is used to create a remote connection through the ssh protocol.
Syntax:
The mail command is used to send emails from the command line.
Syntax:
The ping command is used to check the connectivity between two nodes, that is, whether
the server is reachable. It is short for "Packet Internet Groper."
The host command is used to display the IP address for a given domain name and vice
versa. It performs the DNS lookups for the DNS Query.
Syntax:
5. Under Instance details, type myVM for the Virtual machine name and choose East
US for your Region. Choose Windows Server 2019 Datacenter for
the Image and Standard_DS1_v2 for the Size. Leave the other defaults.
7. Under Inbound port rules, choose Allow selected ports and then select RDP
(3389) and HTTP (80) from the drop-down.
8. Leave the remaining defaults and then select the Review + create button at the
bottom of the page.
Create a remote desktop connection to the virtual machine. These directions tell you how
to connect to your VM from a Windows computer. On a Mac, you need an RDP client
such as the Microsoft Remote Desktop client from the Mac App Store.
1. On the overview page for your virtual machine, select the Connect button then
select RDP.
2. In the Connect with RDP page, keep the default options to connect by IP address,
over port 3389, and click Download RDP file.
When no longer needed, you can delete the resource group, virtual machine, and all
related resources.
Go to the resource group for the virtual machine, then select Delete resource group.
Confirm the name of the resource group to finish deleting the resources.
5. Under Instance details, type myVM for the Virtual machine name,
choose East US for your Region, and choose Ubuntu 18.04 LTS for
your Image. Leave the other defaults.
9. Under Inbound port rules > Public inbound ports, choose Allow selected
ports and then select SSH (22) and HTTP (80) from the drop-down.
10. Leave the remaining defaults and then select the Review + create button at
the bottom of the page.
1. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a
Windows machine, open a PowerShell prompt.
2. At your prompt, open an SSH connection to your virtual machine. Replace
the IP address with the one from your VM, and replace the path to
the .pem with the path to where the key file was downloaded.
Console
Tip
The SSH key you created can be used the next time you create a VM in Azure. Just
select the Use a key stored in Azure for SSH public key source the next time you
create a VM. You already have the private key on your computer, so you won't need to
download anything.
To see your VM in action, install the NGINX web server. From your SSH session, update
your package sources and then install the latest NGINX package.
Use a web browser of your choice to view the default NGINX welcome page. Type the
public IP address of the VM as the web address. The public IP address can be found on
the VM overview page or as part of the SSH connection string you used earlier.
When no longer needed, you can delete the resource group, virtual machine, and all
related resources. To do so, select the resource group for the virtual machine,
select Delete, then confirm the name of the resource group to delete.
This model provides customization flexibility, control, and data management within the
organization. Further, it involves the pooling of specialized human and technical
resources to effectively manage existing systems and applications, helping to meet
the requirements of organizations and users.
Public Cloud
This type of cloud service is provided on a network for public use. Customers have no
control over the location of the infrastructure. It is based on a shared cost model for all
the users, or on a licensing policy such as pay per user. Public deployment
models in the cloud are perfect for organizations with growing and fluctuating demands.
It is also popular among businesses of all sizes for their web applications, webmail, and
storage of non-sensitive data.
Community Cloud
It is a mutually shared model between organizations that belong to a particular community
such as banks, government organizations, or commercial enterprises. Community
members generally share similar issues of privacy, performance, and security. This type
of deployment model of cloud computing is managed and hosted internally or by a third-
party vendor.
Hybrid Cloud
This model incorporates the best of both private and public clouds, but each can remain
as separate entities. Further, as part of this deployment of cloud computing model, the
internal, or external providers can provide resources. A hybrid cloud is ideal for scalability,
flexibility, and security. A perfect example of this scenario would be that of an organization
who uses the private cloud to secure their data and interacts with its customers using the
public cloud.
Characteristics of IaaS
Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google
Compute Engine (GCE), Rackspace, and Cisco Metacloud.
IaaS is offered in three models: public, private, and hybrid cloud. The private cloud implies
that the infrastructure resides at the customer's premises. In the case of the public cloud, it is
located at the cloud computing platform vendor's data center, and the hybrid cloud is a
combination of the two in which the customer selects the best of both public and
private clouds.
1. Shared infrastructure
3. Pay-as-per-use model
IaaS providers offer services on a pay-per-use basis: users are
required to pay only for what they have used.
5. On-demand scalability
On-demand scalability is one of the biggest advantages of IaaS. Using IaaS, users
do not need to worry about upgrading software or troubleshooting issues related to
hardware components.
1. Security
Security is one of the biggest issues in IaaS. Most of the IaaS providers are not
able to provide 100% security.
Although IaaS service providers maintain the software, they do not upgrade
the software for some organizations.
3. Interoperability issues
It is difficult to migrate a VM from one IaaS provider to another, so customers
might face problems related to vendor lock-in.
The IaaS cloud computing platform cannot replace the traditional hosting method, but it
provides more than that, and the cost of each resource used is predictable as per the
usage.
IaaS cloud computing platform may not eliminate the need for an in-house IT department.
It will be needed to monitor or control the IaaS setup. IT salary expenditure might not
reduce significantly, but other IT expenses can be reduced.
Breakdowns at the IaaS cloud computing platform vendor's end can bring your business to
a halt. Assess the IaaS cloud computing platform vendor's stability and finances.
Make sure that SLAs (i.e., Service Level Agreement) provide backups for data, hardware,
network, and application failures. Image portability and third-party support is a plus point.
The IaaS cloud computing platform vendor can get access to your sensitive data. So,
engage with credible companies or organizations. Study their security policies and
precautions.
PaaS cloud computing platform is created for the programmer to develop, test, run, and
manage the applications.
Characteristics of PaaS
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App
Engine, Apache Stratos, Magento Commerce Cloud, and OpenShift.
PaaS providers provide various programming languages for the developers to develop
the applications. Some popular programming languages provided by PaaS providers are
Java, PHP, Ruby, Perl, and Go.
2. Application frameworks
3. Databases
4. Other tools
PaaS providers provide various other tools that are required to develop, test, and deploy
the applications.
Advantages of PaaS
1) Simplified Development
2) Lower risk
No need for up-front investment in hardware and software. Developers only need
a PC and an internet connection to start building applications.
Some PaaS vendors also provide predefined business functionality so that
users can avoid building everything from scratch and can start their
projects directly.
4) Instant community
5) Scalability
Applications deployed can scale from one to thousands of users without any
changes to the applications.
1) Vendor lock-in
One has to write the applications according to the platform provided by the PaaS
vendor, so the migration of an application to another PaaS vendor would be a
problem.
2) Data Privacy
Corporate data, whether critical or not, should remain private, so if it is not located
within the walls of the company, there can be a risk in terms of data privacy.
It may happen that some applications are local and some are in the cloud, so
there will be increased complexity when we want to use data in the
cloud together with local data.
Characteristics of SaaS
Business Services - SaaS Provider provides various business services to start-up the
business. The SaaS business services include ERP (Enterprise Resource
Planning), CRM (Customer Relationship Management), billing, and sales.
Social Networks - As we all know, social networking sites are used by the general public,
so social networking service providers use SaaS for their convenience and handle the
general public's information.
Mail Services - To handle an unpredictable number of users and the load on e-mail services,
many e-mail providers offer their services using SaaS.
Unlike traditional software, which is sold as a license with an up-front cost
(and often an optional ongoing support fee), SaaS providers generally price
their applications using a subscription fee, most commonly a monthly or annual
fee.
2. One to Many
SaaS services are offered in a one-to-many model, meaning a single instance of the
application is shared by multiple users.
Software as a service removes the need for installation, set-up, and daily
maintenance for organizations. The initial set-up cost for SaaS is typically less
than for enterprise software. SaaS vendors price their applications based on
usage parameters, such as the number of users using the application. This also
makes SaaS easy to monitor, and updates are applied automatically.
6. Multidevice support
SaaS services can be accessed from any device such as desktops, laptops,
tablets, phones, and thin clients.
7. API Integration
SaaS services easily integrate with other software or services through standard
APIs.
8. No client-side installation
SaaS services are accessed directly from the service provider over an internet
connection, so no client-side software installation is required.
1) Security
Since data is stored in the cloud, security may be an issue for some users.
However, cloud deployments are not necessarily less secure than in-house deployments.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the
end-user, there is a possibility that there may be greater latency when interacting
with the application compared to local deployment. Therefore, the SaaS model is
not suitable for applications whose demand response time is in milliseconds.
Switching SaaS vendors involves the difficult and slow task of transferring very
large data files over the internet and then converting and importing them into
another SaaS product.
The below table shows the difference between IaaS, PaaS, and SaaS
Activity: This activity requires learners to log in to the Azure dashboard and browse
through various service offerings by category. Learners need to identify the services and
classify them into the categories of IaaS, PaaS, and SaaS.
The Azure cloud platform comprises more than 200 products and cloud services designed to
help you bring new solutions to life, to solve today's challenges and create the future.
Build, run and manage applications across multiple clouds, on-premises and at the
edge, with the tools and frameworks of your choice.
1.10.13 Monitor
With the connectivity of the global Azure network, each of the Azure datacenters provides
high availability, low latency, scalability, and the latest advancements in cloud
infrastructure—all running on the Azure platform.
Together, these components keep data entirely within the trusted Microsoft network and
IP traffic never enters the public internet.
What are Azure datacenters?
Azure datacenters are unique physical buildings, located all over the globe, that house
a group of networked computer servers.
What is an Azure region?
An Azure region is a set of datacenters, deployed within a latency-defined perimeter and
connected through a dedicated regional low-latency network.
With more global regions than any other cloud provider, Azure gives customers the
flexibility to deploy applications where they need. An Azure region has discrete pricing
and service availability.
What are Azure Availability Zones?
Azure Availability Zones are unique physical locations within an Azure region and offer
high availability to protect your applications and data from datacenter failures. Each zone
is made up of one or more datacenters equipped with independent power, cooling, and
networking.
The physical separation of availability zones within a region protects apps and data from
facility-level issues. Zone-redundant services replicate your apps and data across Azure
Availability Zones to protect from single points of failure.
What is the Azure Global Network?
The Azure global network refers to all of the components in networking and is comprised
of the Microsoft global wide-area network (WAN), points of presence (PoPs), fiber, and
others.
What are Azure Edge Zones?
Azure Edge Zones are footprint extensions of Azure, placed in densely populated areas.
Azure Edge Zones support virtual machines (VMs), containers, and a select set of Azure
services that let you run latency-sensitive and throughput-intensive apps close to your
end users.
Azure Edge Zones are part of the Microsoft global network and offer secure, reliable, and
high-bandwidth connectivity between apps—running at the Azure Edge Zone (close to
the user), and the full set of Azure services running across the larger Azure regions.
This ensures that connection issues in one datacenter don’t cause issues for the wider
region. This also allows the addition of new datacenters without the need to route direct
network connections to each existing datacenter.
Reference: https://azurecomcdn.azureedge.net/cvt-
e3a122c14d54133f4987ad39a20b68bf418820b5ebac6c6a421232bba29588e9/images/shared/regions-map-
mobile.svg
Sign in to Azure
2. In the search box, enter Virtual Network. Select Virtual Network in the
search results.
3. In the Virtual Network page, select Create.
Table 1
Setting Value
Project details
Instance details
4. In Create virtual network, enter or select this information in the Basics tab:
Table 2
Setting Value
Setting Value
11. Select the Review + create tab or select the Review + create button.
12. Select Create.
Table 3
Setting Value
Project Details
Instance details
Setting Value
Administrator account
3. Select the Networking tab, or select Next: Disks, then Next: Networking.
4. In the Networking tab, select or enter:
Table 4
Setting Value
Network interface
5. Select the Review + create tab, or select the blue Review + create button
at the bottom of the page.
6. Review the settings, and then select Create.
Setting Value
Project Details
Instance details
Administrator account
3. Select the Networking tab, or select Next: Disks, then Next: Networking.
4. In the Networking tab, select or enter:
Table 6
Setting Value
Network interface
Setting Value
5. Select the Review + create tab, or select the blue Review + create button
at the bottom of the page.
6. Review the settings, and then select Create.
1. Go to the Azure portal to manage your private VM. Search for and
select Virtual machines.
2. Pick the name of your private virtual machine myVM1.
3. In the VM menu bar, select Connect, then select Bastion.
PowerShell
Pinging myvm2.cs4wv3rxdjgedggsfghkjrxuqf.bx.internal.cloudapp.net
[10.1.0.5] with 32 bytes of data:
Reply from 10.1.0.5: bytes=32 time=3ms TTL=128
Reply from 10.1.0.5: bytes=32 time=1ms TTL=128
Reply from 10.1.0.5: bytes=32 time=1ms TTL=128
Reply from 10.1.0.5: bytes=32 time=1ms TTL=128
PowerShell
Pinging myvm1.cs4wv3rxdjgedggsfghkjrxuqf.bx.internal.cloudapp.net
[10.1.0.4] with 32 bytes of data:
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
You connected to one VM from the internet and securely communicated between the two
VMs.
When you're done using the virtual network and the VMs, delete the resource group and
all of the resources it contains:
Azure supports a wide range of computing solutions for development and testing, running
applications, and extending your datacentre. The service supports Linux, Windows
Server, SQL Server, Oracle, IBM, and SAP. Azure also has many services that can run
virtual machines (VMs). Each service provides different options depending on your
requirements. Some of the most prominent services are:
Virtual machines are software emulations of physical computers. They include a virtual
processor, memory, storage, and networking resources. VMs host an operating system,
and you can install and run software just like a physical computer. When using a remote
desktop client, you can use and control the VM as if you were sitting in front of it.
With Azure Virtual Machines, you can create and use VMs in the cloud. Virtual Machines
provides infrastructure as a service (IaaS) and can be used in different ways. When you
need total control over an operating system and environment, VMs are an ideal choice.
Just like a physical computer, you can customize all the software running on the VM. This
ability is helpful when you're running custom software or custom hosting configurations.
Virtual machine scale sets are an Azure compute resource that you can use to deploy
and manage a set of identical VMs. With all VMs configured the same, virtual machine
scale sets are designed to support true autoscale. No pre-provisioning of VMs is required.
For this reason, it's easier to build large-scale services targeting big compute, big data,
and containerized workloads. As demand goes up, more VM instances can be added. As
demand goes down, VM instances can be removed. The process can be manual,
automated, or a combination of both.
Container Instances and Azure Kubernetes Service are Azure compute resources that
you can use to deploy and manage containers. Containers are lightweight, virtualized
application environments. They're designed to be quickly created, scaled out, and
stopped dynamically. You can run multiple instances of a containerized application on a
single host machine.
App Service
With Azure App Service, you can quickly build, deploy, and scale enterprise-grade web,
mobile, and API apps running on any platform. You can meet rigorous performance,
scalability, security, and compliance requirements while using a fully managed platform
to perform infrastructure maintenance. App Service is a platform as a service (PaaS)
offering.
Functions
Functions are ideal when you're concerned only about the code running your service and
not the underlying platform or infrastructure. They're commonly used when you need to
perform work in response to an event (often via a REST request), timer, or message from
another Azure service, and when that work can be completed quickly, within seconds or
less.
This section walks you through how to deploy an Apache web server, MySQL,
and PHP (the LAMP stack) on an Ubuntu VM in Azure. To see the LAMP server in action,
you can optionally install and configure a WordPress site. In this tutorial, you learn how to:
This setup is for quick tests or proof of concept. For more on the LAMP stack, including
recommendations for a production environment, see the Ubuntu documentation.
This tutorial uses the CLI within the Azure Cloud Shell, which is constantly updated to the
latest version. To open the Cloud Shell, select Try it from the top of any code block.
If you choose to install and use the CLI locally, this tutorial requires that you are running
the Azure CLI version 2.0.30 or later. Run az --version to find the version. If you need to
install or upgrade, see Install Azure CLI.
Create a resource group with the az group create command. An Azure resource group is
a logical container into which Azure resources are deployed and managed.
Azure CLI
Try It
az group create --name myResourceGroup --location eastus
The following example creates a VM named myVM and creates SSH keys if they do not
already exist in a default key location. To use a specific set of keys, use the
--ssh-key-value option. The command also sets azureuser as an administrator user name. You use
this name later to connect to the VM.
Azure CLI
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys
When the VM has been created, the Azure CLI shows information similar to the following
example. Take note of the publicIpAddress. This address is used to access the VM in
later steps.
Output
{
"fqdns": "",
By default, only SSH connections are allowed into Linux VMs deployed in Azure. Because
this VM is going to be a web server, you need to open port 80 from the internet. Use
the az vm open-port command to open the desired port.
Azure CLI
az vm open-port --port 80 --resource-group myResourceGroup --name myVM
If you don't already know the public IP address of your VM, run the az network public-ip
list command. You need this IP address for several later steps.
Azure CLI
az network public-ip list --resource-group myResourceGroup --query "[].ipAddress" --output tsv
Use the following command to create an SSH session with the virtual machine. Substitute
the correct public IP address of your virtual machine. In this example, the IP address
is 40.121.4.215. azureuser is the administrator user name set when you created the VM.
Bash
ssh azureuser@40.121.4.215
Run the following command to update Ubuntu package sources and install Apache,
MySQL, and PHP. Note the caret (^) at the end of the command, which is part of the lamp-
server^ package name.
Bash
sudo apt update && sudo apt install lamp-server^
You are prompted to install the packages and other dependencies. This process installs
the minimum required PHP extensions needed to use PHP with MySQL.
Bash
apache2 -v
With Apache installed, and port 80 open to your VM, the web server can now be accessed
from the internet. To view the Apache2 Ubuntu Default Page, open a web browser, and
enter the public IP address of the VM. Use the public IP address (40.121.4.215) you used
to SSH to the VM:
Check the version of MySQL with the following command (note the capital V parameter):
Bash
mysql -V
To help secure the installation of MySQL, including setting a root password, run
the mysql_secure_installation script.
Bash
sudo mysql_secure_installation
You can optionally set up the Validate Password Plugin (recommended). Then, set a
password for the MySQL root user, and configure the remaining security settings for your
environment. We recommend that you answer "Y" (yes) to all questions.
Verify PHP
Bash
php -v
If you want to test further, create a quick PHP info page to view in a browser. The following
command creates the PHP info page:
Bash
sudo sh -c 'echo "<?php phpinfo(); ?>" > /var/www/html/info.php'
Now you can check the PHP info page you created. Open a browser and go
to http://yourPublicIPAddress/info.php. Substitute the public IP address of your VM. It
should look similar to this image.
http://40.121.4.215/info.php
Durable and highly available. Redundancy ensures that your data is safe
in the event of transient hardware failures. You can also opt to replicate data
across datacenters or geographical regions for additional protection from
local catastrophe or natural disaster. Data replicated in this way remains
highly available in the event of an unexpected outage.
Secure. All data written to an Azure storage account is encrypted by the
service. Azure Storage provides you with fine-grained control over who has
access to your data.
Scalable. Azure Storage is designed to be massively scalable to meet the
data storage and performance needs of today's applications.
Managed. Azure handles hardware maintenance, updates, and critical
issues for you.
Accessible. Data in Azure Storage is accessible from anywhere in the world
over HTTP or HTTPS. Microsoft provides client libraries for Azure Storage in
a variety of languages, including .NET, Java, Node.js, Python, PHP, Ruby,
Go, and others, as well as a mature REST API. Azure Storage supports
scripting in Azure PowerShell or Azure CLI. And the Azure portal and Azure
Storage Explorer offer easy visual solutions for working with your data.
Azure Blobs: A massively scalable object store for text and binary data. Also
includes support for big data analytics through Data Lake Storage Gen2.
Azure Files: Managed file shares for cloud or on-premises deployments.
Azure Queues: A messaging store for reliable messaging between
application components.
Azure Tables: A NoSQL store for schemaless storage of structured data.
Azure Disks: Block-level storage volumes for Azure VMs.
Each service is accessed through a storage account. To get started, see Create a storage
account.
The following table compares Files, Blobs, Disks, Queues, and Tables, and shows
example scenarios for each.
Azure Files: Offers fully managed cloud file shares that you can access from anywhere via the industry-standard Server Message Block (SMB) protocol. You can mount Azure file shares from cloud or on-premises deployments of Windows, Linux, and macOS.
Example scenarios: You want to "lift and shift" an application to the cloud that already uses the native file system APIs to share data between it and other applications running in Azure. You want to replace or supplement on-premises file servers or NAS devices. You want to store development and debugging tools that need to be accessed from many virtual machines.
Azure Blobs: Allows unstructured data to be stored and accessed at a massive scale in block blobs. Also supports Azure Data Lake Storage Gen2 for enterprise big data analytics solutions.
Example scenarios: You want your application to support streaming and random access scenarios. You want to be able to access application data from anywhere. You want to build an enterprise data lake on Azure and perform big data analytics.
Azure Disks: Allows data to be persistently stored and accessed from an attached virtual hard disk.
Example scenario: You want to "lift and shift" applications that use native file system APIs to read and write data to persistent disks.
Azure Tables: Allow you to store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design.
Example scenario: You want to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires.
Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is
optimized for storing massive amounts of unstructured data, such as text or binary data.
Objects in Blob storage can be accessed from anywhere in the world via HTTP or HTTPS.
Users or client applications can access blobs via URLs, the Azure Storage REST
API, Azure PowerShell, Azure CLI, or an Azure Storage client library. The storage client
libraries are available for multiple languages,
including .NET, Java, Node.js, Python, PHP, and Ruby.
For more information about Blob storage, see Introduction to Blob storage.
Azure Files enables you to set up highly available network file shares that can be
accessed by using the standard Server Message Block (SMB) protocol. That means that
multiple VMs can share the same files with both read and write access. You can also read
the files using the REST interface or the storage client libraries.
One thing that distinguishes Azure Files from files on a corporate file share is that you
can access the files from anywhere in the world using a URL that points to the file and
includes a shared access signature (SAS) token. You can generate SAS tokens; they
allow specific access to a private asset for a specific amount of time.
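The signed-URL idea behind SAS tokens can be illustrated with a short sketch. This is a conceptual Python example using only the standard library, not Azure's actual SAS token format or signing scheme:

```python
import base64
import hashlib
import hmac
import time

def make_token(resource: str, expiry: int, key: bytes) -> str:
    """Sign a resource path plus expiry time with an account key
    (illustrative only, not Azure's real SAS format)."""
    payload = f"{resource}\n{expiry}".encode()
    sig = hmac.new(key, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(sig).decode()

def verify_token(resource: str, expiry: int, token: str, key: bytes) -> bool:
    """Reject the token if the signature is wrong or the expiry has passed."""
    if time.time() > expiry:
        return False
    expected = make_token(resource, expiry, key)
    return hmac.compare_digest(expected, token)

key = b"account-key"
expiry = int(time.time()) + 3600  # valid for one hour
token = make_token("/share/report.xlsx", expiry, key)
print(verify_token("/share/report.xlsx", expiry, token, key))  # True
print(verify_token("/share/other.xlsx", expiry, token, key))   # False: wrong resource
```

The key property shown here is that the token grants access to one specific resource for a bounded time, without sharing the account key itself.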
Many on-premises applications use file shares. This feature makes it easier
to migrate those applications that share data to Azure. If you mount the file
share to the same drive letter that the on-premises application uses, the part
of your application that accesses the file share should work with minimal, if
any, changes.
Configuration files can be stored on a file share and accessed from multiple
VMs. Tools and utilities used by multiple developers in a group can be stored
on a file share, ensuring that everybody can find them, and that they use the
same version.
Resource logs, metrics, and crash dumps are just three examples of data
that can be written to a file share and processed or analyzed later.
For more information about Azure Files, see Introduction to Azure Files.
Some SMB features are not applicable to the cloud. For more information, see Features
not supported by the Azure File service.
The Azure Queue service is used to store and retrieve messages. Queue messages can
be up to 64 KB in size, and a queue can contain millions of messages. Queues are
generally used to store lists of messages to be processed asynchronously.
For example, say you want your customers to be able to upload pictures, and you want
to create thumbnails for each picture. You could have your customer wait for you to create
the thumbnails while uploading the pictures. An alternative would be to use a queue.
When the customer finishes their upload, write a message to the queue. Then have an
Azure Function retrieve the message from the queue and create the thumbnails. Each of
these parts can be scaled separately, giving you more control when tuning for your load.
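The decoupling that the queue provides can be sketched with Python's standard library. This is a conceptual stand-in for the Azure Storage queue and the Azure Function, not SDK code:

```python
import queue
import threading

uploads = queue.Queue()  # stands in for an Azure Storage queue
thumbnails = []

def worker():
    """Simulates an Azure Function triggered by queue messages."""
    while True:
        picture = uploads.get()
        if picture is None:  # sentinel: stop the worker
            break
        thumbnails.append(f"thumb_{picture}")
        uploads.task_done()

t = threading.Thread(target=worker)
t.start()

# The upload handler returns immediately after enqueueing a message;
# the customer never waits for thumbnail generation.
for picture in ["cat.jpg", "dog.jpg"]:
    uploads.put(picture)

uploads.join()  # wait until the backlog is drained
uploads.put(None)
t.join()
print(thumbnails)  # ['thumb_cat.jpg', 'thumb_dog.jpg']
```

Because producer and consumer only share the queue, each side can be scaled independently, which is the point of the pattern.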
Azure Table storage is now part of Azure Cosmos DB. To see Azure Table storage
documentation, see the Azure Table Storage Overview. In addition to the existing Azure
Table storage service, there is a new Azure Cosmos DB Table API offering that provides
throughput-optimized tables, global distribution, and automatic secondary indexes. To
learn more and try out the new premium experience, see Azure Cosmos DB Table API.
For more information about Table storage, see Overview of Azure Table storage.
An Azure managed disk is a virtual hard disk (VHD). You can think of it like a physical
disk in an on-premises server, but virtualized. Azure managed disks are stored as page
blobs, which are a random IO storage object in Azure. We call a managed disk 'managed'
because it is an abstraction over page blobs, blob containers, and Azure storage
accounts. With managed disks, all you have to do is provision the disk, and Azure takes
care of the rest.
For more information about managed disks, see Introduction to Azure managed disks.
Azure Storage offers several types of storage accounts. Each type supports different
features and has its own pricing model. For more information about storage account
types, see Azure storage account overview.
Every request to Azure Storage must be authorized. Azure Storage supports the following
authorization methods:
Azure Active Directory (Azure AD) integration for blob and queue
data. Azure Storage supports authentication and authorization with Azure
AD for the Blob and Queue services via Azure role-based access control
(Azure RBAC). Authorizing requests with Azure AD is recommended for
superior security and ease of use. For more information, see Authorize
access to Azure blobs and queues using Azure Active Directory.
There are two basic kinds of encryption available for the core storage services. For more
information about security and encryption, see the Azure Storage security guide.
Encryption at rest
Azure Storage encryption protects and safeguards your data to meet your organizational
security and compliance commitments. Azure Storage automatically encrypts all data
prior to persisting to the storage account and decrypts it prior to retrieval. The encryption,
decryption, and key management processes are transparent to users. Customers can
also choose to manage their own keys using Azure Key Vault. For more information,
see Azure Storage encryption for data at rest.
Client-side encryption
The Azure Storage client libraries provide methods for encrypting data from the client
library before sending it across the wire and decrypting the response. Data encrypted via
client-side encryption is also encrypted at rest by Azure Storage. For more information
about client-side encryption, see Client-side encryption with .NET for Azure Storage.
To ensure that your data is durable, Azure Storage stores multiple copies of your data.
When you set up your storage account, you select a redundancy option. For more
information, see Azure Storage redundancy.
You have several options for moving data into or out of Azure Storage. Which option you
choose depends on the size of your dataset and your network bandwidth. For more
information, see Choose an Azure solution for data transfer.
Pricing
When making decisions about how your data is stored and accessed, you should also
consider the costs involved. For more information, see Azure Storage pricing.
You can access resources in a storage account by any language that can make
HTTP/HTTPS requests. Additionally, the core Azure Storage services offer programming
libraries for several popular languages. These libraries simplify many aspects of working
with Azure Storage by handling details such as synchronous and asynchronous
invocation, batching of operations, exception management, automatic retries, operational
behavior, and so forth. Libraries are currently available for the following languages and
platforms, with others in the pipeline.
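The retry behavior these libraries handle for you can be sketched as follows. This is an illustrative simplification in Python, not the actual Azure SDK retry policy; the function and variable names are hypothetical:

```python
import time

def with_retries(operation, attempts=4, base_delay=0.01):
    """Retry a call that may fail transiently, with exponential backoff,
    roughly what storage client libraries do on your behalf."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky_get_blob():
    """Hypothetical operation that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return b"blob contents"

print(with_retries(flaky_get_blob))  # b'blob contents' (after two retries)
```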
The following image shows the settings on the Basics tab for a new storage account:
1. In the Azure portal, select All services > the Storage category > Storage
accounts.
2. Under Storage accounts, select Add.
3. In the Subscription field, select the subscription in which to create the
storage account.
4. In the Resource group field, select an existing resource group or
select Create new, and enter a name for the new resource group.
5. In the Storage account name field, enter a name for the account. Note the
following guidelines:
o The name must be unique across Azure.
o The name must be between three and 24 characters long.
o The name can include only numbers and lowercase letters.
6. In the Location field, select a location for the storage account, or use the
default location.
7. For the rest of the settings, configure the following:
Table 1
Field Value
Performance Select Premium.
Account kind Select BlockBlobStorage.
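The storage account naming guidelines are easy to check programmatically. A minimal Python sketch (the global-uniqueness rule can only be verified against Azure itself):

```python
import re

# 3-24 characters, lowercase letters and numbers only.
ACCOUNT_NAME = re.compile(r"[a-z0-9]{3,24}")

def is_valid_account_name(name: str) -> bool:
    """Check the documented syntax rules for a storage account name."""
    return ACCOUNT_NAME.fullmatch(name) is not None

print(is_valid_account_name("mystorageacct01"))  # True
print(is_valid_account_name("My-Storage"))       # False: uppercase and hyphen
print(is_valid_account_name("ab"))               # False: too short
```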
To recover a deleted storage account from within another storage account, follow these
steps:
1. Navigate to the overview page for an existing storage account in the Azure
portal.
2. In the Support + troubleshooting section, select Recover deleted account.
3. From the dropdown, select the account to recover, as shown in the
following image. If the storage account that you want to recover is not in the
dropdown, then it cannot be recovered.
4. Select the Recover button to restore the account. The portal displays a notification
that the recovery is in progress
2.4.5 Upload, download, and list blobs with the Azure portal
Create a container
Block blobs consist of blocks of data assembled to make a blob. Most scenarios using
Blob storage employ block blobs. Block blobs are ideal for storing text and binary data in
the cloud, like files, images, and videos. This quickstart shows how to work with block
blobs.
To upload a block blob to your new container in the Azure portal, follow these steps:
1. In the Azure portal, navigate to the container you created in the previous
section.
2. Select the container to show a list of blobs it contains. This container is new,
so it won't yet contain any blobs.
3. Select the Upload button to open the upload blade and browse your local file
system to find a file to upload as a block blob. You can optionally expand
the Advanced section to configure other settings for the upload operation.
You can download a block blob to display in the browser or save to your local file system.
To download a block blob, follow these steps:
1. Navigate to the list of blobs that you uploaded in the previous section.
2. Right-click the blob you want to download, and select Download.
Archive Blob
Enabling Archiving with Azure Blob Storage
1. Sign in to the Azure portal.
2. In the Azure portal, search for and select All Resources.
3. Select your storage account.
4. Select your container and then select your blob.
5. In the Blob properties, select Change tier.
6. Select the Hot or Cool access tier.
7. Select a Rehydrate Priority of Standard or High.
8. Select Save at the bottom.
To delete one or more blobs in the Azure portal, follow these steps:
In this quickstart, you create a single database in Azure SQL Database using either the
Azure portal, a PowerShell script, or an Azure CLI script. You then query the database
using Query editor in the Azure portal.
Prerequisite
An active Azure subscription. If you don't have one, create a free account.
Create a single database
Portal
To create a single database in the Azure portal, this quickstart starts at the Azure SQL
page.
3. On the Basics tab of the Create SQL Database form, under Project details,
select the desired Azure Subscription.
4. For Resource group, select Create new, enter myResourceGroup, and
select OK.
5. For Database name enter mySampleDatabase.
6. For Server, select Create new, and fill out the New server form with the
following values:
o Server name: Enter mysqlserver, and add some characters for
uniqueness. We can't provide an exact server name to use
because server names must be globally unique for all servers in
Azure, not just unique within a subscription. So enter something unique.
Select OK.
Once your database is created, you can use the Query editor (preview) in the Azure portal
to connect to the database and query data.
1. In the portal, search for and select SQL databases, and then select your
database from the list.
2. On the page for your database, select Query editor (preview) in the left menu.
3. Enter your server admin login information, and select OK.
4. Enter the following query in the Query editor pane:
SQL
SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
FROM SalesLT.ProductCategory pc
JOIN SalesLT.Product p
ON pc.productcategoryid = p.productcategoryid;
5. Select Run, and then review the query results in the Results pane.
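The sample join above can be reproduced locally with Python's built-in sqlite3 module against toy stand-in tables. Note that SQLite uses LIMIT where T-SQL uses TOP; the table and column names here simply mirror the query above, and the rows are made up:

```python
import sqlite3

# In-memory stand-in for the sample database tables used by the query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ProductCategory (productcategoryid INTEGER, Name TEXT);
    CREATE TABLE Product (productid INTEGER, name TEXT, productcategoryid INTEGER);
    INSERT INTO ProductCategory VALUES (1, 'Bikes'), (2, 'Helmets');
    INSERT INTO Product VALUES (10, 'Road-150', 1), (11, 'Sport-100', 2);
""")

# Same join shape as the portal query; LIMIT replaces T-SQL's TOP.
rows = conn.execute("""
    SELECT pc.Name AS CategoryName, p.name AS ProductName
    FROM ProductCategory pc
    JOIN Product p ON pc.productcategoryid = p.productcategoryid
    ORDER BY CategoryName
    LIMIT 20
""").fetchall()
print(rows)  # [('Bikes', 'Road-150'), ('Helmets', 'Sport-100')]
```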
2.4.8 Set up SQL Data Sync between databases in Azure SQL Database
and SQL Server
Create sync group
1. Go to the Azure portal to find your database in SQL Database. Search for
and select SQL databases.
3. On the SQL database menu for the selected database, select Sync to other
databases.
Table 1
Setting Description
Sync Group Enter a name for the new sync group. This name is distinct from the
Name name of the database itself.
If you choose Use existing database, select the database from the
list.
Setting Description
The first sync begins after the selected interval period elapses from the
time the configuration is saved.
Use private link Choose a service managed private endpoint to establish a secure
connection between the sync service and the hub database.
Note
Microsoft recommends that you create a new, empty database for use as the Sync
Metadata Database. Data Sync creates tables in this database and runs a
frequent workload. This database is shared as the Sync Metadata
Database for all sync groups in a selected region and subscription. You can't
change the database or its name without removing all sync groups and sync
agents in the region. Additionally, an Elastic jobs database cannot be used
as the SQL Data Sync Metadata database and vice versa.
Select OK and wait for the sync group to be created and deployed.
5. On the New Sync Group page, if you selected Use private link, you will need
to approve the private endpoint connection. The link in the info message will
take you to the private endpoint connections experience where you can
approve the connection.
After the new sync group is created and deployed, Add sync members (step 2) is
highlighted on the New sync group page.
In the Hub Database section, enter existing credentials for the server on which the hub
database is located. Don't enter new credentials in this section.
In the Member Database section, optionally add a database in Azure SQL Database to
the sync group by selecting Add an Azure SQL Database. The Configure Azure SQL
Database page opens.
To add a database in Azure SQL Database
Setting Description
Sync Member Name Provide a name for the new sync member. This name is
distinct from the database name itself.
Sync Directions Select Bi-directional Sync, To the Hub, or From the Hub.
Username and Password Enter the existing credentials for the server on which the
member database is located. Don't enter new credentials in
this section.
Setting Description
Select OK and wait for the new sync member to be created and deployed.
In the Member Database section, optionally add a SQL Server database to the sync group
by selecting Add an On-Premises Database. The Configure On-Premises page opens
where you can do the following things:
1. Select Choose the Sync Agent Gateway. The Select Sync Agent page opens.
2. On the Choose the Sync Agent page, choose whether to use an existing agent
or create an agent.
If you choose Existing agents, select the existing agent from the list.
1. Download the data sync agent from the link provided and install it
on the computer where the SQL Server is located. You can also
download the agent directly from Azure SQL Data Sync Agent.
Important
You have to open outbound TCP port 1433 in the firewall to let the
client agent communicate with the server.
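A quick way to confirm that a TCP port is reachable is a simple connection probe. The following Python sketch is a generic helper, not part of the Data Sync agent; the example checks a local listener rather than a real SQL Server (whose default port is 1433):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a locally listening socket on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
print(port_reachable("127.0.0.1", port))  # True: something is listening
server.close()
print(port_reachable("127.0.0.1", port))  # False: the listener is gone
```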
1. In the sync agent app, select Submit Agent Key. The Sync Metadata
Database Configuration dialog box opens.
2. In the Sync Metadata Database Configuration dialog box, paste in
the agent key copied from the Azure portal. Also provide the
existing credentials for the server on which the metadata database
is located. (If you created a metadata database, this database is on
the same server as the hub database.) Select OK and wait for the
configuration to finish.
Note: To connect to SQL Data Sync and the local agent, add your user name to the
role DataSync_Executor. Data Sync creates this role on the SQL Server instance.
Configure sync group
After the new sync group members are created and deployed, Configure sync group (step
3) is highlighted in the New sync group page.
This limitation constrains the physical layout of the network. All devices have to be
physically located near each other, for example, in the same room. Finally, if there's a
break in the bus cable, the whole network fails.
Ring topology
A diagram of a ring topology showing nodes connected in a ring.
In a ring topology, each network device is connected to its neighbor to form a ring. This
form of network is more resilient than the bus topology.
A break in the cable ring also affects the performance of the network.
Mesh topology
A diagram of a mesh topology where all nodes are connected to all other nodes.
The mesh topology is described as either a physical mesh or a logical mesh.
In a physical mesh, each network device connects to every other network device in the
network. It dramatically increases the resilience of a network but has the physical
overhead of connecting all devices.
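That physical overhead is easy to quantify: a full mesh of n devices needs n(n-1)/2 point-to-point links, so the link count grows quadratically. A small Python sketch:

```python
def full_mesh_links(n: int) -> int:
    """Number of point-to-point links in a full mesh of n devices: n*(n-1)/2."""
    return n * (n - 1) // 2

# The quadratic growth is why full meshes are rarely built in practice.
for n in (4, 10, 50):
    print(n, full_mesh_links(n))  # 4 -> 6, 10 -> 45, 50 -> 1225
```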
Few networks today are built as a full mesh. Most networks use a partial mesh, where
some machines interconnect, but others connect through one device.
There's a subtle difference between a physical mesh network and a logical one. The
perception is that most modern networks are mesh based, since each device can see
and communicate with any other device on the network.
This description is of a logical mesh network and is primarily made possible through the
use of network protocols.
Star topology
A diagram of a star topology with a single node connected to all other nodes.
The star topology is the most commonly used network topology. Each network device
connects to a centralized hub or switch.
Switches and hubs can be linked together to extend and build more extensive networks.
This type of topology is, by far, the most robust and scalable.
2.5.3 Ethernet
Ethernet is a networking standard that's synonymous with wire-based LAN networks and
also used in MAN and WAN networks. Ethernet has replaced other wired LAN
technologies like ARCNET and Token Ring and is an industry standard.
While Ethernet is associated with wired networks, keep in mind that it's not limited to wire,
since it's used over fiber-optic links as well.
Application layer: The top layer of this stack is concerned with application or process
communication. The application layer is responsible for determining which
communication protocols to use based on what type of message is transmitted. For
example, the layer assigns the correct email protocols such as POP, SMTP, or IMAP if
the message is email content.
Transport layer: This layer is responsible for host-to-host communication on the network.
The protocols associated with this layer are TCP and UDP. TCP is responsible for flow
control. UDP is responsible for providing a datagram service.
Internet layer: This layer is responsible for exchanging datagrams. A datagram contains
the data from the transport layer and adds in the origin and recipient IP addresses. The
protocols associated with this layer are IP, ICMP, and the Internet Protocol Security
(IPsec) suite.
Network access layer: The bottom layer of this stack is responsible for defining how the
data is sent across the network. The protocols associated with this layer are ARP, MAC,
Ethernet, DSL, and ISDN.
Internet Protocol
What is the Internet Protocol (IP)?
The Internet Protocol (IP) is a protocol, or set of rules, for routing and addressing packets
of data so that they can travel across networks and arrive at the correct destination.
Data traversing the Internet is divided into smaller pieces, called packets. IP information
is attached to each packet, and this information helps routers to send packets to the right
place.
Every device or domain that connects to the Internet is assigned an IP address, and as
packets are directed to the IP address attached to them, data arrives where it is needed.
Once the packets arrive at their destination, they are handled differently depending on
which transport protocol is used in combination with IP. The most common transport
protocols are TCP and UDP.
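To make the idea of IP information attached to each packet concrete, the following Python sketch unpacks the fixed 20-byte IPv4 header from a hand-built sample packet. The field layout follows the IPv4 specification; the addresses and values are illustrative:

```python
import socket
import struct

def parse_ipv4_header(header: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header: version, TTL, protocol, and the
    origin/recipient addresses that routers use to forward the packet."""
    fields = struct.unpack("!BBHHHBBH4s4s", header[:20])
    return {
        "version": fields[0] >> 4,                       # high nibble of byte 0
        "ttl": fields[5],
        "protocol": {6: "TCP", 17: "UDP"}.get(fields[6], fields[6]),
        "src": socket.inet_ntoa(fields[8]),
        "dst": socket.inet_ntoa(fields[9]),
    }

# A hand-built sample header: IPv4, TTL 64, TCP, 10.0.0.1 -> 93.184.216.34
sample = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,  # version 4, header length 5 words
    0, 40, 0, 0,   # TOS, total length, identification, flags/fragment
    64, 6, 0,      # TTL, protocol (6 = TCP), checksum (left zero here)
    socket.inet_aton("10.0.0.1"), socket.inet_aton("93.184.216.34"),
)
print(parse_ipv4_header(sample))
```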
Learn how to create a virtual network using the Azure portal. You deploy two virtual
machines (VMs). Next, you securely communicate between VMs and connect to VMs
from the internet. A virtual network is the fundamental building block for your private
network in Azure. It enables Azure resources, like VMs, to securely communicate with
each other and with the internet.
Table 1
Setting Value
Project details
Instance details
Setting Value
11. Select the Review + create tab or select the Review + create button.
12. Select Create.
Table 3
Setting Value
Project Details
Instance details
Setting Value
Administrator account
3. Select the Networking tab, or select Next: Disks, then Next: Networking.
4. In the Networking tab, select or enter:
Table 4
Setting Value
Network interface
5. Select the Review + create tab, or select the blue Review + create button at
the bottom of the page.
6. Review the settings, and then select Create.
Table 5
Setting Value
Project Details
Instance details
Administrator account
3. Select the Networking tab, or select Next: Disks, then Next: Networking.
4. In the Networking tab, select or enter:
Setting Value
Network interface
5. Select the Review + create tab, or select the blue Review + create button at
the bottom of the page.
6. Review the settings, and then select Create.
1. Go to the Azure portal to manage your private VM. Search for and
select Virtual machines.
2. Pick the name of your private virtual machine myVM1.
3. In the VM menu bar, select Connect, then select Bastion.
PowerShell
Pinging myvm2.cs4wv3rxdjgedggsfghkjrxuqf.bx.internal.cloudapp.net [10.1.0.5] with 32
bytes of data:
Reply from 10.1.0.5: bytes=32 time=3ms TTL=128
Reply from 10.1.0.5: bytes=32 time=1ms TTL=128
Reply from 10.1.0.5: bytes=32 time=1ms TTL=128
Reply from 10.1.0.5: bytes=32 time=1ms TTL=128
PowerShell
Pinging myvm1.cs4wv3rxdjgedggsfghkjrxuqf.bx.internal.cloudapp.net [10.1.0.4] with 32
bytes of data:
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
In this quickstart, you created a default virtual network and two VMs.
When you're done using the virtual network and the VMs, delete the resource group and
all of the resources it contains:
This section describes services that provide connectivity between Azure resources,
connectivity from an on-premises network to Azure resources, and branch to branch
connectivity in Azure - Virtual Network (VNet), ExpressRoute, VPN Gateway, Virtual
WAN, Virtual network NAT Gateway, Azure DNS, Azure Peering service, and Azure
Bastion.
Virtual network
Azure Virtual Network (VNet) is the fundamental building block for your private network
in Azure. You can use a VNet to enable Azure resources to securely communicate with
each other, with the internet, and with on-premises networks.
ExpressRoute
ExpressRoute enables you to extend your on-premises networks into the Microsoft cloud
over a private connection facilitated by a connectivity provider. This connection is private.
Traffic does not go over the internet. With ExpressRoute, you can establish connections
to Microsoft cloud services, such as Microsoft Azure, Microsoft 365, and Dynamics 365.
For more information, see What is ExpressRoute?
VPN Gateway helps you create encrypted cross-premises connections to your virtual
network from on-premises locations, or create encrypted connections between VNets.
There are different configurations available for VPN Gateway connections, such as site-to-site, point-to-site, and VNet-to-VNet.
For more information about different types of VPN connections, see VPN Gateway.
Virtual WAN
Azure Virtual WAN is a networking service that provides optimized and automated branch
connectivity to, and through, Azure. Azure regions serve as hubs that you can choose to
connect your branches to. You can leverage the Azure backbone to also connect
branches and enjoy branch-to-VNet connectivity. Azure Virtual WAN brings together
many Azure cloud connectivity services such as site-to-site VPN, ExpressRoute, point-
to-site user VPN into a single operational interface. Connectivity to Azure VNets is
established by using virtual network connections. For more information, see What is
Azure virtual WAN?
Azure DNS is a hosting service for DNS domains that provides name resolution by using
Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your
DNS records by using the same credentials, APIs, tools, and billing as your other Azure
services. For more information, see What is Azure DNS?.
Azure Bastion
The Azure Bastion service is a fully platform-managed PaaS service that you
provision inside your virtual network. It provides secure and seamless RDP/SSH
connectivity to your virtual machines directly in the Azure portal over TLS. When you
connect via Azure Bastion, your virtual machines do not need a public IP address. For
more information, see What is Azure Bastion?.
Azure Peering service enhances customer connectivity to Microsoft cloud services such
as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any
Microsoft services accessible via the public internet. For more information, see What is
Azure Peering Service?.
Azure Edge Zone is a family of offerings from Microsoft Azure that enables data
processing close to the user. You can deploy VMs, containers, and other selected Azure
services into Edge Zones to address the low latency and high throughput requirements
of applications.
Azure Orbital
Azure Orbital is a fully managed cloud-based ground station as a service that lets you
communicate with your spacecraft or satellite constellations, downlink and uplink data,
process your data in the cloud, chain services with Azure services in unique scenarios,
and generate products for your customers. This system is built on top of the Azure global
infrastructure and low-latency global fiber network.
This section describes networking services in Azure that help protect your network
resources. You can protect your applications using any or a combination of these
services: DDoS Protection, Private Link, Firewall, Web Application Firewall,
Network Security Groups, and Virtual Network Service Endpoints.
DDoS Protection
Azure DDoS Protection provides countermeasures against the most sophisticated DDoS
threats. The service provides enhanced DDoS mitigation capabilities for your application
and resources deployed in your virtual networks. Additionally, customers using Azure
DDoS Protection have access to DDoS Rapid Response support to engage DDoS
experts during an active attack.
Azure Private Link enables you to access Azure PaaS Services (for example, Azure
Storage and SQL Database) and Azure hosted customer-owned/partner services over a
private endpoint in your virtual network. Traffic between your virtual network and the
service travels the Microsoft backbone network. Exposing your service to the public
internet is no longer necessary. You can create your own private link service in your virtual
network and deliver it to your customers.
Azure Firewall is a managed, cloud-based network security service that protects your
Azure Virtual Network resources. Using Azure Firewall, you can centrally create, enforce,
and log application and network connectivity policies across subscriptions and virtual
networks. Azure Firewall uses a static public IP address for your virtual network resources
allowing outside firewalls to identify traffic originating from your virtual network.
For more information about Azure Firewall, see the Azure Firewall documentation.
Azure Web Application Firewall (WAF) provides protection to your web applications from
common web exploits and vulnerabilities such as SQL injection, and cross site scripting.
Azure WAF provides out-of-the-box protection from the OWASP Top 10 vulnerabilities via
managed rules. Additionally, customers can configure custom rules, which are
customer-managed rules that provide additional protection based on source IP range and
request attributes such as headers, cookies, form data fields, or query string parameters.
Customers can choose to deploy Azure WAF with Application Gateway which provides
regional protection to entities in public and private address space. Customers can also
choose to deploy Azure WAF with Front Door which provides protection at the network
edge to public endpoints.
You can filter network traffic to and from Azure resources in an Azure virtual network with
a network security group. For more information, see Network security groups.
Service endpoints
Virtual Network (VNet) service endpoints extend your virtual network private address
space and the identity of your VNet to the Azure services, over a direct connection.
Endpoints allow you to secure your critical Azure service resources to only your virtual
networks. Traffic from your VNet to the Azure service always remains on the Microsoft
Azure backbone network. For more information, see Virtual network service endpoints.
This section describes networking services in Azure that help deliver applications -
Content Delivery Network, Azure Front Door Service, Traffic Manager, Load Balancer,
and Application Gateway.
Azure Content Delivery Network (CDN) offers developers a global solution for rapidly
delivering high-bandwidth content to users by caching their content at strategically placed
physical nodes across the world. For more information about Azure CDN, see Azure
Content Delivery Network.
Azure Front Door Service enables you to define, manage, and monitor the global routing
for your web traffic by optimizing for best performance and instant global failover for high
availability. With Front Door, you can transform your global (multi-region) consumer and
enterprise applications into robust, high-performance personalized modern applications,
APIs, and content that reach a global audience with Azure. For more information,
see Azure Front Door.
Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute
traffic optimally to services across global Azure regions, while providing high availability
and responsiveness. Traffic Manager provides a range of traffic-routing methods to
distribute traffic such as priority, weighted, performance, geographic, multi-value, or
subnet. For more information about traffic routing methods, see Traffic Manager routing
methods.
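Priority routing, for example, returns the highest-priority endpoint that is healthy, failing over when the primary is down. A conceptual sketch of that decision rule (this is not the Azure API; endpoint names and fields are illustrative):

```python
# Conceptual sketch of DNS-based priority routing (not the Azure API).
# Each endpoint has a priority (lower value = preferred) and a health flag.
def pick_endpoint(endpoints):
    """Return the healthy endpoint with the lowest priority value, or None."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda e: e["priority"])

endpoints = [
    {"name": "eastus", "priority": 1, "healthy": False},      # primary, down
    {"name": "westeurope", "priority": 2, "healthy": True},   # failover target
]
print(pick_endpoint(endpoints)["name"])  # westeurope: traffic fails over
```

Weighted, performance, and geographic routing replace the `min`-by-priority rule with a different selection criterion, but the overall shape (filter to healthy endpoints, then select) is the same.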
The following picture shows an Internet-facing multi-tier application that utilizes both
external and internal load balancers:
Azure Application Gateway is a web traffic load balancer that enables you to manage
traffic to your web applications. It is an Application Delivery Controller (ADC) as a service,
offering various layer 7 load-balancing capabilities for your applications. For more
information, see What is Azure Application Gateway?
The following diagram shows URL path-based routing with Application Gateway.
This section describes networking services in Azure that help monitor your network
resources - Network Watcher, Azure Monitor Network Insights, Azure Monitor,
ExpressRoute Monitor, and Virtual Network TAP.
Network Watcher
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or
disable logs for resources in an Azure virtual network. For more information, see What is
Network Watcher?.
Azure Monitor for Networks provides a comprehensive view of health and metrics for all
deployed network resources, without requiring any configuration. It also provides access
to network monitoring capabilities like Connection Monitor, flow logging for network
security groups, and Traffic Analytics. For more information, see Azure Monitor Network
Insights.
Virtual Network TAP
Azure virtual network TAP (Terminal Access Point) allows you to continuously stream
your virtual machine network traffic to a network packet collector or analytics tool. The
collector or analytics tool is provided by a network virtual appliance partner.
2.8.1 Deploy and configure Azure Firewall using the Azure portal
One way you can control outbound network access from an Azure subnet is with Azure
Firewall. With Azure Firewall, you can configure:
Application rules that define fully qualified domain names (FQDNs) that can
be accessed from a subnet.
Network rules that define source address, protocol, destination port, and
destination address.
Network traffic is subjected to the configured firewall rules when you route your network
traffic to the firewall as the subnet default gateway.
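Conceptually, an application rule is an FQDN allow-list evaluated against outbound traffic, with everything else denied by default. A toy sketch of that decision (not the real rule engine; the allow-list entry mirrors the FQDN used later in this tutorial):

```python
# Toy model of an Azure Firewall application rule: outbound web traffic is
# allowed only to FQDNs on the allow list; everything else is denied.
ALLOWED_FQDNS = {"www.google.com"}  # the tutorial's single allowed FQDN

def is_allowed(fqdn: str) -> bool:
    """Return True if the destination FQDN matches an application rule."""
    return fqdn.lower() in ALLOWED_FQDNS

print(is_allowed("www.google.com"))     # True: matches the rule
print(is_allowed("www.microsoft.com"))  # False: denied by default
```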
For this tutorial, you create a simplified single VNet with two subnets for easy deployment.
For production deployments, a hub and spoke model is recommended, where the firewall
is in its own VNet. The workload servers are in peered VNets in the same region with one
or more subnets.
If you prefer, you can complete this tutorial using Azure PowerShell.
Prerequisites
If you don't have an Azure subscription, create a free account before you begin.
First, create a resource group to contain the resources needed to deploy the firewall.
Then create a VNet, subnets, and a test server.
The resource group contains all the resources for the tutorial.
Note
The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet
size, see Azure Firewall FAQ.
1. On the Azure portal menu or from the Home page, select Create a resource.
2. Select Networking > Virtual network.
3. Select Create.
4. For Subscription, select your subscription.
5. For Resource group, select Test-FW-RG.
6. For Name, type Test-FW-VN.
7. For Region, select the same location that you used previously.
8. Select Next: IP addresses.
9. For IPv4 Address space, type 10.0.0.0/16.
10. Under Subnet, select default.
11. For Subnet name type AzureFirewallSubnet. The firewall will be in this subnet,
and the subnet name must be AzureFirewallSubnet.
12. For Address range, type 10.0.1.0/26.
13. Select Save.
Now create the workload virtual machine, and place it in the Workload-SN subnet.
1. On the Azure portal menu or from the Home page, select Create a resource.
2. Select Windows Server 2016 Datacenter.
3. Enter these values for the virtual machine:
1. On the Azure portal menu or from the Home page, select Create a resource.
2. Type firewall in the search box and press Enter.
3. Select Firewall and then select Create.
Name: Test-FW01
For the Workload-SN subnet, configure the outbound default route to go through the
firewall.
1. On the Azure portal menu, select All services or search for and select All
services from any page.
2. Under Networking, select Route tables.
3. Select Add.
4. For Subscription, select your subscription.
5. For Resource group, select Test-FW-RG.
6. For Region, select the same location that you used previously.
7. For Name, type Firewall-route.
8. For Address prefix, type 0.0.0.0/0. For Next hop type, select Virtual
appliance.
9. For Next hop address, type the private IP address for the firewall that you
noted previously.
10. Select OK.
This is the network rule that allows outbound access to two IP addresses at port 53 (DNS).
This rule allows you to connect a remote desktop to the Srv-Work virtual machine through
the firewall.
For testing purposes in this tutorial, configure the server's primary and secondary DNS
addresses. This isn't a general Azure Firewall requirement.
1. On the Azure portal menu, select Resource groups or search for and
select Resource groups from any page. Select the Test-FW-RG resource
group.
2. Select the network interface for the Srv-Work virtual machine.
3. Under Settings, select DNS servers.
4. Under DNS servers, select Custom.
5. Type 209.244.0.3 in the Add DNS server text box, and 209.244.0.4 in the
next text box.
6. Select Save.
7. Restart the Srv-Work virtual machine.
2.8.10 Test the firewall
1. Connect a remote desktop to the firewall public IP address and sign in to the
Srv-Work virtual machine.
2. Open Internet Explorer and browse to https://www.google.com.
3. Select OK > Close on the Internet Explorer security alerts.
4. Browse to https://www.microsoft.com.
You can browse to the one allowed FQDN, but not to any others.
You can resolve DNS names using the configured external DNS server.
The following diagram shows the virtual network and the VPN gateway created as part of
this tutorial.
Prerequisites
An Azure account with an active subscription. If you don't have one, create one for free.
When you fill in the fields, you see a green check mark when the characters you enter in
the field are validated. Some values are autofilled, which you can replace with your own
values:
In this step, you create the virtual network gateway for your VNet. Creating a gateway
can often take 45 minutes or more, depending on the selected gateway SKU.
Name: VNet1GW
Region: East US
Gateway type: VPN
VPN type: Route-based
SKU: VpnGw1
Generation: Generation1
Virtual network: VNet1
Gateway subnet address range: 10.1.255.0/27
Public IP address: Create new
Public IP address name: VNet1GWpip
Enable active-active mode: Disabled
Configure BGP: Disabled
2. On the Virtual network gateway page, select + Add. This opens the Create
virtual network gateway page.
Instance details
o Name: Name your gateway. Naming your gateway is not the same
as naming a gateway subnet. It's the name of the gateway object
you are creating.
o Region: Select the region in which you want to create this
resource. The region for the gateway must be the same as the
virtual network.
o Gateway type: Select VPN. VPN gateways use the virtual
network gateway type VPN.
o VPN type: Select the VPN type that is specified for your
configuration. Most configurations require a Route-based VPN
type.
Public IP address
This setting specifies the public IP address object that gets associated to the
VPN gateway. The public IP address is dynamically assigned to this object
when the VPN gateway is created. The only time the Public IP address
changes is when the gateway is deleted and re-created. It doesn't change
across resizing, resetting, or other internal maintenance/upgrades of your
VPN gateway.
A gateway can take up to 45 minutes to fully create and deploy. You can see the
deployment status on the Overview page for your gateway. After the gateway is created,
you can view the IP address that has been assigned to it by looking at the virtual network
in the portal. The gateway appears as a connected device.
When working with gateway subnets, avoid associating a network security group (NSG)
with the gateway subnet. Associating a network security group with this subnet may cause
your virtual network gateway (VPN or ExpressRoute gateway) to stop functioning as
expected.
You can view the gateway public IP address on the Overview page for your gateway.
To see additional information about the public IP address object, click the name/IP
address link next to Public IP address.
There are specific rules regarding resizing vs. changing a gateway SKU. In this section,
we will resize the SKU. For more information, see Gateway settings - resizing and
changing SKUs.
1. In the portal, navigate to the virtual network gateway that you want to reset.
2. On the page for the virtual network gateway, select Reset.
If you're not going to continue to use this application or go to the next tutorial, delete these
resources using the following steps:
1. Enter the name of your resource group in the Search box at the top of the
portal and select it from the search results.
2. Select Delete resource group.
3. Enter your resource group for TYPE THE RESOURCE GROUP NAME and
select Delete.
13. Select Create a resource in the upper left-hand corner of the portal.
14. In the search box, enter Virtual Network. Select Virtual Network in the
search results.
15. In the Virtual Network page, select Create.
16. In Create virtual network, enter or select this information in the Basics tab:
17. Select the IP Addresses tab, or select the Next: IP Addresses button at the
bottom of the page.
18. In IPv4 address space, select the existing address space and change it
to 10.1.0.0/16.
19. Select + Add subnet, then enter MySubnet for Subnet
name and 10.1.0.0/24 for Subnet address range.
23. Select the Review + create tab or select the Review + create button.
24. Select Create.
Subnet Number Last Octet Binary Value Last Octet Decimal Value
0 00000000 .0
1 00100000 .32
2 01000000 .64
3 01100000 .96
4 10000000 .128
5 10100000 .160
6 11000000 .192
7 11100000 .224
The place values of the eight bits in an octet are:

Bit position:   7    6    5    4    3    2    1    0
Place value:  128   64   32   16    8    4    2    1

The largest value the 7 host bits can hold is 64 + 32 + 16 + 8 + 4 + 2 + 1 = 127,
but the all-zeros and all-ones host addresses are reserved. So with 7 bits used
for the host portion:

2^7 - 2 = 128 - 2 = 126 usable hosts per subnet
Because we have 1 bit remaining in the original host portion, we borrow that bit to satisfy
the requirement to “create as many subnets as possible.” To determine how many subnets
we can create, use the following formula:

2^b = number of subnets, where b is the number of borrowed bits

With b = 1, we get 2^1 = 2 subnets:
Subnet Number Last Octet Binary Value Last Octet Decimal Value
0 00000000 .0
1 10000000 .128
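The subnetting arithmetic can be checked in a few lines of Python: with b borrowed bits you get 2^b subnets, and with h remaining host bits, 2^h - 2 usable hosts per subnet.

```python
# Subnetting arithmetic for the worked example: 1 borrowed bit, 7 host bits.
borrowed_bits = 1
host_bits = 7

subnets = 2 ** borrowed_bits            # 2^b subnets
hosts_per_subnet = 2 ** host_bits - 2   # all-zeros and all-ones are reserved

print(subnets)           # 2
print(hosts_per_subnet)  # 126

# The last-octet value of each subnet boundary when borrowing 1 bit:
print([i * 2 ** host_bits for i in range(subnets)])  # [0, 128]
```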
11. Select the Review + create tab or select the Review + create button.
12. Select Create.
3. Select the Networking tab, or select Next: Disks, then Next: Networking.
4. In the Networking tab, select or enter:
5. Select the Review + create tab, or select the blue Review + create button at
the bottom of the page.
5. Select the Review + create tab, or select the blue Review + create button at
the bottom of the page.
6. Review the settings, and then select Create.
1. Go to the Azure portal to view your virtual networks. Search for and
select Virtual networks.
2. Select the name of the virtual network you want to add a subnet to.
3. From Settings, select Subnets > Subnet.
4. In the Add subnet dialog box, enter values for the following settings:
Address range: The range must be unique within the address space for the virtual
network. The range can't overlap with other subnet address ranges within the
virtual network. The address space must be specified by using Classless
Inter-Domain Routing (CIDR) notation.
Route table: To control network traffic routing to other networks, you may
optionally associate an existing route table to a subnet. The route table must
exist in the same subscription and location as the virtual network. Learn more
about Azure routing and how to create a route table.
Service endpoints: A subnet may optionally have one or more service endpoints
enabled for it.
5. To add the subnet to the virtual network that you selected, select OK.
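The address-range rules above (the subnet must fit inside the virtual network's address space and must not overlap existing subnets) can be checked with Python's standard ipaddress module; the addresses here are illustrative:

```python
# Check a candidate subnet against a VNet's address space using the
# stdlib ipaddress module. Addresses are illustrative, not from Azure.
import ipaddress

vnet = ipaddress.ip_network("10.1.0.0/16")
existing = [ipaddress.ip_network("10.1.0.0/24")]   # already-allocated subnet
candidate = ipaddress.ip_network("10.1.1.0/24")    # proposed new subnet

inside_vnet = candidate.subnet_of(vnet)                   # must be True
overlaps = any(candidate.overlaps(s) for s in existing)   # must be False
print(inside_vnet, overlaps)  # True False
```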
T1 adds 20% interest to Bob's savings account and T2 adds $20 to Bob's account.
4. Durability
Changes that have been committed to the database should remain even in the case of
software and hardware failure. For instance, if Bob’s account contains $120, this
information should not disappear upon hardware or software failure.
In this quickstart, you create a single database in Azure SQL Database using either the
Azure portal, a PowerShell script, or an Azure CLI script. You then query the database
using query editor in the Azure portal.
This quickstart creates a single database in the serverless compute tier. To create a
single database in the Azure portal, this quickstart starts at the Azure SQL page.
3. On the Basics tab of the Create SQL Database form, under Project details,
select the desired Azure Subscription.
4. For Resource group, select Create new, enter myResourceGroup, and
select OK.
5. For Database name enter mySampleDatabase.
6. For Server, select Create new, and fill out the New server form with the
following values:
o Server name: Enter mysqlserver, and add some characters for
uniqueness; server names must be globally unique, so we can't
provide an exact server name to use.
Select OK.
Once your database is created, you can use the Query editor (preview) in the Azure portal
to connect to the database and query data.
1. In the portal, search for and select SQL databases, and then select your
database from the list.
2. On the page for your database, select Query editor (preview) in the left menu.
3. Enter your server admin login information, and select OK.
SQL
SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
FROM SalesLT.ProductCategory pc
JOIN SalesLT.Product p
ON pc.productcategoryid = p.productcategoryid;
5. Select Run, and then review the query results in the Results pane.
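The JOIN in the query above can be reproduced in miniature. This sketch uses an in-memory SQLite database with made-up rows (the Azure sample uses AdventureWorksLT data), and omits the T-SQL `TOP 20` clause since SQLite uses `LIMIT` instead:

```python
# Reproduce the category-to-product JOIN shape with SQLite (sample rows
# are invented; the Azure quickstart queries AdventureWorksLT tables).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ProductCategory (productcategoryid INTEGER, Name TEXT)")
con.execute("CREATE TABLE Product (productid INTEGER, name TEXT, productcategoryid INTEGER)")
con.execute("INSERT INTO ProductCategory VALUES (1, 'Bikes')")
con.execute("INSERT INTO Product VALUES (10, 'Road-150', 1)")

rows = con.execute("""
    SELECT pc.Name AS CategoryName, p.name AS ProductName
    FROM ProductCategory pc
    JOIN Product p ON pc.productcategoryid = p.productcategoryid
""").fetchall()
print(rows)  # [('Bikes', 'Road-150')]
```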
2.14.1 Design an Azure Database for MySQL database using the Azure
portal
Azure Database for MySQL is a managed service that enables you to run, manage, and
scale highly available MySQL databases in the cloud. Using the Azure portal, you can
easily manage your server and design a database.
In this tutorial, you use the Azure portal to learn how to:
If you don't have an Azure subscription, create a free Azure account before you begin.
Open your favorite web browser, and visit the Microsoft Azure portal. Enter your
credentials to sign in to the portal. The default view is your service dashboard.
An Azure Database for MySQL server is created with a defined set of compute and
storage resources. The server is created within an Azure resource group.
1. Select the Create a resource button (+) in the upper left corner of the portal.
2. Select Databases > Azure Database for MySQL. If you cannot find MySQL
Server under the Databases category, click See all to show all available
database services. You can also type Azure Database for MySQL in the
search box to quickly find the service.
Server name: Choose a unique name that identifies your Azure Database for
MySQL server, for example mydemoserver.
Subscription: Select the Azure subscription that you want to use for your
server. If you have multiple subscriptions, choose the subscription in which
you get billed for the resource.
Select source: Select Blank to create a new server from scratch. (You select
Backup if you are creating a server from a geo-backup of an existing Azure
Database for MySQL server.)
Password: Provide a new password for the server admin account. It must contain
from 8 to 128 characters. Your password must contain characters from three of
the following categories: English uppercase letters, English lowercase letters,
numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
Location: Choose the location that is closest to your users or your other
Azure applications.
Version: Choose the latest version, unless you have specific requirements that
require another version.
Pricing tier: The compute, storage, and backup configurations for your new
server. Select Pricing tier, then select the General Purpose tab. Gen 5, 2
vCores, 5 GB, and 7 days are the default values for Compute Generation, vCores,
Storage, and Backup Retention Period.
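The password policy above (8 to 128 characters, with characters from at least three of the four categories) can be sketched as a small validator; this is an illustration of the stated rules, not the exact server-side check:

```python
# Sketch of the admin password policy: 8-128 characters, drawing from at
# least three of four categories (upper, lower, digits, non-alphanumeric).
def meets_policy(pw: str) -> bool:
    if not 8 <= len(pw) <= 128:
        return False
    categories = [
        any(c.isupper() for c in pw),
        any(c.islower() for c in pw),
        any(c.isdigit() for c in pw),
        any(not c.isalnum() for c in pw),
    ]
    return sum(categories) >= 3

print(meets_policy("MySQLAzure2017"))  # True: upper, lower, and digits
print(meets_policy("password"))        # False: lowercase only
```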
Tip
With auto-growth enabled your server increases storage when you are
approaching the allocated limit, without impacting your workload.
4. Click Review + create. You can click on the Notifications button on the toolbar
to monitor the deployment process. Deployment can take up to 20 minutes.
Azure Database for MySQL servers are protected by a firewall. By default, all connections to the
server and the databases inside the server are rejected. Before connecting to Azure
Database for MySQL for the first time, configure the firewall to add the client machine's
public network IP address (or IP address range).
1. Click your newly created server, and then click Connection security.
Tip
Azure Database for MySQL server communicates over port 3306. If you are trying to
connect from within a corporate network, outbound traffic over port 3306 may not be
allowed by your network's firewall. If so, you cannot connect to Azure MySQL server
unless your IT department opens port 3306.
Get the fully qualified Server name and Server admin login name for your Azure Database
for MySQL server from the Azure portal. You use the fully qualified server name to
connect to your server using mysql command-line tool.
1. In Azure portal, click All resources from the left-hand menu, type the name,
and search for your Azure Database for MySQL server. Select the server
name to view the details.
2. From the Overview page, note down Server Name and Server admin login
name. You may click the copy button next to each field to copy to the
clipboard.
Use mysql command-line tool to establish a connection to your Azure Database for
MySQL server. You can run the mysql command-line tool from the Azure Cloud Shell in
the browser or from your own machine using mysql tools installed locally. To launch the
Azure Cloud Shell, click the Try It button on a code block in this article, or visit the Azure
portal and click the >_ icon in the top right toolbar.
Azure CLI
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
2.14.7 Create a blank database
Once you're connected to the server, create a blank database to work with.
CREATE DATABASE mysampledb;
USE mysampledb;
2.14.8 Create tables in the database
Now that you know how to connect to the Azure Database for MySQL database, you can
complete some basic tasks:
First, create a table and load it with some data. Let's create a table that stores inventory
information.
Now that you have a table, insert some data into it. At the open command prompt window,
run the following query to insert some rows of data.
INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
Now you have two rows of sample data in the table you created earlier.
Execute the following query to retrieve information from the database table.
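The tutorial doesn't show the CREATE TABLE statement or the retrieval query. One plausible shape for the inventory table (column types are assumed, not from the source), exercised here with SQLite rather than Azure Database for MySQL so it runs standalone:

```python
# A plausible schema for the tutorial's inventory table (types assumed),
# run against SQLite instead of MySQL so the example is self-contained.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventory (id INTEGER PRIMARY KEY, name TEXT, quantity INTEGER)")
con.execute("INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150)")
con.execute("INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154)")

rows = con.execute("SELECT id, name, quantity FROM inventory ORDER BY id").fetchall()
print(rows)  # [(1, 'banana', 150), (2, 'orange', 154)]
```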
Imagine you have accidentally deleted an important database table, and cannot recover
the data easily. Azure Database for MySQL allows you to restore the server to a point in
time, creating a copy of the databases into a new server. You can use this new server to
recover your deleted data.
If you don't expect to need these resources in the future, you can delete them by deleting
the resource group or just delete the MySQL server. To delete the resource group, follow
these steps:
Elastic database jobs (preview): For information, see Create, configure, and
manage elastic jobs.
Query editor in the Azure portal: For information, see Use the Azure portal's
SQL query editor to connect and query data.
New features
SQL Managed Instance H2 2019 updates
Service-aided subnet configuration is a secure and convenient way to manage
subnet configuration where you control data traffic while SQL Managed Instance
ensures the uninterrupted flow of management traffic.
Transparent Data Encryption (TDE) with Bring Your Own Key (BYOK) enables a
bring-your-own-key (BYOK) scenario for data protection at rest and allows
organizations to separate management duties for keys and data.
Azure App Service provides a highly scalable, self-patching web hosting service using
the Windows operating system. This tutorial shows how to create a PHP app in Azure
and connect it to a MySQL database. When you're finished, you'll have a Laravel app
running on Azure App Service on Windows.
If you don't have an Azure subscription, create a free account before you begin.
Prerequisites
Install Git
Install PHP 5.6.4 or above
Install Composer
Enable the following PHP extensions Laravel needs: OpenSSL, PDO-
MySQL, Mbstring, Tokenizer, XML
Install and start MySQL
If you prefer, install the Azure CLI to run CLI reference commands.
o If you're using a local installation, sign in to the Azure CLI by using
the az login command. To finish the authentication process, follow
the steps displayed in your terminal. For additional sign-in options,
see Sign in with the Azure CLI.
o When you're prompted, install Azure CLI extensions on first use.
For more information about extensions, see Use extensions with
the Azure CLI.
o Run az version to find the version and dependent libraries that are
installed. To upgrade to the latest version, run az upgrade.
In this step, you create a database in your local MySQL server for use in this tutorial.
In a terminal window, connect to your local MySQL server. You can use this terminal
window to run all the commands in this tutorial.
Bash
mysql -u root -p
If you're prompted for a password, enter the password for the root account. If you don't
remember your root account password, see MySQL: How to Reset the Root Password.
If your command runs successfully, then your MySQL server is running. If not, make sure
that your local MySQL server is started by following the MySQL post-installation steps.
SQL
CREATE DATABASE sampledb;
SQL
quit
2.16.3 Create a PHP app locally
In this step, you get a Laravel sample application, configure its database connection, and
run it locally.
Bash
git clone https://github.com/Azure-Samples/laravel-tasks
Bash
cd laravel-tasks
In the repository root, create a file named .env. Copy the following variables into
the .env file. Replace the <root_password> placeholder with the MySQL root user's
password.
txt
APP_ENV=local
APP_DEBUG=true
APP_KEY=
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_DATABASE=sampledb
DB_USERNAME=root
DB_PASSWORD=<root_password>
For information on how Laravel uses the .env file, see Laravel Environment
Configuration.
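The .env file is a list of KEY=VALUE pairs. A minimal parser sketch shows the idea (Laravel's real dotenv loader also handles quoting, comments, and variable expansion):

```python
# Minimal sketch of KEY=VALUE .env parsing (the real dotenv library
# additionally handles quoting, comments, and variable expansion).
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """APP_ENV=local
APP_DEBUG=true
DB_HOST=127.0.0.1"""
env = parse_env(sample)
print(env["DB_HOST"])  # 127.0.0.1
```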
Run Laravel database migrations to create the tables the application needs. To see which
tables are created in the migrations, look in the database/migrations directory in the Git
repository.
Bash
php artisan migrate
Bash
php artisan key:generate
Bash
php artisan serve
In this step, you create a MySQL database in Azure Database for MySQL. Later, you
configure the PHP application to connect to this database.
A resource group is a logical container into which Azure resources, such as web apps,
databases, and storage accounts, are deployed and managed. For example, you can
choose to delete the entire resource group in one simple step later.
Azure CLI
az group create --name myResourceGroup --location "West Europe"
You generally create your resource group and the resources in a region near you.
When the command finishes, a JSON output shows you the resource group properties.
In the Cloud Shell, create a server in Azure Database for MySQL with the az mysql server
create command.
In the following command, substitute a unique server name for the <mysql-server-
name> placeholder, a user name for the <admin-user>, and a password for the <admin-
password> placeholder. The server name is used as part of your MySQL endpoint
(https://<mysql-server-name>.mysql.database.azure.com), so the name needs to be unique across
all servers in Azure. For details on selecting MySQL DB SKU, see Create an Azure
Database for MySQL server.
Azure CLI
az mysql server create --resource-group myResourceGroup --name <mysql-server-name> --location "West
Europe" --admin-user <admin-user> --admin-password <admin-password> --sku-name B_Gen5_1
When the MySQL server is created, the Azure CLI shows information similar to the
following example:
{
"administratorLogin": "<admin-user>",
"administratorLoginPassword": null,
"fullyQualifiedDomainName": "<mysql-server-name>.mysql.database.azure.com",
"id": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/servers/<mysql-
server-name>",
"location": "westeurope",
"name": "<mysql-server-name>",
"resourceGroup": "myResourceGroup",
...
- < Output has been truncated for readability >
}
In the Cloud Shell, create a firewall rule for your MySQL server to allow client connections
by using the az mysql server firewall-rule create command. When both starting IP and end IP
are set to 0.0.0.0, the firewall is only opened for other Azure resources.
Azure CLI
az mysql server firewall-rule create --name allAzureIPs --server <mysql-server-name> --resource-group
myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
Tip
You can be even more restrictive in your firewall rule by using only the outbound IP
addresses your app uses.
In the Cloud Shell, run the command again to allow access from your local computer by
replacing <your-ip-address> with your local IPv4 IP address.
Azure CLI
az mysql server firewall-rule create --name AllowLocalClient --server <mysql-server-name> --resource-
group myResourceGroup --start-ip-address=<your-ip-address> --end-ip-address=<your-ip-address>
In the local terminal window, connect to the MySQL server in Azure. Use the value you
specified previously for <admin-user> and <mysql-server-name>. When prompted for a
password, use the password you specified when you created the database in Azure.
Bash
mysql -u <admin-user>@<mysql-server-name> -h <mysql-server-name>.mysql.database.azure.com -P
3306 -p
Create a production database
SQL
CREATE DATABASE sampledb;
Create a user with permissions
SQL
quit
2.16.7 Connect app to Azure MySQL
In this step, you connect the PHP application to the MySQL database you created in
Azure Database for MySQL.
In the repository root, create an .env.production file and copy the following variables into
it. Replace the <mysql-server-name> placeholder in
both DB_HOST and DB_USERNAME.
txt
APP_ENV=production
APP_DEBUG=true
APP_KEY=
DB_CONNECTION=mysql
DB_HOST=<mysql-server-name>.mysql.database.azure.com
DB_DATABASE=sampledb
DB_USERNAME=phpappuser@<mysql-server-name>
DB_PASSWORD=MySQLAzure2017
MYSQL_SSL=true
Tip
To secure your MySQL connection information, this file is already excluded from the Git
repository (See .gitignore in the repository root). Later, you learn how to configure
environment variables in App Service to connect to your database in Azure Database for
MySQL. With environment variables, you don't need the .env file in App Service.
By default, Azure Database for MySQL enforces TLS connections from clients. To
connect to your MySQL database in Azure, you must use the .pem certificate supplied by
Azure Database for MySQL. In config/database.php, point the mysql connection options at
that certificate:
PHP
'mysql' => [
...
'sslmode' => env('DB_SSLMODE', 'prefer'),
'options' => (env('MYSQL_SSL')) ? [
PDO::MYSQL_ATTR_SSL_CA => '/ssl/BaltimoreCyberTrustRoot.crt.pem',
] : []
],
Run Laravel database migrations with .env.production as the environment file to create
the tables in your MySQL database in Azure Database for MySQL. Remember
that .env.production has the connection information to your MySQL database in Azure.
Bash
php artisan migrate --env=production --force
.env.production doesn't have a valid application key yet. Generate a new one for it in the
terminal.
Bash
php artisan key:generate --env=production --force
Bash
php artisan serve --env=production
Navigate to http://localhost:8000. If the page loads without errors, the PHP application is
connecting to the MySQL database in Azure.
Bash
git add .
git commit -m "database.php updates"
In this step, you deploy the MySQL-connected PHP application to Azure App Service.
FTP and local Git can deploy to an Azure web app by using a deployment user. Once you
configure your deployment user, you can use it for all your Azure deployments. Your
account-level deployment username and password are different from your Azure
subscription credentials.
To configure the deployment user, run the az webapp deployment user set command in
Azure Cloud Shell. Replace <username> and <password> with a deployment user
username and password.
The username must be unique within Azure, and for local Git pushes, must
not contain the ‘@’ symbol.
The password must be at least eight characters long, with two of the following
three elements: letters, numbers, and symbols.
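The password rules above are easy to check locally before calling the CLI; a minimal sketch (this helper is hypothetical, not part of the Azure CLI):

```python
def is_valid_deployment_password(password: str) -> bool:
    """Check the App Service deployment-user password rules:
    at least eight characters, drawn from at least two of the
    three categories: letters, numbers, and symbols."""
    if len(password) < 8:
        return False
    categories = [
        any(c.isalpha() for c in password),      # letters
        any(c.isdigit() for c in password),      # numbers
        any(not c.isalnum() for c in password),  # symbols
    ]
    return sum(categories) >= 2

print(is_valid_deployment_password("Passw0rd"))  # True: letters + numbers
print(is_valid_deployment_password("abc"))       # False: too short
```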
az webapp deployment user set --user-name <username> --password <password>
The JSON output shows the password as null. If you get a 'Conflict'. Details: 409 error,
change the username. If you get a 'Bad Request'. Details: 400 error, use a stronger password.
Record your username and password to use to deploy your web apps.
In the Cloud Shell, create an App Service plan with the az appservice plan create command.
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku FREE
When the App Service plan has been created, the Azure CLI shows information similar
to the following example:
{
"adminSiteName": null,
"appServicePlanName": "myAppServicePlan",
"geoRegion": "West Europe",
In the Cloud Shell, create a web app with the az webapp create command. In the following example,
replace <app-name> with a globally unique app name (valid characters are a-z, 0-9, and -).
The runtime is set to PHP|7.2. To see all supported runtimes, run az webapp list-runtimes --linux.
# Bash
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --
runtime "PHP|7.2" --deployment-local-git
# PowerShell
az --% webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name>
--runtime "PHP|7.2" --deployment-local-git
When the web app has been created, the Azure CLI shows output similar to the following
example:
You’ve created an empty new web app, with git deployment enabled.
The URL of the Git remote is shown in the deploymentLocalGitUrl property, with the
format https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git. Save this URL as
you need it later.
In App Service, you set environment variables as app settings by using the az webapp config
appsettings set command.
az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings
DB_HOST="<mysql-server-name>.mysql.database.azure.com" DB_DATABASE="sampledb"
DB_USERNAME="phpappuser@<mysql-server-name>" DB_PASSWORD="MySQLAzure2017"
MYSQL_SSL="true"
You can use the PHP getenv method to access the settings. The Laravel code uses
an env wrapper over the PHP getenv. For example, the MySQL configuration
in config/database.php looks like the following code:
PHP
'mysql' => [
'driver' => 'mysql',
'host' => env('DB_HOST', 'localhost'),
'database' => env('DB_DATABASE', 'forge'),
'username' => env('DB_USERNAME', 'forge'),
'password' => env('DB_PASSWORD', ''),
...
],
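Laravel's env() helper is just a thin wrapper that returns a default when the variable is unset; the same pattern, sketched in Python for illustration (the setting names mirror the tutorial's):

```python
import os

def env(name: str, default=None):
    """Read a setting from the environment with a fallback default,
    mirroring how Laravel's env() wraps PHP's getenv()."""
    value = os.getenv(name)
    return default if value is None else value

# App settings configured in App Service surface as environment variables.
os.environ["DB_HOST"] = "<mysql-server-name>.mysql.database.azure.com"

config = {
    "host": env("DB_HOST", "localhost"),       # set above, so the env value wins
    "database": env("DB_DATABASE", "forge"),   # unset, so the default applies
}
print(config["host"])
print(config["database"])
```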
Configure Laravel environment variables
Laravel needs an application key in App Service. You can configure it with app settings.
In the local terminal window, use php artisan to generate a new application key without
saving it to .env.
Bash
php artisan key:generate --show
az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings
APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
APP_DEBUG="true" tells Laravel to return debugging information when the deployed app
encounters errors. When running a production application, set it to false, which is more
secure.
Set the virtual application path for the app. This step is required because the Laravel
application lifecycle begins in the public directory instead of the application's root
directory. Other PHP frameworks whose lifecycle start in the root directory can work
without manual configuration of the virtual application path.
In the Cloud Shell, set the virtual application path by using the az resource update command.
Replace the <app-name> placeholder.
az resource update --name web --resource-group myResourceGroup --namespace Microsoft.Web --
resource-type config --parent sites/<app_name> --set
properties.virtualApplications[0].physicalPath="site\wwwroot\public" --api-version 2015-06-01
By default, Azure App Service points the root virtual application path (/) to the root
directory of the deployed application files (sites\wwwroot).
Back in the local terminal window, add an Azure remote to your local Git repository.
Replace <deploymentLocalGitUrl-from-create-step> with the URL of the Git remote that
you saved from Create a web app.
Bash
git remote add azure <deploymentLocalGitUrl-from-create-step>
Push to the Azure remote to deploy your app with the following command. When Git
Credential Manager prompts you for credentials, make sure you enter the credentials you
created in Configure a deployment user, not the credentials you use to sign in to the Azure
portal.
Bash
git push azure main
This command may take a few minutes to run. While running, it displays information
similar to the following example:
You may notice that the deployment process installs Composer packages at the end.
App Service does not run these automations during default deployment, so this sample
repository has three additional files in its root directory to enable it:
.deployment - This file tells App Service to run bash deploy.sh as the custom
deployment script.
deploy.sh - The custom deployment script. If you review the file, you will see
that it runs php composer.phar install after npm install.
composer.phar - The Composer package manager.
You can use this approach to add any step to your Git-based deployment to App Service.
For more information, see Custom Deployment Script.
In this step, you make a simple change to the task data model and the webapp, and then
publish the update to Azure.
For the tasks scenario, you modify the application so that you can mark a task as
complete.
Add a column
In the local terminal window, navigate to the root of the Git repository.
Bash
php artisan make:migration add_complete_column --table=tasks
This command shows you the name of the migration file that's generated. Find this file
in database/migrations and open it.
PHP
public function up()
{
Schema::table('tasks', function (Blueprint $table) {
$table->boolean('complete')->default(false);
});
}
The preceding code adds a boolean column in the tasks table called complete.
Replace the down method with the following code for the rollback action:
PHP
public function down()
{
Schema::table('tasks', function (Blueprint $table) {
$table->dropColumn('complete');
});
}
In the local terminal window, run Laravel database migrations to make the change in the
local database.
Bash
php artisan migrate
Based on the Laravel naming convention, the model Task (see app/Task.php) maps to
the tasks table by default.
Open the routes/web.php file. The application defines its routes and business logic here.
At the end of the file, add a route with the following code:
Route::post('/task/{id}', function ($id) {
$task = App\Task::findOrFail($id);
$task->complete = !$task->complete;
$task->save();
return redirect('/');
});
The preceding code makes a simple update to the data model by toggling the value
of complete.
Open the resources/views/tasks.blade.php file. Find the <tr> opening tag and replace it
with:
HTML
<tr class="{{ $task->complete ? 'success' : 'active' }}" >
The preceding code changes the row color depending on whether the task is complete.
HTML
<td class="table-text"><div>{{ $task->name }}</div></td>
HTML
<td>
<form action="{{ url('task/'.$task->id) }}" method="POST">
{{ csrf_field() }}
In the local terminal window, run the development server from the root directory of the Git
repository.
Bash
php artisan serve
To see the task status change, navigate to http://localhost:8000 and select the checkbox.
In the local terminal window, run Laravel database migrations with the production
connection string to make the change in the Azure database.
Bash
php artisan migrate --env=production --force
Commit all the changes in Git, and then push the code changes to Azure.
Bash
git add .
git commit -m "added complete checkbox"
git push azure main
Once the git push is complete, navigate to the Azure app and test the new functionality.
While the PHP application runs in Azure App Service, you can get the console logs piped
to your terminal. That way, you can get the same diagnostic messages to help you debug
application errors.
To start log streaming, use the az webapp log tail command in the Cloud Shell.
az webapp log tail --name <app_name> --resource-group myResourceGroup
In the Azure portal, from the left menu, click App Services, and then click the name of your Azure app.
You see your app's Overview page. Here, you can perform basic management tasks like
stop, start, restart, browse, and delete.
In the preceding steps, you created Azure resources in a resource group. If you don't
expect to need these resources in the future, delete the resource group by running the
following command in the Cloud Shell:
az group delete --name myResourceGroup
This command may take a minute to run.
These SQL commands are mainly categorized into four categories: DDL (Data Definition
Language), DML (Data Manipulation Language), DCL (Data Control Language), and DQL
(Data Query Language). Many resources also treat a fifth category, TCL (Transaction
Control Language), separately, so we will look at TCL in detail as well.
CREATE – is used to create the database or its objects (like tables, indexes, functions, views,
stored procedures, and triggers).
TRUNCATE – is used to remove all records from a table; the space allocated for the
records is also deallocated.
CREATE DATABASE
A Database is defined as a structured set of data. So, in SQL the very first step to store
the data in a well-structured manner is to create a database. The CREATE DATABASE
statement is used to create a new database in SQL.
Example Query:
CREATE DATABASE my_database;
This query will create a new database in SQL and name the database as my_database.
CREATE TABLE
We have learned above about creating databases. Now, to store the data, we need a
table. The CREATE TABLE statement is used to create a table in SQL. We know
that a table comprises rows and columns. So while creating tables we have to provide
all the information to SQL about the names of the columns, the type of data to be stored
in the columns, the size of the data, etc. Let us now dive into the details of how to use the
CREATE TABLE statement to create tables in SQL.
Syntax:
CREATE TABLE table_name (
column1 data_type(size),
column2 data_type(size),
column3 data_type(size),
....
);
Example Query:
This query will create a table named Students with three columns, ROLL_NO, NAME and
SUBJECT.
CREATE TABLE Students (
ROLL_NO int(3),
NAME varchar(20),
SUBJECT varchar(20)
);
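The statement above can be tried out locally; a minimal sketch using Python's built-in SQLite driver (SQLite ignores display sizes like int(3), so the types behave as plain INTEGER and TEXT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("""
    CREATE TABLE Students (
        ROLL_NO INTEGER,
        NAME    VARCHAR(20),
        SUBJECT VARCHAR(20)
    )
""")
conn.execute("INSERT INTO Students VALUES (1, 'RAM', 'DSA')")

# The table now exists with the three declared columns.
cols = [row[1] for row in conn.execute("PRAGMA table_info(Students)")]
print(cols)  # ['ROLL_NO', 'NAME', 'SUBJECT']
```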
DROP
DROP is used to delete a whole database or just a table. The DROP statement destroys
objects like an existing database, table, index, or view.
Syntax:
DROP DATABASE database_name;
DROP TABLE table_name;
Examples:
DROP DATABASE my_database;
DROP TABLE Students;
ALTER TABLE is used to add, delete/drop or modify columns in the existing table. It is
also used to add and drop various constraints on the existing table.
Syntax (to add one or more columns):
ALTER TABLE table_name ADD (
Columnname_1 datatype,
Columnname_2 datatype,
Columnname_n datatype);
Syntax (to drop a column):
ALTER TABLE table_name DROP COLUMN column_name;
ALTER TABLE-MODIFY
It is used to modify the existing columns in a table. Multiple columns can also be modified
at once.
Syntax (Oracle, MySQL, MariaDB):
ALTER TABLE table_name MODIFY column_name column_type;
Syntax (SQL Server):
ALTER TABLE table_name ALTER COLUMN column_name column_type;
QUERY:
ALTER TABLE Students MODIFY NAME varchar(30);
TRUNCATE
TRUNCATE statement is a Data Definition Language (DDL) operation that is used to
mark the extents of a table for deallocation (empty for reuse). The result of this operation
quickly removes all data from a table, typically bypassing a number of integrity-enforcing
mechanisms. It was officially introduced in the SQL:2008 standard.
Syntax:
TRUNCATE TABLE table_name;
SQL | Comments
As in any programming language, comments matter a lot in SQL as well. In this set we will
learn about writing comments in any SQL snippet.
In line comments
Single line comments: Comments starting and ending in a single line are considered as
single line comments.
Syntax:
-- single line comment
-- another comment
Multi line comments: Comments starting in one line and ending in a different line are
considered multi line comments. The line starting with ‘/*’ is considered the starting point
of the comment, and the comment is terminated when ‘*/’ is encountered.
Syntax:
/* multi line comment
another comment */
In line comments: In line comments are an extension of multi line comments; comments
can be stated in between the statements and are enclosed between ‘/*’ and ‘*/’.
Syntax:
SELECT * FROM Students /* in line comment */;
Tables can be given a new name with the use of ALTER TABLE:
ALTER TABLE table_name
RENAME TO new_table_name;
Columns can also be given a new name with the use of ALTER TABLE.
Syntax (MySQL, Oracle):
ALTER TABLE table_name RENAME COLUMN old_name TO new_name;
Syntax (MariaDB):
ALTER TABLE table_name CHANGE COLUMN old_name new_name column_definition;
QUERY:
ALTER TABLE Students RENAME COLUMN NAME TO FIRST_NAME;
The SQL commands that deal with the manipulation of data present in the database
belong to DML or Data Manipulation Language, and this includes most of the SQL
statements.
Examples of DML:
Only values: The first method is to specify only the values of the data to be inserted, without
the column names.
INSERT INTO table_name VALUES (value1, value2, ...);
value1, value2,.. : value of first column, second column,… for the new record
Column names and values both: In the second method we will specify both the columns
which we want to fill and their corresponding values, as shown below:
INSERT INTO table_name (column1, column2, ...) VALUES (value1, value2, ...);
value1, value2, value3 : value of first column, second column,… for the new record
Example:
Input :
(2,"GAURI RAO",18,12,"BANGALORE"),
(4,"RIYA KAPOOR",10,5,"UDAIPUR");
The UPDATE statement in SQL is used to update the data of an existing table in
database. We can update single columns as well as multiple columns using UPDATE
statement as per our requirement.
Basic Syntax:
UPDATE table_name SET column1 = value1, column2 = value2, ...
WHERE condition;
Updating single column: Update the column NAME and set the value to ‘PRATIK’ in all
the rows where Age is 20.
Updating multiple columns: Update the columns NAME to ‘PRATIK’ and ADDRESS to
‘SIKKIM’ where ROLL_NO is 1.
Omitting WHERE clause: If we omit the WHERE clause from the update query then all of
the rows will get updated.
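The effect of omitting the WHERE clause is easy to demonstrate with SQLite from Python (the sample rows below are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Student (ROLL_NO INT, NAME TEXT, Age INT)")
conn.executemany("INSERT INTO Student VALUES (?, ?, ?)",
                 [(1, "RAM", 20), (2, "SUJIT", 18), (3, "SUMIT", 20)])

# With a WHERE clause, only the matching rows change.
conn.execute("UPDATE Student SET NAME = 'PRATIK' WHERE Age = 20")
print(conn.execute(
    "SELECT COUNT(*) FROM Student WHERE NAME = 'PRATIK'").fetchone()[0])  # 2

# Without a WHERE clause, every row changes.
conn.execute("UPDATE Student SET NAME = 'PRATIK'")
print(conn.execute(
    "SELECT COUNT(*) FROM Student WHERE NAME = 'PRATIK'").fetchone()[0])  # 3
```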
Basic Syntax:
DELETE FROM table_name WHERE condition;
Deleting a single record: Delete the row where NAME = ‘Ram’. This will delete only the
first row.
Deleting multiple records: Delete the rows from the table Student where Age is 20. This
will delete 2 rows (the third row and the fifth row).
Delete all of the records: There are two queries to do this, as shown below:
DELETE FROM Student;
DELETE * FROM Student;
A database in Azure SQL Database is created with a defined set of compute and storage
resources. The database is created within an Azure resource group and is managed
using a logical SQL server.
1. On the Azure portal menu or from the Home page, select Create a resource.
2. On the New page, select Databases in the Azure Marketplace section, and
then click SQL Database in the Featured section.
Resource group | yourResourceGroup | For valid resource group names, see Naming rules and restrictions.
4. Click Server to use an existing server or create and configure a new server.
Either select an existing server or click Create a new server and fill out
the New server form with the following information:
Table 2
Setting | Suggested value | Description
Server name | Any globally unique name | For valid server names, see Naming rules and restrictions.
Server admin login | Any valid name | For valid login names, see Database identifiers.
Password | Any valid password | Your password must have at least eight characters and must use characters from three of the following categories: upper case characters, lower case characters, numbers, and non-alphanumeric characters.
Location | Any valid location | For information about regions, see Azure Regions.
After selecting the service tier, the number of DTUs or vCores, and the
amount of storage, click Apply.
7. Enter a Collation for the blank database (for this tutorial, use the default
value). For more information about collations, see Collations.
8. Now that you've completed the SQL Database form, click Create to provision
the database. This step may take a few minutes.
9. On the toolbar, click Notifications to monitor the deployment process.
Azure SQL Database creates an IP firewall at the server level. This firewall prevents
external applications and tools from connecting to the server and any databases on the
server unless a firewall rule allows their IP through the firewall. To enable external
connectivity to your database, you must first add an IP firewall rule for your IP address
(or IP address range). Follow these steps to create a server-level IP firewall rule.
Important
Azure SQL Database communicates over port 1433. If you are trying to connect to this
service from within a corporate network, outbound traffic over port 1433 may not be
allowed by your network's firewall. If so, you cannot connect to your database unless your
administrator opens port 1433.
1. After the deployment completes, select SQL databases from the Azure portal
menu or search for and select SQL databases from any page.
2. Select yourDatabase on the SQL databases page. The overview page for
your database opens, showing you the fully qualified Server name (such
as contosodatabaseserver01.database.windows.net) and provides options for further
configuration.
5. Click Add client IP on the toolbar to add your current IP address to a new IP
firewall rule. An IP firewall rule can open port 1433 for a single IP address or
a range of IP addresses.
6. Click Save. A server-level IP firewall rule is created for your current IP address
opening port 1433 on the server.
7. Click OK and then close the Firewall settings page.
Your IP address can now pass through the IP firewall. You can now connect to your
database using SQL Server Management Studio or another tool of your choice. Be sure
to use the server admin account you created previously.
Important
By default, access through the SQL Database IP firewall is enabled for all Azure services.
Click OFF on this page to disable access for all Azure services.
Table 3
Setting | Value | Description
Login | The server admin account | The account that you specified when you created the server.
Password | The password for your server admin account | The password that you specified when you created the server.
Create a database schema with four tables that model a student management system for
universities using Transact-SQL:
Person
Course
Student
Credit
The following diagram shows how these tables are related to each other. Some of these
tables reference columns in other tables. For example, the Student table references
the PersonId column of the Person table. Study the diagram to understand how the tables
in this tutorial are related to one another. For an in-depth look at how to create effective
database tables, see Create effective database tables. For information about choosing
data types, see Data types.
You can also use the table designer in SQL Server Management Studio to create and
design your tables.
SQL
-- Create Person table
CREATE TABLE Person
(
PersonId INT IDENTITY PRIMARY KEY,
FirstName NVARCHAR(128) NOT NULL,
MiddleInitial NVARCHAR(10),
LastName NVARCHAR(128) NOT NULL,
DateOfBirth DATE NOT NULL
)
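The referencing relationship described above (Student pointing at Person's PersonId) can be sketched with SQLite; the column names follow the tutorial's schema, with the types simplified:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
    CREATE TABLE Person (
        PersonId  INTEGER PRIMARY KEY,
        FirstName TEXT NOT NULL,
        LastName  TEXT NOT NULL
    );
    CREATE TABLE Student (
        StudentId INTEGER PRIMARY KEY,
        PersonId  INTEGER NOT NULL REFERENCES Person(PersonId),
        Email     TEXT
    );
""")
conn.execute("INSERT INTO Person VALUES (1, 'Dominick', 'Pope')")
conn.execute("INSERT INTO Student VALUES (1, 1, 'dp@example.com')")

# Inserting a Student whose PersonId has no matching Person row fails.
try:
    conn.execute("INSERT INTO Student VALUES (2, 99, 'x@example.com')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # FOREIGN KEY constraint failed
```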
3. Expand the Tables node under yourDatabase in the Object Explorer to see
the tables you created.
You have now loaded sample data into the tables you created earlier.
Execute the following queries to retrieve information from the database tables. See Write
SQL queries to learn more about writing SQL queries. The first query joins all four tables
to find the students taught by 'Dominick Pope' who have a grade higher than 75%. The
second query joins all four tables and finds the courses in which 'Noe Coleman' has ever
enrolled.
SQL
-- Find the students taught by Dominick Pope who have a grade higher than 75%
SELECT person.FirstName, person.LastName, course.Name, credit.Grade
FROM Person AS person
INNER JOIN Student AS student ON person.PersonId = student.PersonId
INNER JOIN Credit AS credit ON student.StudentId = credit.StudentId
INNER JOIN Course AS course ON credit.CourseId = course.courseId
WHERE course.Teacher = 'Dominick Pope'
AND Grade > 75
SQL
-- Find all the courses in which Noe Coleman has ever enrolled
SELECT course.Name, course.Teacher, credit.Grade
FROM Course AS course
INNER JOIN Credit AS credit ON credit.CourseId = course.CourseId
INNER JOIN Student AS student ON student.StudentId = credit.StudentId
INNER JOIN Person AS person ON person.PersonId = student.PersonId
WHERE person.FirstName = 'Noe'
AND person.LastName = 'Coleman'
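The join pattern used above, walking from Person through Student and Credit to Course, can be exercised on a small SQLite sample (the rows below are made-up illustration data, not the tutorial's sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Person  (PersonId INT, FirstName TEXT, LastName TEXT);
    CREATE TABLE Student (StudentId INT, PersonId INT);
    CREATE TABLE Course  (CourseId INT, Name TEXT, Teacher TEXT);
    CREATE TABLE Credit  (StudentId INT, CourseId INT, Grade REAL);

    INSERT INTO Person  VALUES (1, 'Noe', 'Coleman'), (2, 'Ava', 'Lee');
    INSERT INTO Student VALUES (10, 1), (20, 2);
    INSERT INTO Course  VALUES (100, 'Databases', 'Dominick Pope');
    INSERT INTO Credit  VALUES (10, 100, 88), (20, 100, 60);
""")

# Same four-table join shape as the tutorial query: only Noe's grade passes 75.
rows = conn.execute("""
    SELECT person.FirstName, person.LastName, course.Name, credit.Grade
    FROM Person AS person
    INNER JOIN Student AS student ON person.PersonId = student.PersonId
    INNER JOIN Credit  AS credit  ON student.StudentId = credit.StudentId
    INNER JOIN Course  AS course  ON credit.CourseId = course.CourseId
    WHERE course.Teacher = 'Dominick Pope' AND credit.Grade > 75
""").fetchall()
print(rows)  # [('Noe', 'Coleman', 'Databases', 88.0)]
```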
Combined with a relational database, a cache can store the most common database
queries and return these queries much faster than the database when the application
requests them. Not only can this result in significant reductions in latency, but it also
reduces the load on the database, lowering the need to overprovision. Additionally,
caches are typically better than databases at handling a high throughput of requests—
enabling the application to handle more simultaneous users.
Redis is one of the most popular caching solutions in the market. It is a key-value
datastore that runs in-memory, rather than on disk like most databases. Running in-
memory makes it lightning-fast, and a terrific complement to more deliberate and
consistent primary databases like Azure SQL Database or PostgreSQL. Redis is available
as a fully-managed service on Azure through Azure Cache for Redis, offering automatic
patching and updates, high-availability deployment options, and the latest Redis features.
Azure Cache for Redis can neatly plug into your Azure data infrastructure as a cache,
allowing you to boost data performance.
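The cache-aside pattern described above can be sketched with an in-process dictionary standing in for Redis (a real deployment would use a Redis client pointed at an Azure Cache for Redis endpoint; the table and rows here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'widget')")

cache = {}   # stands in for Redis
db_hits = 0  # counts how often we fall through to the database

def get_product(product_id):
    """Cache-aside: try the cache first, fall back to the database,
    then populate the cache for the next request."""
    global db_hits
    if product_id in cache:
        return cache[product_id]
    db_hits += 1
    row = conn.execute("SELECT name FROM products WHERE id = ?",
                       (product_id,)).fetchone()
    cache[product_id] = row[0]
    return cache[product_id]

print(get_product(1), db_hits)  # widget 1  (miss: database queried)
print(get_product(1), db_hits)  # widget 1  (hit: served from cache)
```

Repeated reads are served from memory, which is what lowers both latency and database load.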
8) Click on “Create”.
There are a plethora of network security threats that businesses should be aware of to
ensure the continuous protection of their systems, software, and data. Let’s review what
we believe to be the top 10 network security threats and solutions that you can use to
protect your network from being compromised by these malicious attacks.
1. Malware/Ransomware
2. Botnets
Although not technically malware, botnets are currently considered one of the biggest
threats on the internet today. These powerful networks of compromised machines can be
remotely controlled and used to launch massive attacks.
Each botnet harnesses a plethora of “zombie” computers that are used to carry out
meticulous Distributed Denial of Service (DDoS) attacks (we’ll get to these later). These
attacks are used to overwhelm the victim until they give in, pay the ransom,
and gain back control of their system.
3. Computer Viruses and Worms
Statistics show that approximately 33% of household computers are affected by some
type of malware, more than half of which are viruses. Viruses are attached to a system
or host file and can lie dormant until inadvertently activated by a timer or event. Worms,
on the other hand, are standalone programs that spread across networks on their own,
sometimes infecting documents and spreadsheets by utilizing macros.
As soon as a virus or worm enters your system, it will immediately go to work in replicating
itself with the sole goal of infecting as many networked systems and inadequately-protected
computers as possible. Transmission of viruses and worms is often done through
email attachments, infected websites, or infected removable drives.
4. Phishing Attacks
Phishing attacks are a form of social engineering that is designed to steal sensitive data
such as passwords, usernames, and credit card numbers. These attacks impersonate
reputable websites, banking institutions, and personal contacts that come in the form of
instant messages or phishing emails designed to appear legitimate. Once you hit reply or
click the embedded URL in these messages, you will be prompted to use your credentials
or enter your financial details which then sends your information to the malicious source.
5. DDoS (Distributed Denial of Service)
Overwhelming hosted servers and causing them to become completely inoperable is the
task of a cyber-attack known as a Distributed Denial of Service (DDoS) attack. According
to statistics, 33% of businesses fall victim to DDoS attacks. DDoS attacks can be
disastrous for companies that make their money operating online (social media, e-
commerce sites, etc.), potentially causing millions of dollars in lost revenue every day the
website is down.
It’s likely that not all of the potentially thousands of computers being used for a DDoS
attack actually belong to the attacker. Instead, we can assume that most of the
compromised computers are added to the attacker’s network by malware and distributed
across the globe via a botnet.
6. Cryptojacking
Even before Bitcoin skyrocketed in 2017, cryptojacking had been the tool of choice for
hackers looking to steal cryptocurrency from unsuspecting victims for their financial
gain. These attacks are similar to worms and viruses, except that instead of corrupting
sensitive data and information, the end goal of cryptojacking is to steal CPU resources.
With cryptojacking exploits, hackers trick their victims into loading mining code onto their
computers and then use that fraudulent code to access the target’s CPU processing
resources to mine for cryptocurrency.
7. APTs (Advanced Persistent Threats)
Advanced Persistent Threats (APTs for short) are cyber-attacks that call for an
unauthorized attacker to code their way into an unsuspecting system network, remaining
there undetected for quite some time. Instead of revealing its position, the APT siphons
financial information and other critical security information away from the victim’s network.
APTs architects are skilled at using a variety of techniques to gain network access; using
malware, exploit kits, and other sophisticated means to do so. Once the attacker has
made it past the network firewall, they sit idle until they discover the login credentials that
let them reach deeper into the network and the data they are after.
Here is a list of some popular network security protocols that you should know so you can
implement them as and when required:
1. IPSec is a protocol suite defined by the IETF IPsec Working Group, which offers
data authentication, integrity, and privacy between two entities. Manual or
dynamic management of cryptographic key associations is done with the help
of an IETF-specified key management protocol named Internet Key Exchange (IKE).
2. SSL, i.e., Secure Sockets Layer is a standard security mechanism used for
preserving a secure internet connection by safeguarding the sensitive data
being sent and received between two systems; it also helps prevent
cybercriminals from reading or modifying personal data, packets, or details
in the network.
3. Secure Shell (SSH) was invented in the year 1995, which is a cryptographic
network security protocol used for securing data communication over a network. It
permits remote command-line login as well as the remote execution of specific
tasks. Various functionalities of FTP are incorporated in SSH. SSH-1 and
SSH-2 are its major versions.
4. HyperText Transfer Protocol Secure (HTTPS) is a secured protocol used to secure
data communication between two or more systems. It sets up an encrypted link with
the help of Secure Sockets Layer (SSL), now known as Transport Layer Security
(TLS). Since data transferred using HTTPS is encrypted, it stops
cybercriminals from interpreting or altering data during
transfer from the browser to the web server. Even if cybercriminals capture the
data packets, they will not be able to read them because of the strong encryption
associated with the data packets.
5. Kerberos is another network authentication protocol that was designed to provide
strong authentication for client-server applications with the help of secret-
key cryptography. The Kerberos protocol assumes that all of its
services and workstations operate on an insecure network, which makes it more
secure and accountable.
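Item 4's point that HTTPS clients verify the server before any data flows is reflected in Python's standard library: a default client-side TLS context requires certificate verification and hostname checking, as a small check shows:

```python
import ssl

# A default client context: this is what wraps the TCP socket carrying HTTPS.
ctx = ssl.create_default_context()

# Verification is on by default: the server certificate must chain to a
# trusted CA and match the requested hostname, or the handshake fails.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```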
Web filtering
Screening of Web sites or pages
E-mail filtering
Screening of e-mail for spam and other objectionable content
Intrusion Detection Systems
Intrusion Detection Systems, also known as Intrusion Detection and Prevention
Systems, are appliances that monitor malicious activities in a network, log
information about such activities, take steps to stop them, and finally report them.
Intrusion detection systems help by raising an alarm against malicious activity in the
network, dropping malicious packets, and resetting connections to block traffic from
offending IP addresses.
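The monitor-and-alert loop at the heart of a signature-based IDS can be sketched in a few lines (the signatures and payloads below are made up for illustration):

```python
# Hypothetical signatures: substring to look for -> alert name.
SIGNATURES = {
    "' OR '1'='1": "SQL injection attempt",
    "/etc/passwd": "path traversal attempt",
}

def inspect(packet_payload: str):
    """Return alerts for any signature found in the payload; a real IDS
    would also log the event, drop the packet, or reset the connection."""
    return [name for sig, name in SIGNATURES.items() if sig in packet_payload]

print(inspect("GET /login?user=' OR '1'='1"))  # ['SQL injection attempt']
print(inspect("GET /index.html"))              # []
```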
The way cloud security is delivered will depend on the individual cloud provider or the
cloud security solutions in place. However, implementation of cloud security processes
should be a joint responsibility between the business owner and solution provider.
The first step in securing your cloud is knowing how your cloud provider secures its
solutions. Public cloud providers like Amazon Web Services (AWS), Microsoft Azure, and
Google Cloud offer proprietary security solutions to help keep cloud deployments in
check. Some providers also partner with third-party companies to independently audit cloud
security or boost the vendor’s own security solutions. If your cloud vendor delivers native
security solutions for the cloud, ensure that you’ve activated them so your provider can
secure your cloud to the best of its ability.
Understand your cloud security weaknesses
Many cloud security tips are a good fit for any organization, but the specific security
problems you need to address will depend on your cloud solutions and the security
problems you’re trying to solve. Perhaps your enterprise is worried about hackers gaining
access to your cloud infrastructure or that sensitive data could be leaked. Your company
may have already suffered a security breach in the past, and you’re looking for a way to
fix the problem. Examine your cloud infrastructure for potential security blind spots and
understand where your cloud security could be boosted.
Implement access control regulations
You don’t want just any user or device to access your cloud environment; only authorized
users should be able to enter your cloud infrastructure. Your company needs to
implement access control regulations to keep unauthorized users out. Many cloud
vendors will provide native access control tools that only allow access to sanctioned
users. This includes identity management, authorization, and authentication protocols.
Ensure your cloud data is encrypted
Train your employees in cloud security
It’s important to keep your company up to speed on maintaining cloud security. Security
threats can come from anywhere, and if they aren’t properly trained on your cloud
environment, they can be a major internal risk. Your company needs to train its employees
on how to use and navigate its cloud deployment; it should also give special training to
your IT team on the security protocols your enterprise uses to control access and protect
data.
1) On the Home page, type “Security” in the Search box, then from the Suggestions click
on “Security”.
2) Within the “Security Service” page, click on “Identity Secure Score”, then go to the
“Improvement Actions”.
The overarching goal for IAM is to ensure that any given identity has access to the right
resources (applications, databases, networks, etc.) and within the correct context.
Identity and Access Management Explained
Identity management is a foundational security component to help ensure users have the
access they need, and that systems, data, and applications are inaccessible to
unauthorized users.
Identity and access management organizational policies define:
How users are identified and the roles they are then assigned
The systems, information, and other areas protected by IAM
The correct levels of protection and access for sensitive data, systems,
information, and locations
Adding, removing, and amending individuals in the IAM system
Adding, removing, and amending a role’s access rights in the IAM system
Technology to Support Identity and Access Management
IAM is typically implemented through centralized technology that either replaces or deeply
integrates with existing access and sign on systems. It uses a central directory of users,
roles, and predefined permission levels to grant access rights to individuals based on
their user role and need to access certain systems, applications, and data.
Role-Based Access
Most IAM technology applies role-based access control (RBAC), using predefined job
roles to control access to individual systems and information. As users join or change
roles in the enterprise, their job role is updated, which should update their access rights
accordingly.
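The idea behind RBAC can be sketched in a few lines of JavaScript (the role and permission names here are illustrative, not from any particular IAM product):

```javascript
// Hypothetical role-to-permission map; in a real IAM system this lives
// in the central directory of roles and predefined permission levels.
const rolePermissions = {
  analyst:  ['read'],
  engineer: ['read', 'write'],
  admin:    ['read', 'write', 'manage-users'],
};

// Grant access only if the user's current role carries the permission.
function canAccess(role, permission) {
  return (rolePermissions[role] || []).includes(permission);
}

// When a user changes roles, only the role assignment is updated;
// the effective permissions follow automatically.
console.log(canAccess('analyst', 'write'));  // false
console.log(canAccess('engineer', 'write')); // true
```

Note that the user is never granted permissions directly; access is always derived from the role, which is what makes adding, removing, and amending a role’s rights a single change.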
IAM Tools
An identity management system typically involves the following areas:
Multi-Factor Authentication
This system uses a combination of something the user knows (e.g. a password),
something the user has (e.g. a security token), and something the user is (e.g. a
fingerprint) to authenticate individuals and grant them access.
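As an illustrative sketch (not any vendor’s API), an MFA policy can be modeled as requiring evidence from at least two distinct factor categories:

```javascript
// Hypothetical MFA policy check: require factors from at least two
// distinct categories (knowledge, possession, inherence).
function mfaSatisfied(presentedFactors) {
  const categories = new Set(presentedFactors.map(f => f.category));
  return categories.size >= 2;
}

// A password alone is single-factor; adding a security token passes.
const passwordOnly = [{ category: 'knowledge', type: 'password' }];
const passwordPlusToken = [
  { category: 'knowledge', type: 'password' },
  { category: 'possession', type: 'security-token' },
];
console.log(mfaSatisfied(passwordOnly));      // false
console.log(mfaSatisfied(passwordPlusToken)); // true
```

The point of counting categories rather than factors is that two passwords are still “something the user knows” twice, and therefore still single-factor.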
This system typically integrates with the employee database and pre-defined job roles to
establish and provide the access employees need to perform their roles.
IAM technology can be provided on-premises, through a cloud-based model (i.e. identity-
as-a-service, or IDaaS), or via a hybrid cloud setup. Practical applications of IAM, and
how it is implemented, differ from organization to organization, and will also be shaped
by applicable regulatory and compliance initiatives.
Sophisticated IAM technology can move beyond simply allowing or blocking access to
data and systems. For example, IAM can:
Restrict access to subsets of data: Specific roles can access only certain parts
of systems, databases, and information.
Only allow view access: Roles can only view data; they cannot add, update, or
amend it.
Only permit access on certain platforms: Users may have access to operational
systems, but not development or testing platforms.
Only allow access to create, amend, or delete data, not to transmit it: Some
roles may not be able to send or receive data outside the system, meaning it
cannot be exposed to other third parties and applications.
Ultimately, there are many ways to implement IAM policies to define and enforce exactly
how individual roles can access systems and data, based on a company’s specific needs.
1) Go to Home, type “User” in the search box, and from the suggestions click on
“Users”.
4) Fill in the form within the same page as follows and click on “Create”.
7) A panel appears, in which we need to check the proper privileges; then click
on “Add”.
2.29.1 Vision
Microsoft’s Vision APIs analyze visual content (images, video and digital ink) and identify
objects within it. The APIs therefore enable apps to authenticate and group faces
according to specific characteristics, or to detect specified objects and details.
Computer Vision. The service helps analyze and enhance the discoverability of
visual content: it extracts and recognizes text, tags and categorizes images,
generates descriptions, and recognizes human faces and other objects.
2.29.2 Speech
Speech APIs help embed speech processing in apps: they convert speech to text and
vice versa, translate text to other languages, and identify speakers. The technology can
be applied in hands-free tools used to dictate text or to read instructions out loud, for
instance.
Speech to Text and Text to Speech, which helps apps transcribe audio to text
and vice versa, with support for 85+ languages
Speech Translation, which enables the transcription and translation of
conversations in real time
Speaker Recognition, which identifies the speaker based on audio content, with
the ability to be used as a means of access control and authentication
Language
Language APIs analyze text to extract meaning from it. They include the following:
Immersive Reader, which helps readers pick out the meaning of the text,
regardless of their abilities
Language Understanding, which teaches apps, smart devices and bots to
understand natural language
QnA Maker, which helps enrich apps with question-and-answer capabilities
Text Analytics, which analyzes text to detect sentiment and key phrases
Translator, which conducts real-time machine translation with multiple-language
support (more than 60 languages)
Decision APIs analyze data and discover relationships and patterns to enable quicker,
smarter, and more efficient decision-making. These include the following:
Search APIs enhance searching on the Internet. These include the following:
6) To create a new “App Service Plan”, click on “Configure required settings”.
8) Enter an “App service plan name” and select the location “India
(Central/South/West)”.
The Azure Face service provides AI algorithms that detect, recognize, and analyze
human faces in images. Facial recognition software is important in many different
scenarios, such as security, natural user interface, image content analysis and
management, mobile apps, and robotics.
The Detect API detects human faces in an image and returns the rectangle coordinates
of their locations. Optionally, face detection can extract a series of face-related attributes,
such as head pose, gender, age, emotion, facial hair, and glasses. These attributes are
general predictions, not actual classifications.
The Verify API builds on Detection and addresses the question, "Do these two images
show the same person?" Verification is also called "one-to-one" matching because the probe
image is compared to only one enrolled template. Verification can be used in identity
verification or access control scenarios to verify a picture matches a previously captured
image (such as from a photo from a government issued ID card). For more information,
see the Facial recognition concepts guide or the Verify API reference documentation.
The Identify API also starts with Detection and answers the question, "Can this detected
face be matched to any enrolled face in a database?" Because it works like a face
recognition search, it is also called "one-to-many" matching. Candidate matches are
returned based on how closely the probe template from the detected face matches each
of the enrolled templates.
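As a rough sketch of one-to-many matching (real services use opaque face templates and their own scoring, so the vectors, names, and threshold below are purely illustrative):

```javascript
// Cosine similarity between two feature vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / Math.sqrt(na * nb);
}

// Score the probe against every enrolled template and return the
// candidates above a confidence threshold, best match first.
function identify(probe, enrolled, threshold = 0.9) {
  return enrolled
    .map(p => ({ name: p.name, confidence: cosine(probe, p.template) }))
    .filter(c => c.confidence >= threshold)
    .sort((x, y) => y.confidence - x.confidence);
}

const enrolled = [
  { name: 'alice', template: [0.9, 0.1, 0.3] },
  { name: 'bob',   template: [0.1, 0.8, 0.5] },
];
console.log(identify([0.88, 0.12, 0.31], enrolled)[0].name); // 'alice'
```

This is why identification cost grows with the size of the enrolled group, whereas verification compares against exactly one template.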
The following image shows an example of a database named "myfriends". Each group
can contain up to 1 million different person objects. Each person object can have up to
248 faces registered.
3) Click on “Create new” and write a Name, then click on “OK” to create a Resource
Group.
15) Remove the image URL and paste in the new image link.
Microsoft currently offers testers access to "Project Ink Analysis" via its Cognitive
Services Labs. Project Ink Analysis "provides cloud APIs to understand digital ink content"
enabling developers to build apps that recognize digital handwriting, common shapes and
the layout of a document.
How it works
The pen is an incredibly powerful and personal tool that allows people to express
themselves in ways no other device can. In recent years, hardware advancements have
brought this into the digital world with new devices that can quickly render beautiful ink,
letting users create content beyond the traditional confines of the typed word. However,
allowing users to create content is only one half of the equation. To truly go beyond the
experience offered by traditional pen and paper, we must be able to understand what the
user has created, which is where Project Ink Analysis comes in.
Project Ink Analysis provides cloud APIs for understanding digital ink content. In addition
to simply recognizing the words written by a user, it also provides information about the
structure of the content, letting you know where the paragraphs, lines, and individual
words are and how they relate to each other. It even understands handwriting written at
an angle! This can enable scenarios such as beautifying the content by normalizing its
alignment and spacing while retaining the content as ink or after converting to text. In
addition, it allows for shape recognition, along with providing information about how to
beautify these shapes (for example turning a user’s not-so-perfect rectangle into a
rectangle with 90-degree angles while maintaining the original size).
Whether you want to convert a user’s ink, recognize its content to enable searching within
it, or beautify the document structure or drawings, Project Ink Analysis provides you the
capabilities you need.
Shape Recognition
Handwriting Recognition
Project Ink Analysis recognizes handwriting in 67 languages.
Layout Analysis
Project Ink Analysis provides grouping and content structure information so you can
beautify a user’s writing, in this case by left-aligning the list items.
2. After entering details, including Free instance selection, click on Review + create.
5. On the left tab, select Keys and Endpoint. Copy key1 for further operations using
Python API calls.
# For each face returned, use the face rectangle and draw a red box.
# (getRectangle is the quickstart helper that converts the face's
# FaceRectangle into the box coordinates PIL expects.)
print('Drawing rectangle around face... see popup for results.')
draw = ImageDraw.Draw(img)
for face in detected_faces:
    draw.rectangle(getRectangle(face), outline='red')
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/Face/Quickstarts/client-libraries?pivots=programming-language-python&tabs=visual-studio
Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/media/apache-spark-overview/map-reduce-vs-spark.png
Apache Spark clusters in HDInsight include the following components that are available
on the clusters by default.
Spark Core. Includes Spark Core, Spark SQL, Spark streaming APIs, GraphX, and
MLlib.
Anaconda
Apache Livy
Jupyter Notebook
Apache Zeppelin notebook
Spark clusters in HDInsight enable the following key scenarios:
Interactive data analysis and BI
Spark Machine Learning
Spark streaming and real-time data analysis
5. Select Pay-as-You-Go and choose a resource group. Give your instance a name,
select the pricing tier Free F0, and click on Review + Create.
9. Visit python.org/downloads, then download and install Python on your system.
6. In the Analysis Services blade, enter the following and then click Create:
Server name: Type a unique name.
Subscription: Select your subscription.
Resource group: Select Create new, and then type a name for your new resource group.
Location: This is the Azure datacenter location that hosts the server. Choose a location
nearest you.
11. Select Microsoft SQL Azure as your data source type and click Next. Fill in the
connection information for the sample SQL Azure database created earlier and click Next.
13. At this step, you can optionally provide a friendly name for each table. For large tables,
which may not fit into cache, you can also specify a filter expression to reduce the number
of rows. When complete, click Next.
Data will now be read from the database and pulled into a local cache within Visual Studio.
Once loading is complete, you will have your first model created and will be able to see
each table and the data within them. You can also switch to a diagram view by clicking
the little diagram option at the bottom right of the screen:
15. Add more business logic to the model by creating calculations and measures.
Deploy
Once your model is complete, you can now deploy it to the Azure AS server which you
created in the first step. This can be done with the following steps:
1. Copy your Azure Analysis Services server name from the Azure portal. This can be
found at the top of the overview section of your server.
3. Change the deployment server to the name of your Azure AS server and click OK.
4. Right click the project name again, but this time click Deploy.
Power BI Desktop
If you don’t already have the Power BI Desktop, you can download it for free.
1. Open the Power BI Desktop
2. Click Get Data.
You will now see your model displayed in the field list on the side. You can drag and drop
the different fields on to your page to build out interactive visuals.
Web Application vs. Website
Created for: A web application is designed for interaction with the end user, whereas a
website mostly consists of static content and is publicly accessible to all visitors.
Deployment: In a web application, all changes require the entire project to be re-compiled
and deployed, whereas in a website small changes never require a full re-compilation and
deployment; you just need to update the HTML code.
HTML vs. HTML5
HTML: Audio and video are not part of HTML. HTML5: Audio and video are part of HTML5.
HTML: Uses cookies to store data. HTML5: Offers local storage instead of cookies.
HTML: Allows you to run JavaScript in the browser. HTML5: Also enables you to run
JavaScript code in the background.
HTML: You can use HTML with all old browsers. HTML5: You can use HTML5 with all
new browsers.
HTML: You can use the browser cache as temporary storage. HTML5: You can use
application cache (database and web storage) as temporary storage.
HTML: Attributes like async, charset, and ping are not present in HTML. HTML5:
Attributes like async, ping, and charset are part of HTML5.
HTML: Does not allow drag-and-drop effects. HTML5: Allows drag-and-drop effects.
HTML: Attributes like tabindex and id apply only to certain elements. HTML5: These
attributes can be applied to all HTML5 elements.
Selector: Selector indicates the HTML element you want to style. It could be any tag like
<h1>, <title> etc.
Declaration Block: The declaration block can contain one or more declarations
separated by a semicolon. For the above example, there are two declarations:
1. color: yellow;
2. font-size: 11px;
Each declaration contains a property name and value, separated by a colon.
CSS Id Selector
The id selector selects the id attribute of an HTML element to select a specific element.
An id is always unique within the page so it is chosen to select a single, unique element.
It is written with the hash character (#), followed by the id of the element.
Let’s take an example with the id "para1".
Internal CSS
The internal style sheet is used to add a unique style for a single document. It is defined
in <head> section of the HTML page inside the <style> tag.
HTML <nav>
The <nav> element is a container for the main block of navigation links. It can contain
links for the same page or for other pages.
HTML <article>
The HTML <article> tag is used to contain a self-contained article, such as a big story,
long article, etc.
HTML <footer>
The HTML <footer> element defines the footer for that document or web page. It mostly
contains information about the author, copyright, other links, etc.
HTML <summary>
The HTML <summary> element is used with the <details> element in a web page. It is
used as a summary or caption for the content of the <details> element.
.nav li{
list-style: none;
display: inline-block;
padding: 8px;
}
.nav a{
color: #fff;
}
.nav ul li a:hover{
text-decoration: none;
color: #7fffd4;
}
.lside{
float: left;
width: 80%;
min-height: 440px;
background-color: #f0f8ff;
text-align: center;
}
.rside{
/* assumed counterpart to .lside; values are illustrative */
float: right;
width: 20%;
min-height: 440px;
background-color: #f0f8ff;
}
.footer p{
color: #8fbc8f;
}
</style>
</head>
<body>
<div>
<div class="header">
<h2>javaTpoint Div Layout</h2>
</div>
<!-- Nav -->
<div class="nav">
<ul>
<li><a href="#">HOME</a></li>
<li><a href="#">MENU</a></li>
<li><a href="#">ABOUT</a></li>
<li><a href="#">CONTACT</a></li>
The statement1 is executed first even before executing the looping code. So, this
statement is normally used to assign values to variables that will be used inside
the loop.
The statement2 is the condition to execute the loop.
The statement3 is executed every time after the looping code is executed.
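The three statements can be seen together in a simple counting loop:

```javascript
// statement1 initializes the counter, statement2 is the condition
// checked before each pass, statement3 advances the counter after
// each pass of the looping code.
let total = 0;
for (let i = 1; i <= 5; i++) {
  total += i;
}
console.log(total); // 15
```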
The “while loop” is executed as long as the specified condition is true. Inside the while
loop, you should include the statement that will end the loop at some point of time.
Otherwise, your loop will never end and your browser may crash.
You can use an If...Else statement if you have to check two conditions and execute a
different set of code.
You can use an If...Else If...Else statement if you want to check more than two conditions.
Example:
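A minimal If...Else If...Else illustration (the marks thresholds are arbitrary):

```javascript
// Check more than two conditions in sequence; the first true branch wins.
function grade(marks) {
  if (marks >= 80) {
    return 'A';
  } else if (marks >= 60) {
    return 'B';
  } else {
    return 'C';
  }
}
console.log(grade(85)); // 'A'
console.log(grade(65)); // 'B'
console.log(grade(40)); // 'C'
```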
3.4.1 Create style.css and add the following code into it.
body{
background-color:black;
}
ul {
list-style-type: none;
margin: 0;
padding: 0;
overflow: hidden;
background-color: #333;
}
li {
float: left;
}
li a {
display: block;
color: white;
text-align: center;
padding: 14px 16px;
text-decoration: none;
}
li a:hover {
background-color: #111;
}
* {box-sizing: border-box}
body {font-family: Verdana, sans-serif; margin:0}
.mySlides {display: none}
img {vertical-align: middle;}
/* Slideshow container */
.slideshow-container {
max-width: 1000px;
position: relative;
margin: auto;
}
/* Caption text */
.text {
color: #f2f2f2;
font-size: 15px;
padding: 8px 12px;
position: absolute;
bottom: 8px;
width: 100%;
text-align: center;
}
/* The dots/bullets/indicators */
.dot {
cursor: pointer;
height: 15px;
.active, .dot:hover {
background-color: #717171;
}
/* Fading animation */
.fade {
-webkit-animation-name: fade;
-webkit-animation-duration: 1.5s;
animation-name: fade;
animation-duration: 1.5s;
}
@-webkit-keyframes fade {
from {opacity: .4}
to {opacity: 1}
}
@keyframes fade {
from {opacity: .4}
to {opacity: 1}
}
form{
margin-left:200px;
}
3.4.2 Create sample.js file and add the following code into it.
var slideIndex = 1;
showSlides(slideIndex);
function plusSlides(n) {
showSlides(slideIndex += n);
}
function currentSlide(n) {
showSlides(slideIndex = n);
}
function showSlides(n) {
var i;
var slides = document.getElementsByClassName("mySlides");
var dots = document.getElementsByClassName("dot");
if (n > slides.length) {slideIndex = 1}
if (n < 1) {slideIndex = slides.length}
for (i = 0; i < slides.length; i++) {
slides[i].style.display = "none";
}
for (i = 0; i < dots.length; i++) {
dots[i].className = dots[i].className.replace(" active", "");
}
slides[slideIndex-1].style.display = "block";
dots[slideIndex-1].className += " active";
}
3.4.3 Create and index.html file and add the following code into it.
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="style.css">
<script type="text/javascript" src="sample.js" defer></script>
</head>
<body>
<div>
<ul>
<li><a class="active" href="#home">Home</a></li>
<li><a href="#news">News</a></li>
<li><a href="#contact">Contact</a></li>
</ul>
</div>
<br>
<div style="text-align:center">
<span class="dot" onclick="currentSlide(1)"></span>
<span class="dot" onclick="currentSlide(2)"></span>
<span class="dot" onclick="currentSlide(3)"></span>
</div>
<!-- Each slide is wrapped in .mySlides so sample.js can cycle them -->
<div class="slideshow-container">
<div class="mySlides fade">
<img src="https://www.w3schools.com/howto/img_avatar.png" alt="Avatar">
</div>
<div class="mySlides fade">
<img src="https://www.w3schools.com/howto/img_avatar2.png" alt="Avatar">
</div>
<div class="mySlides fade">
<img src="https://www.w3schools.com/howto/img_avatar.png" alt="Avatar">
</div>
</div>
<div class="footer">
<p>Copyright@FICE2021 </p>
</div>
</body>
</html>
To create a function in JavaScript, we first use the keyword function, followed by the
name of the function and its parameters within parentheses. The part of the function
inside the curly braces {} is the body of the function.
3.6.2 Function Definition
Before using a user-defined function in JavaScript, we have to create one. We can use
the above syntax to create a function in JavaScript. A function definition is sometimes
also termed a function declaration or function statement.
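For example, a minimal function definition and call:

```javascript
// The function keyword, then the name and parameters in parentheses;
// the body goes inside the curly braces.
function add(a, b) {
  return a + b;
}
console.log(add(2, 3)); // 5
```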
if (name.value == "") {
window.alert("Please enter your name.");
name.focus();
return false;
}
if (address.value == "") {
window.alert("Please enter your address.");
address.focus();
return false;
}
if (email.value == "") {
window.alert(
"Please enter a valid e-mail address.");
email.focus();
return false;
}
if (phone.value == "") {
window.alert(
"Please enter your telephone number.");
phone.focus();
return false;
}
if (password.value == "") {
window.alert("Please enter your password");
password.focus();
return false;
}
if (what.selectedIndex < 1) {
alert("Please enter your course.");
what.focus();
return false;
}
return true;
}
</script>
form {
margin: 0 auto;
width: 600px;
}</style>
3.7.4 Output:
Browser dependency: Client-side scripting usually depends on the browser and its
version. Server-side scripting can use any server-side technology and does not depend
on the client.
Advantages: Client-side scripting brings advantages like faster response times and a
more interactive application. The primary advantage of server-side scripting is its ability
to highly customize responses and access rights based on the user.
Security: Client-side scripting does not provide security for data. Server-side scripting
provides more security for data.
Languages: HTML, CSS, and JavaScript are used on the client side. PHP, Python, Java,
and Ruby are used on the server side.
GET: In the GET method, after the submission of the form, the form values will be visible
in the address bar of the new browser tab. It has a limited size of about 3000 characters.
It is only useful for non-secure data not for sensitive information.
POST: In the POST method, after the submission of the form, the form values will not be
visible in the address bar of the new browser tab, as they were in the GET method. It
appends form data inside the body of the HTTP request. It has no size limitation. This
method does not support bookmarking the result.
Syntax:
<form method="get|post">
Get vs. Post
There are many differences between the Get and Post request. Let's see these
differences:
1) In case of a GET request, only a limited amount of data can be sent, because the data
is sent in the header. In case of a POST request, a large amount of data can be sent,
because the data is sent in the body.
2) A GET request is not secure, because the data is exposed in the URL bar. A POST
request is more secure, because the data is not exposed in the URL bar.
5) A GET request is more efficient and used more than POST. A POST request is less
efficient and used less than GET.
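The difference in how the two methods carry data can be sketched without any network call (the URL and field names below are illustrative):

```javascript
// GET: form values are appended to the URL as a query string,
// visible in the address bar and limited in size.
const params = new URLSearchParams({ user: 'alice', course: 'cloud' });
const getUrl = 'https://example.com/login.php?' + params.toString();
console.log(getUrl); // https://example.com/login.php?user=alice&course=cloud

// POST: the same values travel in the HTTP request body instead,
// so they never appear in the address bar.
const postBody = params.toString();
console.log(postBody); // user=alice&course=cloud
```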
Example:
Create a login page and form to post data to server script and use it
Using Post Method:
Create a login.php file and add the following code into it.
IaaS quickly scales up and down with demand, letting you pay only for what you use. It
helps you avoid the expense and complexity of buying and managing your own physical
servers and other datacenter infrastructure. Each resource is offered as a separate
service component and you only need to rent a particular one for as long as you need it.
A cloud computing service provider, such as Azure, manages the infrastructure, while you
purchase, install, configure and manage your own software—operating systems,
middleware and applications.
Test and development. Teams can quickly set up and dismantle test and development
environments, bringing new applications to market faster. IaaS makes it quick and
economical to scale dev-test environments up and down.
Website hosting. Running websites using IaaS can be less expensive than traditional
web hosting.
Storage, backup and recovery. Organizations avoid the capital outlay for storage and
complexity of storage management, which typically requires a skilled staff to manage data
and meet legal and compliance requirements. IaaS is useful for handling unpredictable
demand and steadily growing storage needs. It can also simplify planning and
management of backup and recovery systems.
PaaS allows you to avoid the expense and complexity of buying and managing software
licenses, the underlying application infrastructure and middleware, container
orchestrators such as Kubernetes or the development tools and other resources. You
manage the applications and services you develop and the cloud service provider
typically manages everything else.
Development framework. PaaS provides a framework that developers can build upon
to develop or customize cloud-based applications. Similar to the way you create an Excel
macro, PaaS lets developers create applications using built-in software components.
Cloud features such as scalability, high-availability and multi-tenant capability are
included, reducing the amount of coding that developers must do.
Software as a service (SaaS) allows users to connect to and use cloud-based apps over
the Internet. Common examples are email, calendaring and office tools (such as Microsoft
Office 365).
If you have used a web-based email service such as Outlook, Hotmail or Yahoo! Mail,
then you have already used a form of SaaS. With these services, you log into your
account over the Internet, often from a web browser. The email software is located on the
service provider’s network and your messages are stored there as well. You can access
your email and stored messages from a web browser on any computer or Internet-
connected device.
The previous examples are free services for personal use. For organizational use, you
can rent productivity apps, such as email, collaboration and calendaring; and
sophisticated business applications such as customer relationship management (CRM),
enterprise resource planning (ERP) and document management. You pay for the use of
these apps by subscription or according to the level of use.
Bring your code or container using the framework or language of your choice.
Increase developer productivity with tight integration of Visual Studio Code and
Visual Studio.
Streamline CI/CD with Git, GitHub, GitHub Actions, Atlassian Bitbucket, Azure
DevOps, Docker Hub and Azure Container Registry.
Reduce downtime and minimize risk for app updates by using deployment slots.
3.11.2 Features:
1. Fully managed platform with built-in infrastructure maintenance, security patching
and scaling.
2. Built-in CI/CD integration and zero-downtime deployments.
3. Integration with virtual networks and ability to run in an isolated and dedicated App
Service environment.
4. Rigorous security and compliance, including SOC and PCI, for seamless
deployments across public cloud, Azure Government and on-premises
environments
2. Select App Services from Dashboard Menu, or search for App Services in resource
search bar
8. On Review page, select Create button to finally create App Service Instance in
Azure Cloud
AWS Fargate is a Serverless compute engine for containers that works with both Amazon
Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate
makes it easy for you to focus on building your applications. Fargate removes the need
to provision and manage servers, lets you specify and pay for resources per application,
and improves security through application isolation by design.
Fargate allocates the right amount of compute, eliminating the need to choose instances
and scale cluster capacity. You only pay for the resources required to run your containers,
so there is no over-provisioning and paying for additional servers. Fargate runs each task
or pod in its own kernel providing the tasks and pods their own isolated compute
environment. This enables your application to have workload isolation and improved
security by design. This is why customers such as Vanguard, Accenture, Foursquare, and
Ancestry have chosen to run their mission critical applications on Fargate.
Amazon API Gateway
Amazon API Gateway is a fully managed service that makes it easy for developers to
create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front
door" for applications to access data, business logic, or functionality from your backend
services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that
enable real-time two-way communication applications.
API Gateway handles all the tasks involved in accepting and processing up to hundreds
of thousands of concurrent API calls, including traffic management, CORS support,
authorization and access control, throttling, monitoring, and API version management.
API Gateway has no minimum fees or startup costs. You pay for the API calls you receive
and the amount of data transferred out and, with the API Gateway tiered pricing model,
you can reduce your cost as your API usage scales.
Amazon Aurora Serverless
Manually managing database capacity can take up valuable time and can lead to
inefficient use of database resources. With Aurora Serverless, you simply create a
database endpoint, optionally specify the desired database capacity range, and connect
your applications. You pay on a per-second basis for the database capacity you use when
the database is active, and migrate between standard and Serverless configurations with
a few clicks in the Amazon RDS Management Console.
2. Select “Function App” or search for “Function App” from search bar
5. Select Runtime, Region and Version info. Finally click on Review and Create
Button
When people use the term “NoSQL database”, they typically use it to refer to any non-
relational database. Some say the term “NoSQL” stands for “non SQL” while others say
it stands for “not only SQL.” Either way, most agree that NoSQL databases are databases
that store data in a format other than relational tables.
NoSQL databases emerged in the late 2000s as the cost of storage dramatically
decreased. Gone were the days of needing to create a complex, difficult-to-manage data
model simply for the purposes of reducing data duplication. Developers (rather than
storage) were becoming the primary cost of software development, so NoSQL databases
optimized for developer productivity.
As storage costs rapidly decreased, the amount of data applications needed to store and
query increased. This data came in all shapes and sizes—structured, semi-structured,
and polymorphic—and defining the schema in advance became nearly impossible.
NoSQL databases allow developers to store huge amounts of unstructured data, giving
them a lot of flexibility.
Cloud computing also rose in popularity, and developers began using public clouds to
host their applications and data. They wanted the ability to distribute data across multiple
servers and regions to make their applications resilient, to scale-out instead of scale-up,
and to intelligently geo-place their data. Some NoSQL databases like MongoDB provided
these capabilities.
Key-value databases are a simpler type of database where each item contains
keys and values. A value can typically only be retrieved by referencing its key, so
learning how to query for a specific key-value pair is typically simple. Key-value
databases are great for use cases where you need to store large amounts of data
but you don’t need to perform complex queries to retrieve it. Common use cases
include storing user preferences or caching. Redis and DynamoDB are popular
key-value databases.
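A key-value store can be sketched with a plain JavaScript Map (the key naming scheme is illustrative):

```javascript
// Minimal key-value store in the spirit of Redis/DynamoDB lookups:
// a value can only be retrieved by its key; there are no joins or
// complex queries, just get/set/has by key.
const store = new Map();

store.set('user:42:prefs', { theme: 'dark', lang: 'en' });
store.set('session:9f3a', { userId: 42 });

console.log(store.get('user:42:prefs').theme); // 'dark'
console.log(store.has('user:999:prefs'));      // false
```

Embedding structure into the key (here, `user:<id>:prefs`) is a common convention precisely because the key is the only access path.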
Wide-column stores store data in tables, rows, and dynamic columns. Wide-
column stores provide a lot of flexibility over relational databases because each
row is not required to have the same columns. Many consider wide-column stores
to be two-dimensional key-value databases. Wide-column stores are great for
when you need to store large amounts of data and you can predict what your query
patterns will be. Wide-column stores are commonly used for storing Internet of
Things data and user profile data. Cassandra and HBase are two of the most
popular wide-column stores.
Graph databases store data in nodes and edges. Nodes typically store
information about people, places, and things while edges store information about
the relationships between the nodes. Graph databases excel in use cases where
you need to traverse relationships to look for patterns such as social networks,
fraud detection, and recommendation engines. Neo4j and JanusGraph are
examples of graph databases.
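The kind of relationship traversal graph databases optimize for can be sketched with a plain adjacency list (the names and "follows" edges are illustrative):

```javascript
// Toy graph: nodes are people, edges are "follows" relationships.
const follows = {
  alice: ['bob'],
  bob:   ['carol', 'dave'],
  carol: [],
  dave:  ['alice'],
};

// Breadth-first traversal: everyone reachable from a starting node,
// the shape of a friends-of-friends or recommendation query.
function reachable(start) {
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length) {
    for (const next of follows[queue.shift()] || []) {
      if (!seen.has(next)) { seen.add(next); queue.push(next); }
    }
  }
  seen.delete(start);
  return [...seen].sort();
}

console.log(reachable('alice').join(', ')); // bob, carol, dave
```

In a relational database the same query needs a self-join per hop; a graph database stores the edges directly, so traversal depth is not fixed in advance.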
The SQL Case. For an SQL database, setting up a database for addresses begins with
the logical construction of the format and the expectation that the records to be stored are
going to remain relatively unchanged. After analyzing the expected query patterns, an
SQL database might optimize storage in two tables, one for basic information and one
pertaining to being a customer, with last name being the key to both tables. Each row in
each table is a single customer, and each column has the following fixed attributes:
Last name :: first name :: middle initial :: address fields :: email address :: phone
number
Last name :: date of birth :: account number :: customer years :: communication
preferences
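The two-table layout above can be sketched with SQLite for illustration. Column and table names follow the text; the sample customer is invented, and in practice a surrogate key would be safer than last name:

```python
import sqlite3

# Build the two tables described in the text, keyed by last name.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE basic_info (
    last_name TEXT PRIMARY KEY, first_name TEXT, middle_initial TEXT,
    address TEXT, email TEXT, phone TEXT
);
CREATE TABLE customer_info (
    last_name TEXT PRIMARY KEY, date_of_birth TEXT, account_number TEXT,
    customer_years INTEGER, communication_prefs TEXT
);
""")
conn.execute("INSERT INTO basic_info VALUES "
             "('Rao','Asha','K','12 Main St','asha@example.com','555-0101')")
conn.execute("INSERT INTO customer_info VALUES "
             "('Rao','1990-04-02','AC-1001',5,'email')")

# Last name is the key to both tables, so a JOIN reassembles the record.
row = conn.execute("""
    SELECT b.first_name, b.last_name, c.account_number, c.customer_years
    FROM basic_info b JOIN customer_info c ON b.last_name = c.last_name
""").fetchone()
print(row)  # ('Asha', 'Rao', 'AC-1001', 5)
```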
The NoSQL Case. In the section Types of NoSQL Databases above, there were four
types described, and each has its own data model.
Each type of NoSQL database would be designed with a specific customer situation in
mind, and there would be technical reasons for how each kind of database would be
organized. The simplest type to describe is the document database, in which it would be
natural to combine both the basic information and the customer information in one JSON
document. In this case, each of the SQL column attributes would be fields and the details
of a customer’s record would be the data values associated with each field.
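Concretely, the same customer could be held as one JSON document, with the columns from both SQL tables becoming fields of a single record. The field names and sample values here are illustrative:

```python
import json

# One customer as a single document: basic information and customer
# information combined, so no JOIN is needed to reassemble the record.
customer_doc = {
    "last_name": "Rao",
    "first_name": "Asha",
    "middle_initial": "K",
    "address": "12 Main St",
    "email": "asha@example.com",
    "phone": "555-0101",
    "date_of_birth": "1990-04-02",
    "account_number": "AC-1001",
    "customer_years": 5,
    "communication_preferences": ["email"],
}

# A document store persists and retrieves the whole record at once.
serialized = json.dumps(customer_doc)
restored = json.loads(serialized)
print(restored["account_number"])  # AC-1001
```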
1998 - Carlo Strozzi uses the term NoSQL for his lightweight, open-source relational
database
2000 - Graph database Neo4j is launched
2004 - Google BigTable is launched
2005 - CouchDB is launched
2007 - The research paper on Amazon Dynamo is released
2008 - Facebook open-sources the Cassandra project
2009 - The term NoSQL is reintroduced
Offer easy-to-use interfaces for storing and querying data; the provided APIs
allow low-level data manipulation and selection methods
Mostly use text-based protocols, typically HTTP REST with JSON
Mostly use non-standard, database-specific query languages
Web-enabled databases running as internet-facing services
Distributed
No standardization rules
Limited query capabilities
RDBMS databases and tools are comparatively mature
Does not offer traditional database capabilities, such as consistency when
multiple transactions are performed simultaneously
As the volume of data grows, maintaining unique keys becomes increasingly
difficult
Doesn't work as well with relational data
The learning curve is steep for new developers
Open-source options are less widely adopted by enterprises
Fully managed and cost-effective serverless database with instant, automatic scaling
that responds to application needs
2. Select Azure Cosmos DB from the services list, or search for Cosmos DB in the search
bar
13. Your Cosmos DB account is ready for entering data and performing CRUD operations.
DevOps is the combination of cultural philosophies, practices, and tools that increases an
organization’s ability to deliver applications and services at high velocity: evolving and
improving products at a faster pace than organizations using traditional software
development and infrastructure management processes. This speed enables
organizations to better serve their customers and compete more effectively in the market.
In some DevOps models, quality assurance and security teams may also become more
tightly integrated with development and operations and throughout the application
lifecycle. When security is the focus of everyone on a DevOps team, this is sometimes
referred to as DevSecOps.
These teams use practices to automate processes that historically have been manual
and slow. They use a technology stack and tooling which help them operate and evolve
applications quickly and reliably. These tools also help engineers independently
accomplish tasks (for example, deploying code or provisioning infrastructure) that
normally would have required help from other teams, and this further increases a team’s
velocity.
Move at high velocity so you can innovate for customers faster, adapt to changing markets
better, and grow more efficient at driving business results. The DevOps model enables
your developers and operations teams to achieve these results. For example,
microservices and continuous delivery let teams take ownership of services and then
release updates to them quicker.
Increase the frequency and pace of releases so you can innovate and improve your
product faster. The quicker you can release new features and fix bugs, the faster you can
respond to your customers’ needs and build competitive advantage. Continuous
integration and continuous delivery are practices that automate the software release
process, from build to deploy.
Reliability
Ensure the quality of application updates and infrastructure changes so you can reliably
deliver at a more rapid pace while maintaining a positive experience for end users. Use
practices like continuous integration and continuous delivery to test that each change is
functional and safe. Monitoring and logging practices help you stay informed of
performance in real-time.
Improved Collaboration
Build more effective teams under a DevOps cultural model, which emphasizes values
such as ownership and accountability. Developers and operations teams collaborate
closely, share many responsibilities, and combine their workflows. This reduces
inefficiencies and saves time (e.g. reduced handover periods between developers and
operations, writing code that takes into account the environment in which it is run).
Security
Move quickly while retaining control and preserving compliance. You can adopt a DevOps
model without sacrificing security by using automated compliance policies, fine-grained
controls, and configuration management techniques. For example, using infrastructure as
code and policy as code, you can define and then track compliance at scale.
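One way to picture "policy as code" is as compliance rules written as plain functions that run automatically against every resource definition. The rules, resource format, and field names below are invented for illustration; real tools express this differently:

```python
# A minimal policy-as-code sketch: each policy is a named predicate
# that is evaluated against a resource definition (all illustrative).
POLICIES = [
    ("storage must be encrypted",
     lambda r: r.get("type") != "storage" or r.get("encrypted") is True),
    ("resources must carry an owner tag",
     lambda r: "owner" in r.get("tags", {})),
]

def check_compliance(resource):
    """Return the names of all policies the resource violates."""
    return [name for name, rule in POLICIES if not rule(resource)]

good = {"type": "storage", "encrypted": True, "tags": {"owner": "team-a"}}
bad  = {"type": "storage", "encrypted": False, "tags": {}}
print(check_compliance(good))  # []
print(check_compliance(bad))   # both policies are violated
```

Because the rules are code, they can run in a CI pipeline on every change, which is what makes tracking compliance at scale feasible.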
Organizations monitor metrics and logs to see how application and infrastructure
performance impacts the experience of their product’s end user. By capturing,
categorizing, and then analyzing data and logs generated by applications and
infrastructure, organizations understand how changes or updates impact users, shedding
insights into the root causes of problems or unexpected changes. Active monitoring
becomes increasingly important as services must be available 24/7 and as application
and infrastructure update frequency increases. Creating alerts or performing real-time
analysis of this data also helps organizations more proactively monitor their services.
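A minimal sketch of such real-time analysis is a rolling-window alert: keep the last few metric samples and fire when their average crosses a threshold. The window size and threshold below are arbitrary, chosen only for illustration:

```python
from collections import deque

class LatencyMonitor:
    """Threshold alerting over a rolling window of latency samples
    (window and threshold values are illustrative)."""

    def __init__(self, window=5, threshold_ms=200.0):
        self.samples = deque(maxlen=window)
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        """Record one sample; return True when an alert should fire,
        i.e. when the rolling average exceeds the threshold."""
        self.samples.append(latency_ms)
        average = sum(self.samples) / len(self.samples)
        return average > self.threshold_ms

monitor = LatencyMonitor()
alerts = [monitor.record(ms) for ms in [120, 130, 110, 450, 480]]
print(alerts)  # the last two samples push the rolling average over 200 ms
```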
Communication and Collaboration
The steps that form a CI/CD pipeline are distinct subsets of tasks grouped into what is
known as a pipeline stage. Typical pipeline stages include build, test, release, and deploy.
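The idea of ordered stages that halt on the first failure can be sketched as follows. The stage names and their stand-in bodies are invented; a real pipeline would run compilers, test suites, and deployment tooling in their place:

```python
# A CI/CD pipeline modeled as an ordered list of stages. Each stage
# either succeeds (returns a result) or raises; the run stops at the
# first failure, as a real pipeline would.
def build():  return "artifact-1.0"
def test():   return "all tests passed"
def deploy(): return "deployed to staging"

PIPELINE = [("build", build), ("test", test), ("deploy", deploy)]

def run_pipeline(stages):
    log = []
    for name, stage in stages:
        try:
            result = stage()
        except Exception as exc:
            log.append((name, f"FAILED: {exc}"))
            break  # later stages never run after a failure
        log.append((name, result))
    return log

for name, outcome in run_pipeline(PIPELINE):
    print(f"{name}: {outcome}")
```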
The microservice architecture enables the rapid, frequent and reliable delivery of large,
complex applications. It also enables an organization to evolve its technology stack.
Think of your last visit to an online retailer. You might have used the site's search bar to
browse products. That search represents a service. Maybe you also saw
recommendations for related products; that recommendation engine is another, separate
service.
In the early days of app development, even minimal changes to an existing app required
a wholesale version update with its own quality assurance (QA) cycle, potentially slowing
down many sub-teams. This approach is often referred to as "monolithic" because the
source code for the entire app was built into a single deployment unit (like .war or .ear).
If updates to part of an app caused errors, the whole thing had to be taken offline, scaled
back, and fixed. While this approach is still viable for small applications, growing
enterprises can’t afford downtime.
Microservices can communicate with each other, usually statelessly, so apps built in this
way can be more fault tolerant and less reliant on a single enterprise service bus (ESB).
This also allows dev teams
to choose their own tools, since microservices can communicate through language-
agnostic application programming interfaces (APIs).
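A language-agnostic contract can be pictured as JSON in, JSON out: the caller only needs to know the message format, never the implementation language, and a stateless handler carries all context in the request itself. The service name, fields, and lookup table below are invented for illustration:

```python
import json

def recommendations_service(request_body: str) -> str:
    """A stateless microservice handler sketch: every request carries
    all the context needed to answer it, so any replica can serve it
    (service and field names are illustrative)."""
    request = json.loads(request_body)
    product = request["product"]
    # Stand-in logic for a real recommendation lookup.
    related = {"laptop": ["mouse", "dock"], "mouse": ["mousepad"]}
    return json.dumps({"product": product,
                       "recommendations": related.get(product, [])})

# A client in any language would send the same JSON over HTTP.
response = recommendations_service('{"product": "laptop"}')
print(response)
```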
As demand for certain services grows, you can deploy across multiple servers and
infrastructures to meet your needs.
Resilient
These independent services, when constructed properly, do not impact one another. This
means that if one piece fails, the whole app doesn’t go down, unlike the monolithic app
model.
Because your microservice-based apps are more modular and smaller than traditional,
monolithic apps, the risks that came with monolithic deployments are greatly reduced.
Microservices require more coordination, which a service mesh layer can help with, but
the payoffs can be huge.
Accessible
Because the larger app is broken down into smaller pieces, developers can more easily
understand, update, and enhance those pieces, resulting in faster development cycles,
especially when combined with agile development methodologies.
More open
Due to the use of polyglot APIs, developers have the freedom to choose the best
language and technology for the necessary function.
Container images become containers at runtime and in the case of Docker containers -
images become containers when they run on Docker Engine. Available for both Linux and
Windows-based applications, containerized software will always run the same, regardless
of the infrastructure. Containers isolate software from its environment and ensure that it
works uniformly despite environmental differences, for instance between development
and staging.
Containers are an abstraction at the app layer that packages code and dependencies
together. Multiple containers can run on the same machine and share the OS kernel with
other containers, each running as an isolated process in user space. Containers take up
less space than VMs (container images are typically tens of MBs in size), can handle
more applications, and require fewer VMs and operating systems.
Virtual Machines
Virtual machines (VMs) are an abstraction of physical hardware turning one server into
many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM
includes a full copy of an operating system, the application, necessary binaries and
libraries - taking up tens of GBs. VMs can also be slow to boot.
When developers build and package their applications into containers and provide them
to IT to run on a standardised platform, this reduces the overall effort to deploy
applications and can streamline the whole dev and test cycle. This also increases
collaboration and efficiency between dev and operations teams to ship apps faster.
Containers provide a standardized format for packaging and holding all the components
necessary to run the desired application. This solves the typical problem of “It works on
my machine” and allows for portability between OS platforms and between clouds. Any
time a container is deployed anywhere, it executes in a consistent environment that
remains unchanged from one deployment to another. You now have a consistent format,
from dev box to production.
Rapid scalability
Since containers do not have the overhead typical of VMs, including separate OS
instances, many more containers can be supported on the same infrastructure. The
lightweight nature of containers means they can be started and stopped quickly, unlocking
rapid scale-up and scale-down scenarios.
3. Your registry will be created. Upload Docker images to the registry and deploy them
directly to a container instance in the next steps
5. Add details (container instance name, type of instance, starter app) and click
Review + Create
4.6.2 Security
The Security pillar includes the ability to protect data, systems, and assets to take
advantage of cloud technologies to improve your security. You can find prescriptive
guidance on implementation in the Security Pillar whitepaper.
Design Principles
There are seven design principles for security in the cloud:
Implement a strong identity foundation
Enable traceability
Apply security at all layers
Automate security best practices
Protect data in transit and at rest
Keep people away from data
Prepare for security events
4.6.3 Reliability
The Reliability pillar encompasses the ability of a workload to perform its intended function
correctly and consistently when it’s expected to. This includes the ability to operate and
test the workload through its total lifecycle. You can find prescriptive guidance on
implementation in the Reliability Pillar whitepaper.
Design Principles
There are five design principles for reliability in the cloud:
Automatically recover from failure
Test recovery procedures
Scale horizontally to increase aggregate workload availability
Stop guessing capacity
Manage change in automation
Best Practices
To achieve reliability, you must start with the foundations—an environment where service
quotas and network topology accommodate the workload. The workload architecture of
the distributed system must be designed to prevent and mitigate failures. The workload
must handle changes in demand or requirements, and it must be designed to detect
failure and automatically heal itself.
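The "automatically recover from failure" principle is often realized as retry with exponential backoff around an unreliable call. The sketch below shortens the waits to nothing so it runs instantly; real code would actually sleep between attempts, and the flaky operation is simulated:

```python
def call_with_retries(operation, max_attempts=3):
    """Retry an operation with exponential backoff, surfacing the
    error only after the final attempt (a self-healing sketch)."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up and surface the failure
            # In production: time.sleep(delay)
            delay *= 2  # exponential backoff between attempts

# Simulated transient failure: the first two calls fail, the third works.
attempts = {"count": 0}
def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"

print(call_with_retries(flaky))  # succeeds on the third attempt
```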
Before architecting any system, foundational requirements that influence reliability should
be in place. For example, you must have sufficient network bandwidth to your data center.
These requirements are sometimes neglected (because they are beyond a single
project’s scope).
2. Select App Services from the Dashboard menu, or search for App Services in the
resource search bar
9. To access your sample application, from the App page click Browse or the URL link.
You can also view monitoring information via the Monitor link in the options tab on
the left.
4.8.2 Features
Unified
Store and analyse all your operational telemetry in a centralised, fully managed, scalable
data store that is optimised for performance and cost.
Intelligent
Test your hypotheses and reveal hidden patterns using the advanced analytic engine,
interactive query language and built-in machine learning constructs.
Open
Integrate with popular DevOps, issue management, IT service management and security
information and event management tools.
4.8.3 Uses
Monitor your applications
Get everything you need to monitor the availability, performance and usage of your web
applications, whether they are hosted on Azure or on-premises. Azure Monitor supports
popular languages and frameworks, such as .NET, Java and Node.js and integrates with
DevOps processes and tools like Azure DevOps, Jira and PagerDuty. Track live metrics
streams, requests and response times and events.
Reference: https://azure.microsoft.com/en-in/services/monitor/#features
Get reliable event delivery at massive scale. Simplify your event-based apps with Event
Grid, a single service for managing routing of all events from any source to any
destination. Designed for high availability, consistent performance and dynamic scale,
Event Grid lets you focus on your app logic rather than infrastructure.
Event Hubs
Simple, secure and scalable real-time data ingestion. Event Hubs is a fully managed, real-
time data ingestion service that is simple, trusted and scalable. Stream millions of events
per second from any source.
Integrate seamlessly with other Azure services to unlock valuable insights. Allow existing
Apache Kafka clients and applications to talk to Event Hubs without any code changes—
you get a managed Kafka experience without having to manage your own clusters.
Experience real-time data ingestion and micro-batching on the same stream.
Azure Relay
The Azure Relay service enables you to securely expose services that run in your
corporate network to the public cloud. You can do so without opening a port on your
firewall, or making intrusive changes to your corporate network infrastructure. The relay
service supports the following scenarios between on-premises services and applications
running in the cloud or in another on-premises environment.
Azure Relay differs from network-level integration technologies such as VPN. An Azure
relay can be scoped to a single application endpoint on a single machine. The VPN
technology is far more intrusive, as it relies on altering the network environment.
Azure SignalR Service
Easily add real-time web functionality to applications. With Azure SignalR Service, adding
real-time communications to your web application is as simple as provisioning a service—
no need to be a real-time communications guru!
Focus on your core business instead of managing infrastructure. You do not have to
provision and maintain servers just because you need real-time features in your solution.
SignalR Service is fully managed which makes it easy to add real-time communication
functionality to your application. No more worrying about hosting, scalability, load
balancing and such details!
Take advantage of the full spectrum of Azure services. Benefit from everything Azure has
to offer! Easily integrate with services such as Azure Functions, Azure Active Directory,
Azure Storage, Azure App Service, Azure Analytics, Power BI, IoT, Cognitive Services,
Machine Learning and more.
Enterprise-ready, managed cluster service for open-source analytics. Run popular open-
source frameworks—including Apache Hadoop, Spark, Hive, Kafka, and more—using
Azure HDInsight, a customizable, enterprise-grade service for open-source analytics.
Effortlessly process massive amounts of data and get all the benefits of the broad open-
source project ecosystem with the global scale of Azure. Easily migrate your big data
workloads and processing to the cloud.
Open-source projects and clusters are easy to spin up quickly without the need to
install hardware or manage infrastructure
Big data clusters reduce costs through autoscaling and pricing tiers that allow you
to pay for only what you use
Enterprise-grade security and industry-leading compliance with more than 30
certifications helps protect your data
Optimized components for open-source technologies such as Hadoop and Spark
keep you up to date
IoT Hub
Managed service for bidirectional communication between IoT devices and Azure. Enable
highly secure and reliable communication between your Internet of Things (IoT)
application and the devices it manages. Azure IoT Hub provides a cloud-hosted solution
back end to connect virtually any device. Extend your solution from the cloud to the edge
with per-device authentication, built-in device management and scaled provisioning.
2. Select App Services from the Dashboard menu, or search for App Services in the
resource search bar
9. To access your sample application, from the App page click Browse or the URL link.
You can also view monitoring information via the Monitor link in the options tab on
the left.
1. https://docs.microsoft.com/en-us/azure
2. https://azure.microsoft.com/en-in/get-started/
3. https://www.javatpoint.com/cloud-computing-tutorial
4. https://www.javatpoint.com/linux-directories
5. https://docs.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-portal
6. https://docs.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal
7. https://docs.microsoft.com/en-us/azure/virtual-network/quick-create-portal
8. https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction
9. https://docs.microsoft.com/en-us/learn/modules/azure-compute-fundamentals/
10. https://azure.microsoft.com/en-in/global-infrastructure/
11. https://docs.microsoft.com/en-in/azure/
12. https://docs.microsoft.com/en-us/learn/modules/network-fundamentals/
13. https://docs.microsoft.com/en-us/learn/modules/network-fundamentals/2-network-types-topologies
14. https://docs.microsoft.com/en-us/learn/modules/network-fundamentals/4-network-protocols
15. https://www.networkcomputing.com/networking/cisco-networking-basics-ip-addressing
16. https://www.cloudflare.com/learning/network-layer/internet-protocol/
17. https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/tcpip-addressing-and-subnetting
18. https://en.wikipedia.org/wiki/Transmission_Control_Protocol
19. https://www.cisco.com/c/en_in/products/security/vpn-endpoint-security-clients/what-is-vpn.html#~types-of-vpns
20. https://en.wikipedia.org/wiki/Virtual_private_network
21. https://developer.mozilla.org/en-US/docs/Web/HTTP
22. https://www.w3schools.com/whatis/whatis_http.asp
23. https://www.sqlservertutorial.net/sql-server-basics
24. https://docs.microsoft.com/en-us/azure/networking/fundamentals/networking-overview
25. https://docs.microsoft.com/en-us/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal
26. https://docs.microsoft.com/en-us/azure/app-service/tutorial-php-mysql-app?pivots=platform-windows
27. https://docs.microsoft.com/en-us/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal
28. https://docs.microsoft.com/en-us/azure/azure-sql/database/design-first-database-tutorial
29. https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal