
Research Proposal

Building Firewall application to enhance the cyber security


Name: VENKATESH REDDY MORTHALA
ID: 20184441
MSc in CyberSecurity
National College of Ireland
Abstract

Every day numerous web applications are created; with the growth of the internet, companies
increasingly reach their customers through web and mobile applications. Alongside this
advancement, however, comes a growing fear of failing to safeguard data from attackers. The Web
Application Firewall (WAF) is the defensive mechanism proposed in this project to safeguard
application data from attackers. A WAF can be placed either inline or out of band, so in this
proposal I have addressed the different ways of deploying a WAF to protect data. This project will
enable developers to build more security features into the applications they are creating by
enhancing the WAF. WAF decision-making works in two stages: detecting the attack, and then
deciding whether to allow or block the request by routing the traffic. I propose to build an
open-source WAF that can effectively manage the false positive rate and reduce the business risk
caused by attacks.

Keywords: firewall, network security, WAF, cyber security, cyber-attacks

TABLE OF CONTENTS

1. Introduction
2. Research Question
3. Literature Review
4. Research Methods and Specifications
5. Video Presentation
6. Conclusion
7. References
1. Introduction

To ensure the safe and secure processing and transmission of sensitive customer data in modern
websites, organizations are required to adopt various controls and steadily improve their
detection of threats. However, there are certain challenges to face in doing so. Firstly, one of
the main challenges in implementing these tools surfaces due to the requirement that, in order to
analyze traffic and decide whether it must be blocked, they need to be placed in the middle of
the traffic, adding latency to each request, which can become prohibitive for applications that
depend on a low-latency response [1] [2] [3]. Another challenging obstacle when deploying them is
the false positive rate, or how often these tools decide a normal user is malicious and block
their activity, which can make the adoption of these tools much harder than expected [5].

In the past couple of years there has been explosive growth in the cyber world, but this has also
resulted in a continual increase in online attacks such as breaches of privacy and money theft
during online banking [8]. Cyber-crime is illegal under the IT Act 2000 and the IPC, and numerous
methods have been designed for detecting and preventing such attacks. Listed below are some
problems of the cyber world:

 To knock down an entire computer network, network threats like the Trojan horse and
other viruses are used.
 The Trojan virus takes its name from the wooden Trojan horse that the Greeks used to
enter the city of Troy secretly during the Trojan War. Likewise, this virus is malware
that disguises itself as something assigned to perform a certain task in the system but
actually tries to access network resources.
 It can crash the entire computer system and can modify or delete data without the
user's knowledge.
 On the other hand, certain malware programs consume bandwidth only to send copies of
themselves to other systems on the network. They have the potential to modify and even
corrupt the entire computer database.

Figure 1: Concept of Firewall


1.1 Area
My area of research is building a firewall application to ensure cyber security, since network
attacks have been found to be as varied as the systems they attempt to infiltrate. Hackers keep
finding new ways to attack systems, so although firewalls already exist, cyber security is still
at stake due to the hackers' moves (Reed, 2003). This proposal shows how the application will
overcome the current drawbacks and perform well in securing the system from malicious code and
hackers.

1.2 Aim and Objective

The entire project is aimed at achieving the goals mentioned below and building the most
effective firewall for cyber security:

 To build a firewall application that enhances cyber security by leveraging suitable
technologies
 To address different ways of deploying a WAF into the system, covering both inline
and out-of-band mechanisms
 To address the drawbacks that occurred while building firewalls in previous approaches
2. Research question
My entire research is based on finding the best possible answers to the research questions
below:
 Why is a firewall important in cyber security, and what is the role of the firewall?
 What are the different firewalls available, and what are their applications?
 What factors need to be considered while building a firewall
application?
A WAF is intended to be a transparent reverse proxy that ensures at every step that all traffic
passes through it. It filters the traffic and sends it on to the destined application. The
application's identity remains hidden in this process. It monitors and parses all requests
against certain rules before they reach the application. These rules, known as policies, prepare
the WAF to make sure that malicious requests are unable to make the application vulnerable to
DDoS and other threats.
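As a minimal sketch of this filtering step, the fragment below applies a small set of
hypothetical policies (illustrative regular expressions, not production rules) to an incoming
request and decides whether to forward or block it:

```python
import re

# Hypothetical policy set: each policy is a name plus a pattern matched
# against the raw request. These patterns are illustrative only.
POLICIES = [
    ("sql_injection", re.compile(r"(?i)union\s+select|or\s+1=1")),
    ("xss", re.compile(r"(?i)<script\b")),
]

def inspect_request(raw_request):
    """Return (allowed, matched_policy). The WAF forwards the request to
    the backend application only when no policy matches."""
    for name, pattern in POLICIES:
        if pattern.search(raw_request):
            return False, name   # block: request violates a policy
    return True, None            # allow: pass through to the application

print(inspect_request("GET /products?id=42"))
print(inspect_request("GET /products?id=42 UNION SELECT password FROM users"))
```

A benign request passes unchanged, while the injection attempt is blocked and tagged with the
policy it violated.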

Although application firewalls have a lot of potential, the deployment process should be patient
and careful. When network firewalls were originally introduced to organizations, the managers who
implemented them took a careful approach to these projects, doing thorough research and tests.
When implementing a web application firewall, the same procedure should be used.

Careful testing can build trust with the company's developers responsible for building
applications. It is a lever that security managers can use to convince them that the technology
is more useful to the business than it is a hindrance to their daily work.

Once the network is prepared for the production phase, the time comes to consider establishing a
trusted firewall rule base. The following is a detailed method to create and implement an
application firewall rule base in the company:

Allow enough time for adjustments. Modern application firewalls have advanced features for
monitoring traffic and learning normal activity patterns. As time passes, the firewall is
"trained" to identify certain patterns and block them. On the other hand, firewalls that need
training for a longer time may end up with a rule base containing periodic transition cycles in
network activity.

Knowledge of the company's structure will play a pivotal role, and preparing firewalls to satisfy
the unique needs of the business will significantly increase the effectiveness of the tools. For
example, if there is only one web application in the environment that should accept file uploads,
then rules should be set to completely block PUT commands (the HTTP command for file upload)
from all other systems.
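That example policy can be sketched as a simple check, assuming a hypothetical host name for the
single application that accepts uploads:

```python
# Illustrative policy: only the hypothetical host "uploads.example.com" may
# receive PUT requests; PUT to every other system is blocked outright.
UPLOAD_HOSTS = {"uploads.example.com"}

def allow_method(host, method):
    """Return True if the request method is permitted for this host."""
    if method.upper() == "PUT":
        return host in UPLOAD_HOSTS
    return True  # non-PUT methods are outside this particular rule

print(allow_method("uploads.example.com", "PUT"))  # True
print(allow_method("www.example.com", "PUT"))      # False
```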
The test rule base usually demands a "soft start." Employing this strategy, the firewall will
comply with all its configured rules, but before entering active mode it will take some time to
evaluate the traffic that violates the firewall rules. Implementers must also adjust the false
alarm rate before production. Since programmers dislike security systems that break their
applications, this will greatly help improve the relationship with developers.

Once in active mode, the team must remain cautious. The logs created by blocked traffic tell an
important story: logs of blocked attacks show management the return on their security investment.
There may also be additional false positives, which can further help fine-tune the rule base.
3. Literature Review

Published in 1995, the book by Comer, D. E., Internetworking with TCP/IP: Principles, Protocols,
and Architecture, lucidly explains the concepts related to client-server computing. He provided a
detailed guide to the POSIX sockets standard utilized by Linux and other operating systems [3].
This is a bestselling book which covers the fundamentals. According to Google Books, "Leading
author Doug Comer covers layering and shows how all protocols in the TCP/IP suite fit into the
five-layer model." M. Abdelhaq (2014) discussed how, with a new focus on CIDR addressing, this
revision addresses MPLS and IP switching technology, traffic scheduling, VoIP, Explicit
Congestion Notification (ECN), and Selective Acknowledgement (SACK) [1]. A. Herzog (2007)
included "coverage of Voice and Video Over IP (RTP), IP coverage, a discussion of routing
architectures, examination of Internet application services such as the domain name system (DNS),
electronic mail (SMTP, MIME), file transfer and access (FTP, TFTP, NFS), remote login (TELNET,
rlogin), and network management (SNMP, MIB, ASN.1), a description of mobile IP, and private
network interconnections such as NAT and VPN" (Google Books) [2].

This book is for anyone who is interested in knowing the workings of the internet.

In 1997, Micki Krause and Harold F. Tipton published Handbook of Information Security
Management [13].

Routledge describes it as "the gold-standard reference on information security; the Information
Security Management Handbook provides an authoritative compilation of the fundamental knowledge,
skills, techniques, and tools required of today's IT security professional. Now in its sixth
edition, this 3,200-page, 4-volume stand-alone reference is organized under the CISSP Common Body
of Knowledge domains and has been updated yearly. Each annual update, the latest being Volume 6,
reflects the changes to the CBK in response to new laws and evolving technology" [13].

In 2011, Larry L. Peterson and Bruce S. Davie published the 5th edition of their book Computer
Networks: A Systems Approach, which explores all aspects of computer networking. Danny Yee in his
review writes, "In many ways Computer Networks: A Systems Approach contains just what one would
expect to find in a serious introduction to networking [15]. Peterson and Davie begin with a
quick introduction to basic networking concepts, then look at abstract protocol implementation in
the context of the x-kernel system (fragments of code from which are used throughout to
illustrate implementation issues) [15]. P. Qianwei (2002) [14] notes that they very broadly
follow the standard network layers upwards, with chapters on host and link issues (ethernets and
token rings, encoding, framing, and error detection); on packet switching and routing; on
bridges, internetworking, IP and IPv6, DNS, and multicast; on end-to-end protocols (UDP, TCP,
RPC, and performance issues); and on end-to-end data (presentation, encryption, compression).
Chapters on congestion control (especially in TCP implementations) and high-speed networking
(including real-time services and quality of service guarantees) complete the volume" (Yee).

3.1 Contribution of Researchers in this Work:

Firewalls: Basic Approaches and Limitations

C. Hunt (2010) explained that firewall technology can be employed to secure networks by strategic
installation at a single security screening station which enables the Intranet to connect to the
public Internet, thus making way for complete cyber safety [4]. G. Fox (2001) noted that this can
be further used to isolate sub-networks, providing cyber defense within the concerned
organization [5]. A firewall usually uses three services to secure a network:

1. Packet filtering
2. Circuit proxy
3. Application proxy
1. Packet filtering:

Packet filtering examines the packet header, verifying the IP address and controlling access
without making any changes. Since the operation is simple, both speed and efficiency are constant
companions of this process [6]. There is an additional advantage too: the job does not require
much of the user's attention and runs completely independently, and this does not come at the
cost of poor transparency; the transparency is on point [7].

Packets can be filtered on the basis of the IP address (source and destination) and the TCP/UDP
source and destination port. This kind of firewall is able to block connections to and from
specific hosts and networks [4]. The cost is minimal since they use resident router software, and
they ensure strong security due to their strategic placement at the choke point.
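A first-match-wins rule table of this kind can be sketched as follows; the addresses and rules
are illustrative only, and only header fields (addresses and ports) are checked, mirroring how a
packet filter works:

```python
import ipaddress

# Hypothetical rule table, evaluated in order; first match wins.
# Each rule: (action, source network, destination network, dest port or None for any)
RULES = [
    ("deny",  "10.0.0.0/8",  "0.0.0.0/0",     None),  # block spoofed internal sources
    ("allow", "0.0.0.0/0",   "192.0.2.10/32", 80),    # public web server
    ("deny",  "0.0.0.0/0",   "0.0.0.0/0",     None),  # default deny
]

def filter_packet(src_ip, dst_ip, dst_port):
    """Return 'allow' or 'deny' based purely on header fields."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for action, src_net, dst_net, port in RULES:
        if (src in ipaddress.ip_network(src_net)
                and dst in ipaddress.ip_network(dst_net)
                and (port is None or port == dst_port)):
            return action
    return "deny"  # fail closed if no rule matched

print(filter_packet("203.0.113.5", "192.0.2.10", 80))  # allow
print(filter_packet("203.0.113.5", "192.0.2.10", 22))  # deny
```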

2. Circuit Proxy:

The important difference between a circuit proxy and a packet-filtering firewall is that the
circuit proxy is the destination address to which all communicators must address their packets.
Once access has been approved [11], the circuit proxy forwards the packets on, substituting its
own address with the destination address.

3. Application Proxy

An application proxy is more complicated than a packet-filtering firewall or a circuit proxy. The
application proxy first comprehends the application data, and then it attempts to authenticate
users and judge the threat level of the data. However, this complicated yet comprehensive
function comes with a price; clients often have to be reconfigured, with a loss of transparency
[8] [9].

3.2 Packet Inspection Approach:


M. Uddin (2013) explained that this approach is in complete contrast to the processes described
so far: it involves proper inspection of the packet contents as well as their headers [12]. In
this process the firewall's inspection is performed by an inspection module that inspects the
layers from network to application. The information from all the layers is integrated into a
single point, known as the inspection point, and the full examination then takes place.

3.3 Firewall Limitations

Every best thing in the world has its limitations. Limitations do not make anything imperfect;
they just make us aware of what to use and what to avoid. C. Mahalakshmi (2012) explained that
certain limitations in firewalls do not diminish their importance, but it is a matter of
diligence to be aware of them. Here are some of the limitations of a firewall that one should be
aware of [6].

a. A firewall's security process is limited in its circumference and is unable to combat the
enemy within.
b. A firewall is not the best option for defense against certain malicious codes and viruses
such as Trojans.
c. Configuring packet filtering is not friendly to the user's knowledge and tends to be a
complicated process in the course of which errors can easily occur, leading to holes in the
defense.

3.4 Security Administration and Network

Gartner (2013) analyzed that the application level in the cyber environment has more
vulnerabilities than the operating system level. There are a number of standards organizations,
such as SANS (System Administration, Networking, and Security) and the Open Web Application
Security Project (OWASP), that keep track of newly found vulnerabilities as well as
vulnerabilities that present a higher risk to any organization adopting web applications as the
backbone of its business. These organizations file monthly and yearly reports on new and top
vulnerabilities presenting a high risk to web applications, to create awareness among people in
order to reduce the risk of being hacked as a result of such vulnerabilities (Gartner, 2013).

A1. Injection flaws:

Injection flaws (such as SQL, OS, and LDAP injection) are a type of web application vulnerability
that occurs when input from the user is sent to an interpreter without proper sanitization. This
allows an attacker to trick the interpreter into processing an unintended query or command,
thereby gaining unauthorized access or manipulating data.
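The difference between a vulnerable query and a sanitized one can be illustrated with an
in-memory SQLite database; the table and payload below are hypothetical examples:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # classic injection payload

# Vulnerable: user input concatenated straight into the query, so the
# interpreter executes the attacker's OR clause and returns every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE password = '" + user_input + "'"
).fetchall()
print(vulnerable)   # [('alice',)] -- authentication bypassed

# Sanitized: a parameterized query treats the payload as a literal string.
safe = conn.execute(
    "SELECT name FROM users WHERE password = ?", (user_input,)
).fetchall()
print(safe)         # [] -- no match, attack neutralized
```

The placeholder form is what "proper sanitization" means in practice: the input never reaches the
interpreter as SQL text.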

A2. Broken Authentication and Session Management:


Broken authentication and session management occur when the functions associated with
authentication and session management are not well implemented. This allows an attacker to
compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume
other users' identities.

3.5 Background of SQLI attacks

Bau et al. (2010) explained that most database-driven applications require users to log into the
application in order to access the information stored in the information system. By logging into
the system, users can have full access to their own information but no access, or only limited
access, to others' information, depending on the purpose of the application. However, because of
the dynamic nature of SQL queries and the knowledge some people possess about how communication
between the application layer and the database layer is constructed, it became possible for them
to manipulate this communication to gain unauthorized access to the system, bypass authentication
mechanisms, and perform unauthorized data manipulation on the backend database through injection
parameters (inputs provided to the user to make requests to the application, such as forms)
without logging into the system or without proper login credentials. This is possible because the
developer of the system trusts the end users and does not consider the security threat when
developing the query which handles the user's request, processes it, and sends a response back to
the user. A query that accepts whatever input is provided by the user and sends it to the backend
database of the application for processing, without a proper security check, is called a
vulnerable query and can be subjected to a SQL injection attack. More details on how SQL
injection attacks are discovered are described in the sections below.

Amirtahmasebi (2009) explained that SQL Injection Attacks (SQLIA) can be classified based on what
the attacker is trying to achieve or intends to do: identifying injection parameters, database
fingerprinting, identifying the database schema, extracting data, executing remote code,
performing privilege escalation, and bypassing authentication [23].

3.6 Identifying Injectable Parameters

Halfond and Orso (2007) analyzed that injectable parameters are text inputs that allow users to
request information from the database. This query request is sent to the database server through
an HTTP request; for example, URLs, search boxes, and authentication entries are considered text
inputs. When these text inputs send user requests to the database without proper validation, they
are considered injectable parameters, which allow attackers to inject a SQL query attack.
Identifying injection parameters is the first step in performing an attack [24].
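One common defense is to validate each text input against the strict format the endpoint actually
expects; the sketch below assumes a hypothetical endpoint that takes a numeric product id:

```python
import re

# Assumption for illustration: this endpoint's parameter must be a short
# numeric id, so anything else is treated as a potential injection attempt.
PRODUCT_ID = re.compile(r"\d{1,10}")

def is_safe_product_id(value):
    """True only if the whole value matches the expected numeric format."""
    return bool(PRODUCT_ID.fullmatch(value))

print(is_safe_product_id("42"))                    # True
print(is_safe_product_id("42; DROP TABLE users"))  # False
```

Validation of this kind turns an injectable parameter into one that only ever carries data of the
expected shape.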

3.7 Performing database fingerprinting

After identifying the injection parameters, the second step is to learn the database engine type
and version. Knowing this is very important to an attacker because it enables him to construct
the query format supported by that database engine and exploit the default vulnerabilities
associated with that version, as every database engine employs a different proprietary SQL
language dialect.

Determining the database schema: the schema is the database structure. It includes table names,
relationships, and column names. Knowing this information about the database makes it easier for
an attacker to construct an attack to extract data or manipulate it.

Database manipulation and extraction: most attackers aim to gain access to sensitive information,
such as a secret formula or employees' bank details, or to change, for instance, a colleague's
salary.

3.8 SQLI Vulnerability Detection Approaches

Djuric, Z. (2013) noted that disconnecting the enterprise from the internet is not a reasonable
option to prevent SQLI attacks. In order to minimize the likelihood of a successful SQLIA on a
web application, researchers have proposed different approaches that enable the security
administrator to identify the source of vulnerabilities and address them before they are
exploited by a potential attacker. These approaches can be categorized into two types: static and
dynamic [25].

4. Methodology

In this section I have discussed the components that need to be considered while building the
firewall application. There are also different methods and ways by which the firewall can be
deployed into the system, and I have focused on addressing all of them. In the main project,
considering all the components and factors mentioned below, I will focus on building the firewall
application.

Web Application Firewall


One of the tools used to protect websites from application attacks is called a Web Application
Firewall (WAF). This is an application firewall for HTTP applications which applies a set of
rules to an HTTP conversation [18]. Generally, these rules cover common attacks such as
Cross-site Scripting (XSS) and SQL Injection. WAFs are usually deployed in one of the following
architectures:

Figure 2: Architecture of WAF

Inline

In inline mode, a WAF appliance is placed in the middle of the traffic between a user and a web
application, allowing it to inspect and block attacks in a manner transparent to the web servers
[20].

Figure 3: Inline WAF flow

Out of band

In this mode the WAF has the ability to inspect the traffic sent to the web server but is unable
to react to it, since it only sees a copy of the traffic [21] [20].
Figure 4: Out-of-band WAF flow

Agent

In agent mode, WAF software is placed on the web server itself, imitating an inline mode without
a dedicated hardware setup [17]. While this allows easier network placement, it can be more
invasive on the deployment environment and lead to less efficient resource allocation for web
servers [18].

Cloud

When using a cloud provider as a WAF solution, web servers can benefit from a simple setup
and what would seem like unlimited scaling capacity. This can be achieved by allowing a third
party to be placed in the middle of all traffic between web servers and their customers. The
drawback of such a setup would be an increased latency incurred due to the traffic going through
the cloud provider (which can be reduced if it’s the same provider used for the web application)
and the risk of having the data go through a third party [1] [6].

Figure 5: WAF cloud mechanism

Typical problems with WAF setups

Network placement

As mentioned with the out-of-band and inline architectures, depending on the scale of the
organization, the choice of network placement strategy can become a challenging part of
installing a WAF solution [4]. In environments where multiple datacenters host the applications
that need protection, ensuring complete WAF coverage (either placed inline or always capturing
copies of the traffic) can become a daunting task. This can lead to inefficient use of resources
or a heavy investment in network changes to accommodate the WAF solution [3].

False positive rate

One of the biggest problems with WAF solutions acting in blocking mode is the false positive
rate they can generate. The false positive rate can indicate how many customers are being
blocked while not actually performing an attack, which can translate into financial loss and
reputational damage [5]. One of the major reasons why WAF solutions are not placed in
blocking mode is due to the false positive rate affecting enough customers that security teams are
forced to stop blocking attacks, turning the solution into simply a visibility tool without the
ability to stop attacks [6].

Latency added

Given that the WAF needs to inspect the traffic and decide whether it should be blocked, latency
is added by WAF applications. Depending on the network placement, and how costly the analysis
operations they perform might be, this can become prohibitive for applications that depend on
low-latency responses to function or engage their users [13].

Blocking web application attacks

To take advantage of this model, we need to be able to efficiently decide what traffic to block.
This will minimize the latency incurred on regular users and remove the possibility of false
positives affecting them [14]. To achieve that we will need to decide what traffic should be
placed in an ‘inline’ mode in the following ways:

Traffic routing

Traffic routing is the way by which we decide which portions of the traffic should work in inline
mode, having every single request analyzed by the WAF and blocked if it is considered a web
application attack [5]. This allows applications that have a low tolerance for latency to have
their traffic inspected and only adds latency when a threat is detected and malicious requests
need to be blocked. This can be achieved in the following ways, either by a human or by an
automatic decision-making process:

Fingerprint based routing

By analyzing the traffic automatically in the log processing component, we can extract
fingerprints that are performing web application attacks and have only those go through the WAF
service (adding latency to them). These fingerprints would be extracted from a combination of
parts of the request (IP address, client ID, User-Agent, or combinations) or from particular
fingerprints that might be automatically or manually added (as for 0-day fingerprints or known
attack patterns) [8]. While the log processing component would automatically create these
fingerprints and add them to the state store component, so that the agent is aware that the
traffic must be routed to the WAF service, this can also be triggered by an analyst who decides
that a particular fingerprint needs to be routed through the WAF [9].
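A rough sketch of this flow, with the fingerprint attributes (IP plus User-Agent) and the
in-memory state store chosen as simplifying assumptions:

```python
import hashlib

def fingerprint(ip, user_agent):
    """Derive a stable fingerprint from selected request attributes."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()[:16]

# Stand-in for the state store component shared between log processor and agent.
state_store = set()

# The log processing component (or an analyst) flags an attacking fingerprint...
state_store.add(fingerprint("203.0.113.5", "sqlmap/1.7"))

# ...and the agent consults the store on each request to choose the path.
def route(ip, user_agent):
    return "waf" if fingerprint(ip, user_agent) in state_store else "direct"

print(route("203.0.113.5", "sqlmap/1.7"))    # waf
print(route("198.51.100.9", "Mozilla/5.0"))  # direct
```

Only flagged fingerprints pay the inline-inspection latency; everyone else goes straight to the
web server.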

Net block-based routing

Another option for routing traffic is based on network blocks. This means that particular ISPs,
hosting providers, or other known services that have a higher risk of attacks coming from them
(such as open proxies or anonymity networks like Tor) can be routed by default to the WAF
service. This happens by updating the state store with the IP address net blocks for these
providers or members of the networks, so that the agent is aware that it must route such traffic
to the WAF [14].
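A minimal sketch of this check using Python's ipaddress module; the net blocks below are
hypothetical placeholders for real high-risk provider ranges:

```python
import ipaddress

# Hypothetical high-risk net blocks (e.g. open-proxy ranges or Tor exit
# netblocks) that an updater job would push into the state store.
HIGH_RISK_NETS = [ipaddress.ip_network(n)
                  for n in ("198.51.100.0/24", "203.0.113.0/24")]

def route_by_netblock(client_ip):
    ip = ipaddress.ip_address(client_ip)
    if any(ip in net for net in HIGH_RISK_NETS):
        return "waf"     # inspect every request from risky providers inline
    return "direct"      # normal traffic skips the inline WAF

print(route_by_netblock("198.51.100.77"))  # waf
print(route_by_netblock("192.0.2.1"))      # direct
```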

Virtual patching

For situations where a vulnerability is known to exist on particular endpoints, or where these
endpoints have a higher degree of risk and need to have the WAF service inspecting every single
call (not only when a threat is detected), we can enable virtual patching. This means that
particular endpoints are always routed to the WAF service for analysis of requests coming to
them, either for the full endpoint or a combination of parameters that might be vulnerable to
attacks [15].
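A sketch of a virtual patching table, with hypothetical endpoint and parameter names, routing
either every call to an endpoint or only calls carrying a watched parameter:

```python
# Hypothetical virtual patches: None means inspect every call to the
# endpoint; a set means inspect only when one of those parameters appears.
VIRTUAL_PATCHES = {
    "/api/export": None,   # known-vulnerable endpoint: inspect all calls
    "/search": {"q"},      # inspect only requests carrying parameter "q"
}

def needs_inspection(path, params):
    """Return True if this request must always be routed to the WAF."""
    if path not in VIRTUAL_PATCHES:
        return False
    watched = VIRTUAL_PATCHES[path]
    return watched is None or bool(watched & params)

print(needs_inspection("/api/export", set()))      # True
print(needs_inspection("/search", {"q", "page"}))  # True
print(needs_inspection("/search", {"page"}))       # False
```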

False positive rate management

To find a way to avoid blocking users who might not be performing web application attacks, we
first need to differentiate between a false positive in the detection process (DFP) and a false
positive in the blocking process (BFP). While we will accept false positives in the detection
process (which means thinking a particular request might be a web application attack when it is
not), we will focus on not having false positives in the blocking process (affecting those
requests and blocking a normal user from accessing resources on the web server). An important
point to take into account is that while it is not possible to remove the probability of false
positives entirely, different applications might want to balance what false positive rate they
are willing to accept against the risk of increasing the likelihood of attacks succeeding [3].
Given that the false positive rate can be decreased by getting more context around a request, the
lower the false positive rate we want, the more context we might require, which also means more
time is given to an attack to attempt to exploit the web application [6] [7]. Any application
will need to balance its threat model against the impact of affecting users to find the right
false positive rate to accept. This section will focus on how to avoid blocking users' requests
by taking advantage of the information we have available, before deciding which traffic needs to
be routed to the web server.

Business Logic Analysis


By looking at the business logic of the application, we can get information relevant to changing
the probability of the request being part of an attack or not. This can involve the trust we can
place in some request attributes, such as the IP address or the client, what business activity
has happened for the user in a period of time, and what impact we would have by blocking them.
Leveraging these data points can allow us to reduce the impact of incorrectly blocking accounts
and routing traffic through the WAF service [11] [16].

Historical Analysis

Looking at the history of the requests and their fingerprints can help in understanding the risk
of a request and its possible impact. This allows us to compare the request against other
requests which were similar, or to see the rate of DFPs (detection false positives) for that
particular endpoint or type of request [21] [20].

Context Analysis

The context analysis of a request will be a key part of avoiding BFPs (blocking false positives)
even if we have a high number of DFPs. This is possible because most web application attacks
require many requests in order to find an actual vulnerability in the web application [8]. By
looking at the context of a request in terms of how many requests have been marked as a possible
attack, we can specify a particular false positive rate we are willing to accept and only place
the suspicious traffic in inline mode for the WAF service if the context matches our desired BFP
[9].

An example of setting the desired BFP

Consider a situation where we have an application that wants a BFP of 0.00001% (one request
incorrectly blocked for every 10 million) but we have a DFP of 0.1% (one request incorrectly
considered an attack for every thousand). Given that the probability of a DFP is independent
across requests, by only placing traffic in inline mode once a score of 5 requests marked as DFP
is reached, we can guarantee the BFP of 0.00001%. Depending on the type of attack and the threat
model of the application, the BFP and its related score can be modified to get the best possible
balance between security and impact to users [9].
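This compounding can be checked directly: assuming independence, the probability that a benign
client is flagged on every one of `score` requests is the per-request DFP raised to the score.

```python
# Worked check of the example above.
dfp = 0.001          # 0.1% detection false positive rate per request
target_bfp = 1e-7    # desired 0.00001% blocking false positive rate

score = 5
bfp = dfp ** score   # probability a benign client trips the detector 5 times

print(bfp)  # ~1e-15, comfortably below the 1e-7 target
```

With a score of 5 the compounded rate is around 1e-15, which more than satisfies the 1e-7 goal;
a smaller score could be chosen if the application tolerates a BFP closer to the target.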

4.1 Research plan and approach

Background: Understanding the application and the WAF context in relation to it

The first step in firewall building is planning. With the increasing complexity and dynamism of
web security in today's growing world, traditional or legacy techniques and a single open-source
web application firewall will not do. It is therefore important to comprehend what the objectives
are and where the WAF will fit into the security solution, and, accordingly, to tailor security
to the needs of users. Based on the kind of firewall needed plus budget constraints, a decision
must be taken on the deployment of the firewall: cloud WAF, hardware, or software.

The other decision that plays an important role in this process is whether to create the WAF from
scratch or apply an existing one as part of a complete, diligent solution. In both cases, the
following steps will be performed.

Security models followed by web application firewalls -

The blacklist / negative model allows traffic while monitoring for and blocking all requests that
are malicious in nature. The WAF must be constantly engaged in behavioral learning, otherwise it
will become less effective.

The whitelist / positive model, in which only pre-approved requests/traffic are allowed and
everything else is blocked. This can lead to high false positives, and therefore regular and
continuous tuning and configuration will be essential for this model to function properly.

A hybrid security model combines the positive and negative models, reducing the
disadvantages of each and improving the protection of the web application. Choosing the
right model is very important, and the user should keep the context of their needs in mind.

The chosen model provides a starting point for implementation, and given the dynamic nature
of web applications, a hybrid protection model should be the starting point for any serious
transactional web application. Once the model is chosen, the real value of the WAF comes
from knowing how to configure it.
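The allow/block/analyze flow of the hybrid model can be sketched in a few lines of Python. The rule patterns and the `inspect` helper are illustrative assumptions, not rules from any real WAF ruleset:

```python
import re

# Positive model: explicitly trusted request shapes pass immediately.
ALLOW_RULES = [re.compile(r"^/static/[\w./-]+$")]
# Negative model: known-malicious patterns are rejected outright.
BLOCK_RULES = [re.compile(r"(?i)union\s+select"),
               re.compile(r"<script\b")]

def inspect(path: str, body: str) -> str:
    if any(rule.match(path) for rule in ALLOW_RULES):
        return "allow"    # trusted by the positive model
    if any(rule.search(path + body) for rule in BLOCK_RULES):
        return "block"    # caught by the negative model
    return "analyze"      # defer to scoring / out-of-band analysis

print(inspect("/static/app.css", ""))                   # → allow
print(inspect("/search", "q=1 UNION SELECT password"))  # → block
print(inspect("/search", "q=shoes"))                    # → analyze
```

The third outcome is what makes the model hybrid: ambiguous traffic is neither trusted nor dropped but handed to the slower analysis path.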

Creating and configuring WAF policies

Developers creating or operating any web service have to follow certain application firewall
policies. While building the application firewall, I plan to configure the default policies,
starting from a fundamental set of rules. The policy will cover pattern analysis, traffic
analysis, and the OWASP catalogue of known threats.

After adopting the default policies, they need to be customized. I plan to use custom
strategies, ensuring they match the context and needs I identified during the planning
phase. For instance, I will not be serving some specific countries and regions, so requests
from those regions can be restricted. I will make sure that the WAF policy is adjusted
continuously so that the business strategy is not disrupted.
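A custom geo-restriction policy of the kind described above might look like the following sketch. The country codes and the lookup table are assumptions for the example; a real deployment would resolve the client IP against a GeoIP database such as MaxMind's:

```python
# Hypothetical ISO country codes for regions the application does not serve.
BLOCKED_REGIONS = {"XX", "YY"}

def geoip_lookup(ip: str) -> str:
    # Stand-in for a real GeoIP lookup; hardcoded for the demo.
    demo_db = {"203.0.113.5": "XX", "198.51.100.7": "IE"}
    return demo_db.get(ip, "ZZ")  # unknown IPs map to a placeholder code

def waf_geo_policy(client_ip: str) -> str:
    country = geoip_lookup(client_ip)
    return "block" if country in BLOCKED_REGIONS else "allow"

print(waf_geo_policy("203.0.113.5"))   # → block (in a blocked region)
print(waf_geo_policy("198.51.100.7"))  # → allow
```

Keeping the blocked-region set in configuration, rather than in code, makes it easy to adjust the policy as the business strategy evolves.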

I will ensure that security logging is enabled and that WAF analytics are accessible to
security professionals, so they can look for any risk likely to arise while operating the
WAF. This also enables security professionals to monitor and manage security; I believe a
human should be in the loop to identify the errors that the machine is incapable of finding.
Most importantly, I will ensure that the WAF policies provide defense against attacks on the
existing application.
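The decision logging described above can be built on Python's standard logging module; the field names and rule IDs below are illustrative assumptions, not the schema of any particular WAF:

```python
import logging

waf_log = logging.getLogger("waf.audit")
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(name)s %(levelname)s %(message)s")

def format_decision(client_ip: str, path: str,
                    decision: str, rule_id: str) -> str:
    # One structured key=value line per decision so an analyst can filter,
    # review false positives, and tune the offending rule.
    return f"ip={client_ip} path={path} decision={decision} rule={rule_id}"

def log_decision(client_ip: str, path: str,
                 decision: str, rule_id: str) -> None:
    waf_log.info(format_decision(client_ip, path, decision, rule_id))

log_decision("203.0.113.5", "/login", "block", "sqli-001")
```

Structured key=value fields are what make the log useful to a human reviewer: they can be grepped or fed into an analytics pipeline without fragile free-text parsing.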

Make WAF Smart with AI and ML

By continuously checking WAF policy updates against existing risks, I will exercise the
security-testing flows in my application and develop an understanding of the attacks
happening on my web site, based on which I will be able to create a more effective web
application firewall. It will continuously learn from the history of past attacks and from
global threat intelligence, mapping them to the security risks of my existing applications,
which will allow me to reduce risk more precisely.
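As a toy illustration of learning from attack history (a frequency model, not a production ML pipeline), one could count the tokens seen in previously confirmed malicious requests and score new requests by how many of those tokens they reuse. All names and sample payloads below are my own:

```python
from collections import Counter

# Requests previously confirmed malicious (illustrative samples).
attack_history = [
    "id=1 UNION SELECT password FROM users",
    "q=<script>alert(1)</script>",
    "id=2 UNION SELECT * FROM admin",
]

# Learn which tokens recur in historical attacks.
token_counts = Counter(tok.lower()
                       for req in attack_history
                       for tok in req.split())

def risk_score(request: str) -> int:
    # Higher score = more tokens shared with historical attacks.
    return sum(token_counts[tok.lower()] for tok in request.split())

print(risk_score("name=alice"))                     # → 0, nothing in common
print(risk_score("id=9 UNION SELECT * FROM logs"))  # → 7, reuses attack tokens
```

A real system would replace the token counter with a proper classifier fed by global threat intelligence, but the workflow is the same: accumulate labelled history, derive a model, and score live traffic against it.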

5. Link to Video Presentation

https://youtu.be/AXR7Xv9Zy20

6. Conclusion

To solve the latency and BFP problems that normal WAF setups introduce, this paper describes
a strategy that mixes the in-line and out-of-band modes in a hybrid architecture that can
dynamically choose which parts of the traffic should be placed in in-line mode and which
should continue in out-of-band analysis mode. Leveraging this architecture not only avoids
affecting users with latency and incorrect blocking, but also improves response capabilities
by allowing multiple components to be placed as plugins of the WAF service, working in
parallel to analyze traffic and make decisions. By leveraging open-source components, any
organization can implement such an architecture to improve its ability to protect its users
and improve their experience on its platform.

In this rapidly changing world, where technology is merging with everyday life, the
importance of this paper lies in making people aware of the most crucial thing in the cyber
world: safety. It is no rocket science to understand that we do not surround technology;
technology surrounds us. Data is the most valuable thing, and we are all aware of
cyber-attacks that steal our data and money, and of how vulnerable we are to technology.
This proposal attempts to outline some ways to stay safe, and this paper provides a detailed
study of some aspects of cyber safety so that the reader can take wise steps. I have
approached this from a technical point of view because I believe the technicalities should
be exposed to the layman; however, I have refrained from using technological jargon because
my aim is to reach a wide audience rather than a niche one.

I have also included a literature review. Some might question how technical books can count
as literature, but the question itself is flawed. There is also a section where I mention
the cons of the firewall concept. This might feel contradictory, since on the one hand I
praise it while on the other I list its cons, but that is the point: no technology is
perfect, and that does not take away from its importance. My objective is always to show
both sides of the same coin, and I have tried to do that in this paper as well.

7. Reference

[1] M. Abdelhaq, R. Alsaqour, M. Al-Hubaishi, T. Alahdal and M. Uddin. 2014. The Impact of
Resource Consumption Attack on Mobile Ad-hoc Network Routing. International Journal of
Network Security. 16: 399-404.

[2] A. Herzog and N. Shahmehri. 2007. Usability and security of personal firewalls. In New
Approaches for Security, Privacy and Trust in Complex Environments, Ed: Springer. pp. 37-48.

[3] Comer, D. E. 1995. Internetworking with TCP/IP: principles, protocols, and architecture.

[4] C. Hunt. 2010. TCP/IP network administration. O'Reilly.

[5] G. Fox. 2001. Peer-to-peer networks. Computing in Science and Engineering. 3: 75-77.

[6] C. Mahalakshmi and M. Ramaswamy. 2012. Data transfer strategy for multiple destination
nodes in virtual private networks. Journal of Engineering & Applied Sciences. 7: 1372-1378.

[8] D. B. Chapman. 1992. Network (in) security through IP packet filtering. In: Proceedings of
the 3rd UNIX Security Symposium.

[9] J.-Y. Le Boudec. 1992. The asynchronous transfer mode: a tutorial. Computer Networks and
ISDN systems. 24: 279-309.
[10] J. G. Andrews, A. Ghosh and R. Muhamed. 2007. Fundamentals of WiMAX:
understanding broadband wireless networking: Pearson Education.

[11] M. Uddin, A. A. Rehman, N. Uddin, J. Memon, R. Alsaqour and S. Kazi. 2013. Signature-
based Multi-Layer Distributed Intrusion Detection System using Mobile Agents. International
Journal of Network Security. 15: 79-87.

[12] M. Uddin, R. Alsaqour and M. Abdelhaq. 2013. Intrusion Detection System to Detect DDoS
Attack in Gnutella Hybrid P2P Network. Indian Journal of Science and Technology. 6: 71-83.

[13] Krause, M. (Ed.). (2006). Information Security Management Handbook on CD-ROM (Vol.
27). CRC Press.

[14] P. Qianwei. 2002. The crisis of and safeguard for network information environment [J].
Researches in Library Science. 5: 017.

[15] Peterson, L. L., & Davie, B. S. (2007). Computer networks: a systems approach. Elsevier.

[16] R. C. Summers. 1997. Secure computing: threats and safeguards: McGraw-Hill, Inc.

[17] C.-X. Qi and Q.-D. Du. 2009. A Smart IVR system based on application gateways. In
Hybrid Intelligent Systems. HIS'09. 9th International Conference on. pp. 110-115.

[18] S. Jajodia, S. Noel and B. O’Berry. 2005. Topological analysis of network attack
vulnerability. In Managing Cyber Threats, Ed: Springer. pp. 247-266.

[19] W. R. Cheswick, S. M. Bellovin and A. D. Rubin. 2003. Firewalls and Internet security:
repelling the wily hacker. Addison-Wesley Longman Publishing Co., Inc.
[20] Yee, D. (1999). Development, ethical trading and free software.

[21] Z.-L. Zhang, Y. Wang, D. H. Du and D. Shu. 2000. Video staging: a proxy-server-based
approach to end-to-end video delivery over wide-area networks. IEEE/ACM Transactions on
Networking (TON). 8: 429-442.

[22] Messmer, E., 2013. Gartner: Cloud-based security as a service set to take off. Network
World.

[23] Amirtahmasebi, K., Jalalinia, S. R., & Khadem, S. (2009, November). A survey of SQL
injection defense mechanisms. In 2009 International Conference for Internet Technology and
Secured Transactions (ICITST) (pp. 1-8). IEEE.

[24] Halfond, W. G., & Orso, A. (2007, September). Improving test case generation for web
applications using automated interface discovery. In Proceedings of the 6th joint meeting of
the European software engineering conference and the ACM SIGSOFT symposium on The
foundations of software engineering (pp. 145-154).

[25] Djuric, Z. (2013, September). A black-box testing tool for detecting SQL injection
vulnerabilities. In 2013 Second international conference on informatics & applications (ICIA)
(pp. 216-221). IEEE.
