
CISSP Domain 1 Security & Risk Management Detailed Notes

Code: CISSPD1SRMN

Version: 2

Date of version: 1/12/2018

Created by: AL Nafi Content Writer

Approved by: Nafi Content reviewers

Confidentiality level: For members only


Al Nafi Nafi Members Only

Change history
Date Version Created by Description of change

1/12/2018 1 Nafi Edu Dept AL Nafi mentors created the first set of notes.

2/5/2019 2 Nafi Edu Dept AL Nafi mentors revised and updated the notes (version 2).


Table of contents

1. PURPOSE, SCOPE AND USERS
2. UNDERSTAND AND APPLY CONCEPTS OF (CIA) CONFIDENTIALITY, INTEGRITY AND AVAILABILITY
2.1. CIA TRIAD EXPLANATION
3. EVALUATE AND APPLY SECURITY GOVERNANCE PRINCIPLES
3.1. EXECUTIVE MANAGEMENT
3.2. GOVERNING BODY
3.3. GOVERNANCE OF INFORMATION SECURITY
3.4. STAKEHOLDER
4. SECURITY GOVERNANCE DEFINITION
4.1. PRINCIPLE 1: ESTABLISH ORGANIZATION-WIDE INFORMATION SECURITY
4.2. PRINCIPLE 2: ADOPT A RISK-BASED APPROACH
4.3. PRINCIPLE 3: SET THE DIRECTION OF INVESTMENT DECISIONS
4.4. PRINCIPLE 4: ENSURE CONFORMANCE WITH INTERNAL AND EXTERNAL REQUIREMENTS
4.5. PRINCIPLE 5: FOSTER A SECURITY-POSITIVE ENVIRONMENT
4.6. PRINCIPLE 6: REVIEW PERFORMANCE IN RELATION TO BUSINESS OUTCOMES
5. ALIGNING THE SECURITY FUNCTION TO THE ORGANIZATION BUSINESS STRATEGY, GOALS, MISSION AND OBJECTIVES
6. ORGANIZATIONAL PROCESSES AND THEIR IMPACT TO SECURITY
7. ADDED DEFINITIONS
7.1. ACQUISITION
7.2. MERGER
7.3. DIVESTITURE
8. ORGANIZATIONAL ROLES AND RESPONSIBILITIES
8.1. SENIOR MANAGEMENT
8.2. SECURITY MANAGER/SECURITY OFFICER/SECURITY DIRECTOR
8.3. SECURITY PERSONNEL
8.4. ADMINISTRATORS/TECHNICIANS
8.5. USERS
9. SECURITY CONTROL FRAMEWORKS
9.1. ISO 27001/27002
9.2. COBIT
9.3. ITIL
9.4. RMF
9.5. CSA STAR
9.6. DUE CARE/DUE DILIGENCE
10. INFORMATION TECHNOLOGY — SECURITY TECHNIQUES — INFORMATION SECURITY RISK MANAGEMENT
10.1. SCOPE OF ISO 27005 WHICH IS USED FOR SECURITY RISK MANAGEMENT


11. TERMS AND DEFINITIONS FOR SECURITY RISK MANAGEMENT
11.1. WHAT DOES CONSEQUENCE MEAN?
11.2. WHAT DOES CONTROL MEAN?
11.3. WHAT DOES EVENT MEAN?
11.4. WHAT DOES EXTERNAL CONTEXT MEAN?
11.5. WHAT DOES INTERNAL CONTEXT MEAN?
11.6. WHAT DOES LEVEL OF RISK MEAN?
11.7. LIKELIHOOD
11.8. RESIDUAL RISK
11.9. RISK
11.10. RISK ANALYSIS
11.11. RISK ASSESSMENT
11.12. RISK COMMUNICATION AND CONSULTATION
11.13. RISK CRITERIA
11.14. RISK EVALUATION
11.15. RISK IDENTIFICATION
11.16. RISK MANAGEMENT
11.17. RISK TREATMENT
11.18. STAKEHOLDER
12. RISK MANAGEMENT APPROACH AS PER ISO 27005 STANDARD
13. SECURITY CONTROLS
14. TRADITIONAL MODEL
15. APPLICABLE TYPES OF CONTROLS
15.1. TECHNICAL/LOGICAL CONTROLS
15.2. PHYSICAL CONTROLS
15.3. ADMINISTRATIVE CONTROLS
16. SECURITY CONTROL CATEGORIES
16.1. DIRECTIVE
16.2. DETERRENT
16.3. PREVENTATIVE
16.4. COMPENSATING
16.5. DETECTIVE
16.6. CORRECTIVE
16.7. RECOVERY
17. MONITORING AND MEASUREMENT
18. UNDERSTAND AND APPLY THREAT MODELING CONCEPTS AND METHODOLOGIES
19. MINIMUM SECURITY REQUIREMENTS
20. SERVICE LEVEL REQUIREMENTS
21. CONTRACTUAL, LEGAL, INDUSTRY STANDARDS, AND REGULATORY REQUIREMENTS
22. CONTRACTUAL MANDATES
23. LEGAL STANDARDS
24. COMMON PRIVACY LAW TENETS


25. CYBER CRIMES AND DATA BREACHES
26. IMPORT/EXPORT CONTROLS
27. PRIVACY TERMS
28. POLICY
29. STANDARDS
30. PROCEDURES
31. GUIDELINES
32. BUSINESS CONTINUITY REQUIREMENTS
33. DEVELOP AND DOCUMENT SCOPE AND PLAN
34. BUSINESS IMPACT ANALYSIS (BIA)


1. Purpose, scope and users


The purpose of this document is to cover CISSP Domain 1 for Nafi members only. Please be honest with yourself and with Al Nafi and do not share this document with anyone else. Everyone can join Al Nafi, as our fees are very economical to begin with.

These notes cover all the key areas of Domain 1 and remain valid until a new revision of the CISSP syllabus is released by ISC2. The revision cycle is normally around three years, so since the last revision was in June 2018, the next update to the CISSP syllabus is expected around June 2021.

Please follow the following five-step program if you want to master this CISSP domain and pass the exam inshAllah.

1. Watch all the CISSP videos on the portal 8-10 times. Soak in Brother Faisal's words as he teaches and try to ask questions. Think like a security manager who does everything with due care and due diligence.
2. Read all the presentation slides and the detailed notes at least 8-10 times. Pay attention to the additional reading material recommended by Brother Faisal during his videos.
3. Practice all the flash cards multiple times on our website.
4. Practice all the MCQs on our website. (Those who score 85% in 10 out of 15 tests [the last 10 sets are counted towards exam payment by Al Nafi] will have their exam fee paid by Al Nafi inshAllah. Please give dawah to 50 people to join Al Nafi if they want to study inshAllah.)
5. Go for the CISSP exam once you are approved by Al Nafi and your examination fee is paid.

2. Understand and apply concepts of (CIA) confidentiality, integrity and availability

2.1. CIA Triad explanation:

As a security professional or a trainee, you need to know the CIA triad (confidentiality, integrity and availability) down to its core.

We will focus on these three key principles, referred to from here on as the CIA triad. In the field of information security we have assets, which can be tangible (something you can touch), for example your organization's computers, servers, employees and data, or intangible (something you cannot touch), for example your reputation, your market share, or your share or stock value. All of these assets require security, and the first thing you will do as a security practitioner is use the CIA principles to assess what level of security you need to apply according to your business, security and compliance requirements. This is true for data stored in any form, be it electronic or printed hardcopy. It also applies to any systems, mechanisms or techniques used to process, manipulate, store or transmit that data.

For CIA examples please review the presentation notes and video files.

Throughout the course the CIA triad will be used extensively so please brush up your concepts as they
relate to Confidentiality, integrity and availability.
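
As a quick illustration of how a practitioner might record the outcome of such a CIA assessment, here is a minimal Python sketch; the asset names and the Low/Medium/High scale are assumed examples, not taken from these notes.

# Illustrative sketch only: recording CIA requirement ratings per asset.
# Asset names and the Low/Medium/High scale are assumed example values.

RATINGS = ("Low", "Medium", "High")

assets = {
    # asset name: (confidentiality, integrity, availability)
    "Customer database": ("High", "High", "Medium"),
    "Public website": ("Low", "High", "High"),
    "Payroll records": ("High", "High", "Low"),
}

def highest_requirement(cia_ratings):
    """Return the most demanding of the three CIA ratings for an asset."""
    return max(cia_ratings, key=RATINGS.index)

for name, (c, i, a) in assets.items():
    print(f"{name}: C={c}, I={i}, A={a} -> drives at least "
          f"{highest_requirement((c, i, a))} protection")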


3. Evaluate and apply security governance principles


3.1. Executive management

Person or group of people who have delegated responsibility from the governing body for
implementation of strategies and policies to accomplish the purpose of the organization.

NOTE 1 Executive management forms part of top management. For clarity of roles, these notes distinguish between two groups within top management: the governing body and executive management.

NOTE 2 Executive management can include Chief Executive Officers (CEOs), Heads of Government
Organizations, Chief Financial Officers (CFOs), Chief Operating Officers (COOs), Chief Information
Officers (CIOs), Chief Information Security Officers (CISOs), and like roles.

3.2. Governing body

Person or group of people who are accountable for the performance and conformance/compliance of
the organization

NOTE governing body forms part of top management: For clarity of roles, this standard distinguishes
between two groups within top management: the governing body and executive management.

3.3. Governance of information security

System by which an organization’s information security activities are directed and controlled


3.4. Stakeholder

Any person or organization that can affect, be affected by, or perceive themselves to be affected by
an activity of the organization.

NOTE a decision maker can be a stakeholder.

4. Security Governance definition


Security Governance describes the principles and processes that, together, form the governance of
information security. Governance principles of information security are accepted rules for governance
action or conduct that act as a guide for the implementation of governance. A governance process for
information security describes a series of tasks enabling the governance of information security and
their interrelationships. It also shows a relationship between governance and the management of
information security.

Meeting the needs of stakeholders and delivering value to each of them is integral to the success of
information security in the long term. To achieve the governance objective of aligning information
security closely with the goals of the business and to deliver value to stakeholders, this sub-clause sets
out six action-oriented principles.

The principles provide a good foundation for the implementation of governance processes for
information security. The statement of each principle refers to what should happen, but does not
prescribe how, when or by whom the principles would be implemented because these aspects are
dependent on the nature of the organization implementing the principles. The governing body should
require that these principles be applied and appoint someone with responsibility, accountability, and
authority to implement them.

4.1. Principle 1: Establish organization-wide information security

Governance of information security should ensure that information security activities are
comprehensive and integrated. Information security should be handled at an organizational level with
decision-making taking into account business, information security, and all other relevant aspects.
Activities concerning physical and logical security should be closely coordinated.

To establish organization-wide security, responsibility and accountability for information security should be established across the full span of an organization's activities. This regularly extends beyond the generally perceived 'borders' of the organization, e.g. with information being stored or transferred by external parties.

4.2. Principle 2: Adopt a risk-based approach

Governance of information security should be based on risk-based decisions. Determining how much
security is acceptable should be based upon the risk appetite of an organization, including loss of
competitive advantage, compliance and liability risks, operational disruptions, reputational harm, and
financial loss.

To adopt an information risk management approach appropriate to the organization, it should be consistent and integrated with the organization's overall risk management approach. Acceptable levels of information
security should be defined based upon the risk appetite of an organization, including the loss of
competitive advantage, compliance and liability risks, operational disruptions, reputational harm, and
financial losses. Appropriate resources to implement information risk management should be
allocated by the governing body.

4.3. Principle 3: Set the direction of investment decisions

Governance of information security should establish an information security investment strategy based on business outcomes achieved, resulting in harmonization between business and information security requirements, both in the short and long term, thereby meeting the current and evolving needs of stakeholders.

To optimize information security investments to support organizational objectives, the governing body
should ensure that information security is integrated with existing organization processes for capital
and operational expenditure, for legal and regulatory compliance, and for risk reporting.

4.4. Principle 4: Ensure conformance with internal and external requirements

Governance of information security should ensure that information security policies and practices
conform to relevant mandatory legislation and regulations, as well as committed business or
contractual requirements and other external or internal requirements.

To address conformance and compliance issues, the governing body should obtain assurance that
information security activities are satisfactorily meeting internal and external requirements by
commissioning independent security audits.

4.5. Principle 5: Foster a security-positive environment

Governance of information security should be built upon human behaviour, including the evolving
needs of all the stakeholders, since human behaviour is one of the fundamental elements to support
the appropriate level of information security. If not adequately coordinated, the objectives, roles,
responsibilities and resources may conflict with each other, resulting in the failure to meet business
objectives. Therefore, harmonization and concerted orientation between the various stakeholders is
very important.

To establish a positive information security culture, the governing body should require, promote and
support coordination of stakeholder activities to achieve a coherent direction for information security.
This will support the delivery of security education, training and awareness programs.

4.6. Principle 6: Review performance in relation to business outcomes

Governance of information security should ensure that the approach taken to protect information is
fit for purpose in supporting the organization, providing agreed levels of information security. Security
performance should be maintained at levels required to meet current and future business
requirements.

To review performance of information security from a governance perspective, the governing body
should evaluate the performance of information security related to its business impact, not just
effectiveness and efficiency of security controls. This can be done by performing mandated reviews of a performance measurement program for monitoring, audit, and improvement, thereby linking information security performance to business performance.

5. Aligning the security function to the organization business strategy, goals, mission and objectives

An organization's security program MUST always be aligned closely to the overall purpose, business strategy, objectives, goals and mission. Even in this day and age, security is still treated as an afterthought. In Pakistan, at least, security is still treated as an expense rather than a business enabler. However, for organizations that develop, provide or support security products, it is a different case. It is absolutely imperative that organizations have a robust security strategy and program which is aligned tightly to the organization's business strategy, goals, mission and objectives. In today's connected world, no organization can survive without a robust security strategy and plan in place.

Therefore, as a security practitioner you must understand the organization's functions and its strategy from operational and IT perspectives in order to create or enhance the security function. A security function that does not align with organizational goals can lead to issues and result in decisions that severely impact the organization's capability to run or expand its business. It will also inhibit productivity, create undue costs and hinder strategic intent.

6. Organizational Processes and their impact to security


Security governance is the process which defines how decisions are made within an organization. This task is accomplished in different ways depending on organizational culture, management style and a variety of other factors.

In large organizations the task is much more organized and at times can be more complicated, as there are multiple levels of decision makers who are required to be involved. In small private businesses and organizations, the decision-making process may be as simple as one or two people who, at the end of the day, make a decision based on consultation or derived from the personal experience of the decision makers.

In government or public organizations there is a chartered legislative body or a corporation which makes strategic decisions based on defined policies, procedures, boards of directors, etc.

Each organization will have its own process for making decisions, based on its defined structure, goals, the nature of the industry, and regulations.

Some companies/organizations create a governance committee, which is a formal body of personnel who recommend and/or make decisions. Governance committees are also required for most non-profit organizations. The governance committee recruits and selects board members and determines whether the board as a whole and/or individual member(s) are performing in an optimum fashion.


7. Added definitions
7.1. Acquisition

An acquisition is when an organization decides to purchase a business unit or a whole organization. If the organization decides to purchase another business unit to have as a subsidiary, the security implications are extensive. If there is a significant difference in security policies and practices between the entities, the security professionals in both groups will have to decide how best to align the two, with guidance and the final decision from senior management.

7.2. Merger

Much like an acquisition, a merger of two organizations entails aligning the security governance of the
resulting entity.

7.3. Divestiture

If an organization decides to sell off or cede control of a subsidiary, a considerable amount of effort
will have to go into determining which of the resulting entities controls proprietary property, to include
data, which may entail a great deal of effort on the part of the security personnel. In each of these
examples, external entities, such as regulators and investors, may have additional input and control in
determining the outcome. These examples are not exhaustive; many organizational decisions will have
vast security ramifications.

8. Organizational Roles and Responsibilities


An organization’s hierarchy is often determined by the goals of the organization or which industry it
operates in. This structure can have a bearing on how security governance is created and implemented,
or even how security functions are performed.

The following are a sampling of various roles pertaining to security encountered in many organizations.
This list is in no way inclusive of all types of organizational structures and is not presented as a definitive
guide to these roles; it is simply a way to demonstrate the form of some organizations and the bearing
of some roles on organizational security.

8.1. Senior Management

The upper strata of the organization, comprising those officers and executives that have the authority
to obligate the organization and to dictate policy. These can include such roles as president, vice
president, chief executive officer (CEO), chief operating officer (COO), chief information officer (CIO),
chief security officer (CSO), chief financial officer (CFO), and the like. Usually, these roles include
personnel with some direct legal or financial responsibilities according to statute or regulation. Senior
management is typically responsible for mandating policy, determining the strategic goals for the
organization, and making final determinations according to the organizational governance for both
security and non-security topics.


8.2. Security manager/security officer/security director

Often, this is the senior security person within an organization. In some cases, the organization has a
CSO (mentioned in the preceding entry of this list), in which case the security officer is a member of
senior management. When the senior security role is not a member of senior management, the
reporting hierarchy is an essential element of determining the importance and influence security has
within the organization. For instance, an organization wherein the security manager reports directly
to the CEO places a great deal of importance on security; an organization that has the security manager
reporting to an administrative director, who in turn reports to a vice president, who reports to senior
management, obviously does not. The security manager is typically responsible for advising senior
management on security matters, may assist in drafting security policy, manages day-to-day security
operations, represents the organization’s security needs in groups and meetings such as the
Configuration Management Board and similar committees, contracts for and selects security products
and solutions, and may manage the organization’s response to incidents and disasters.

Note: According to industry best practices, the security manager should not report to the same
role/department that is in charge of information technology (IT) because the functions are somewhat
adversarial (the security team will be reporting on/reviewing the operations and productivity of the IT
team). Having the same department responsible for both functions would constitute a form of conflict
of interest. The exception to this is when both the security office and the IT department report to the
chief information officer (CIO); this is usually an acceptable form of hierarchy.

8.3. Security personnel

The security practitioners within the organization. These can include administrators, analysts, incident
responders, and so forth. This group may also include personnel from disciplines other than IT security,
such as physical security and personnel security. Security personnel are tasked with performing the
security processes and activities within the organization. Security personnel usually report to the
security manager/director/officer.

8.4. Administrators/technicians

IT personnel who regularly perform work within the environment may have security duties as well.
These can include secure configuration of systems, applying secure networking, reporting potential
incidents, and so forth. Positions in this category include but are not limited to: system administrators
(often Tech Support and Help Desk personnel) and network administrators/engineers. This group
typically reports to the IT director or CIO.

8.5. Users

Employees, contractors, and other personnel who operate within the IT environment on a regular
basis. While this role does not have specific security duties per se, users are required to operate the
systems in a secure fashion, and they are usually required to sign a formal agreement to comply with security guidance. Users may also be co-opted and trained to report potential security incidents, acting
as a rudimentary form of intrusion detection. Users typically report to their functional managers.

9. Security Control Frameworks


In formalizing its security governance, an organization might implement a security control framework;
this is a notional construct outlining the organization’s approach to security, including a list of specific
security processes, procedures, and solutions used by the organization. The framework is often used
by the organization to describe its security efforts, for both internal tracking purposes and for
demonstration to external entities such as regulators and auditors. There are a variety of security
frameworks currently popular in the industry, each offering benefits and capabilities, usually designed
for a certain industry, type of organization, or approach to security. The following list of framework
examples is by no means exhaustive or intended to be exclusive; the security practitioner should have
a working familiarity with the frameworks on this list, as well as whatever framework is used by their
own organization (if any). Some of these frameworks will be discussed in more detail later in the
course.

9.1. ISO 27001/27002

The International Organization for Standardization (ISO) is recognized globally, and it is probably the most pervasive and widely used source of security standards outside the United States (American organizations often use standards from other sources).

ISO 27001 specifies the requirements for an information security management system (ISMS) and provides a comprehensive, holistic view of security governance within an organization, mostly focused on policy. ISO 27002 is a
comprehensive list of security controls that can be applied to an organization; the organization uses
ISO 27002 to select the controls appropriate to its own ISMS, which the organization designs according
to ISO 27001. ISO standards are notably thorough, well-recognized in the industry, and expensive
relative to other standards. Use of ISO standards can allow an organization to seek and acquire specific
standards-based certification from authorized auditors.

9.2. COBIT

Created and maintained by ISACA, the COBIT framework (currently COBIT 5) is designed as a way to
manage and document enterprise IT and IT security functions for an organization. COBIT widely uses a
governance and process perspective for resource management and is intended to address IT
performance, security operations, risk management, and regulatory compliance.

9.3. ITIL

An IT service delivery set of best practices managed by Axelos, a joint venture between the British
government and a private firm. ITIL (formerly the Information Technology Infrastructure Library, now
simply the proper name of the framework) concentrates on how an organization’s IT environment
should enhance and benefit its business goals. ITIL is also mapped to the ISO 20000 standard, perhaps
the only non-ISO standard to have this distinction. This framework also offers the possibility for
certification, for organizations that find certification useful.


9.4. RMF

NIST, the U.S. National Institute of Standards and Technology, publishes two methods that work in
concert (similar to how ISO 27001 and 27002 function); the Risk Management Framework (RMF), and
the applicable list of security and privacy controls that goes along with it (respectively, these
documents are Special Publications (SPs) 800-37 and 800-53). While the NIST SP series is only required
to be followed by federal agencies in the United States, it can easily be applied to any kind of
organization as the methods and concepts are universal. Also, like all American government
documents, it is in the public domain; private organizations do not have to pay to adopt and use this
framework. However, there is no private certification for the NIST framework.

9.5. CSA STAR

The Cloud Security Alliance (CSA) is a volunteer organization with participant members from both
public and private sectors, concentrating—as the name suggests—on security aspects of cloud
computing. The CSA publishes standards and tools for industry and practitioners, at no charge. The
CSA also hosts the Security, Trust, and Assurance Registry (STAR), which is a voluntary list of all cloud
service providers who comply with the STAR program framework and agree to publish documentation
on the STAR website attesting to compliance. Customers and potential customers can review and
consider cloud vendors at no cost by accessing the STAR website. The STAR framework is a composite
of various standards, regulations, and statutory requirements from around the world, covering a
variety of subjects related to IT and data security; entities that choose to subscribe to the STAR
program are required to complete and publish a questionnaire (the Consensus Assessments Initiative
Questionnaire (CAIQ), colloquially pronounced “cake”) published by CSA. The STAR program has three
tiers, 1–3, in ascending order of complexity. Tier 1 only requires the vendor self-assessment, using the
CAIQ. Tier 2 is an assessment of the organization by an external auditor certified by CSA to perform
CAIQ audits. Tier 3 is in draft form as of the time of publication of this CBK; it will require continuous
monitoring of the target organization by independent, certified entities.

9.6. Due Care/Due Diligence

Due care is a legal concept pertaining to the duty owed by a provider to a customer. In essence, a
vendor has to engage in a reasonable manner so as not to endanger the customer: the vendor’s
products/services should deliver what the customer expects, without putting the customer at risk of
undue harm. An example to clarify the concept: if a customer buys a car from the vendor, the vendor
should have designed and constructed the car in a way so that the car can be operated in a normal,
expected manner without some defect harming the customer. If the user is driving the car normally
on a road and a wheel falls off, the vendor may be culpable for any resulting injuries or damage if the
loss of the wheel is found to be the result of insufficient care on the part of the vendor (if, say, the
wheel mount was poorly designed, or the bolts holding the wheel were made from a material of
insufficient strength, or the workers assembling the car did so in a careless or negligent way). This duty
is only required for reasonable situations; if, for instance, the customer purposefully drove the car into
a body of water, the vendor does not owe the customer any assurance that the car would protect the
customer, or even that the car would function properly in that circumstance.


NOTE: There is a joke regarding the standard of reasonableness that lawyers use: "Who is a reasonable person? The court. The court is a reasonable person." Meaning that the "standard" is actually quite ambiguous and arbitrary: the outcome of a case hinging on a determination of "reasonable" action is wholly dependent on a specific judge on a specific day, and judges are only people with opinions.

Due diligence, then, is any activity used to demonstrate or provide due care. Using the previous example, the car vendor might engage in due diligence activities such as quality control testing (sampling cars that come off the production line for construction/assembly defects), subjecting itself to external safety audits, prototype and regular safety testing of its vehicles to include crash testing, using only licensed and trained engineers to design its products, and so forth. All of these actions, and documentation of these actions, can be used to demonstrate that the vendor provided due care by performing due diligence. In the IT and IT security arena, due diligence can also take the form of reviewing vendors and suppliers for adequate provision of security measures; for instance, before an organization uses an offsite storage vendor, the organization should review the vendor's security governance, and perhaps even perform a security audit of the vendor to ensure that the security provided by the vendor is at least equivalent to the security the organization itself provides to its own customers. Another form of due diligence for security purposes could be a proper review of personnel before granting them access to the organization's data, or even before hiring; this might include background checks and personnel assurance activities. (Personnel security measures, which provide a measure of due diligence, will be discussed in more detail later in this domain.)

NOTE: In recent years, regulators and courts (both of which are often tasked with determining
sufficient provision of due care) have found certain activities to be insufficient for the purpose of
ensuring due diligence, even though those activities were previously sufficient. Specifically, publishing
a policy is an insufficient form of due diligence; to meet the legal duty, an organization must also have
a documented monitoring and enforcement capability in place and active to ensure the organization
is adhering to the policy.

10. Information technology — Security techniques — Information security risk management

10.1. Scope of ISO 27005 which is used for Security Risk Management

This International Standard provides guidelines for information security risk management. This
International Standard supports the general concepts specified in ISO/IEC 27001 and is designed to
assist the satisfactory implementation of information security based on a risk management approach.

Knowledge of the concepts, models, processes and terminologies described in ISO/IEC 27001 and
ISO/IEC 27002 is important for a complete understanding of this International Standard. This
International Standard is applicable to all types of organizations (e.g. commercial enterprises,
government agencies, non-profit organizations) which intend to manage risks that could compromise
the organization’s information security.


11. Terms and definitions for Security Risk Management


11.1. What does consequence mean?

 Outcome of an event
 An event can lead to a range of consequences.
 A consequence can be certain or uncertain and in the context of information security is usually negative.
 Consequences can be expressed qualitatively or quantitatively.
 Initial consequences can escalate through knock-on effects.

11.2. What does control mean?

 Measure that is modifying risk


 Controls for information security include any process, policy, procedure, guideline, practice or organizational
structure, which can be administrative, technical, management, or legal in nature which modify information
security risk.
 Controls may not always exert the intended or assumed modifying effect.
 Control is also used as a synonym for safeguard or countermeasure.

11.3. What does event mean?

 Occurrence or change of a particular set of circumstances


 An event can be one or more occurrences, and can have several causes.
 An event can consist of something not happening.
 An event can sometimes be referred to as an “incident” or “accident”.

11.4. What does external context mean?

 External environment in which the organization seeks to achieve its objectives


External context can include:
 the cultural, social, political, legal, regulatory, financial, technological, economic, natural and competitive
environment, whether international, national, regional or local;
 Key drivers and trends having impact on the objectives of the organization; and
 Relationships with, and perceptions and values of, external stakeholders.

11.5. What does internal context mean?

 Internal environment in which the organization seeks to achieve its objectives

Internal context can include:


 governance, organizational structure, roles and accountabilities;
 policies, objectives, and the strategies that are in place to achieve them;
 the capabilities, understood in terms of resources and knowledge (e.g. capital, time, people, processes, systems
and technologies);
 information systems, information flows and decision-making processes (both formal and informal);
 relationships with, and perceptions and values of, internal stakeholders;
 the organization's culture;
 standards, guidelines and models adopted by the organization; and
 form and extent of contractual relationships.

11.6. What does level of risk mean?

 Magnitude of a risk expressed in terms of the combination of consequences and their likelihood


11.7. Likelihood

 Chance of something happening


 In risk management terminology, the word “likelihood” is used to refer to the chance of something happening,
whether defined, measured or determined objectively or subjectively, qualitatively or quantitatively, and described
using general terms or mathematically (such as a probability or a frequency over a given time period).
 The English term “likelihood” does not have a direct equivalent in some languages; instead, the equivalent of the
term “probability” is often used. However, in English, “probability” is often narrowly interpreted as a mathematical
term. Therefore, in risk management terminology, “likelihood” is used with the intent that it should have the same
broad interpretation as the term “probability” has in many languages other than English.

11.8. Residual risk

 risk remaining after risk treatment


 Residual risk can contain unidentified risk.
 Residual risk can also be known as “retained risk”.

11.9. Risk

 Effect of uncertainty on objectives


 An effect is a deviation from the expected — positive and/or negative.
 Objectives can have different aspects (such as financial, health and safety, information security, and environmental
goals) and can apply at different levels (such as strategic, organization-wide, project, product and process).
 Risk is often characterized by reference to potential events and consequences, or a combination of these.
 Information security risk is often expressed in terms of a combination of the consequences of an information
security event and the associated likelihood of occurrence.
 Uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of, an
event, its consequence, or likelihood.
 Information security risk is associated with the potential that threats will exploit vulnerabilities of an information
asset or group of information assets and thereby cause harm to an organization.
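
To make the idea of risk as a combination of consequence and likelihood (see 11.6 and 11.9) more concrete, here is a minimal Python sketch; the five-point scales and the banding thresholds are assumptions chosen purely for illustration and are not part of ISO 27005.

# Illustrative only: deriving a qualitative level of risk from likelihood and
# consequence ratings. The 1-5 scales and band thresholds are assumed values.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def level_of_risk(likelihood, consequence):
    """Combine likelihood and consequence into a qualitative risk level."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 15:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# Example: a threat that is likely to exploit a vulnerability with major consequences
print(level_of_risk("likely", "major"))  # -> High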

11.10. Risk analysis

 Process to comprehend the nature of risk and to determine the level of risk
 Risk analysis provides the basis for risk evaluation and decisions about risk treatment.
 Risk analysis includes risk estimation.

11.11. Risk assessment

 Overall process of risk identification, risk analysis and risk evaluation

11.12. Risk communication and consultation

 Continual and iterative processes that an organization conducts to provide, share or obtain information, and to
engage in dialogue with stakeholders regarding the management of risk
 The information can relate to the existence, nature, form, likelihood, significance, evaluation, acceptability and
treatment of risk.
 Consultation is a two-way process of informed communication between an organization and its stakeholders on an
issue prior to making a decision or determining a direction on that issue. Consultation is:
o a process which impacts on a decision through influence rather than power; and
o an input to decision making, not joint decision making.

11.13. Risk criteria

 terms of reference against which the significance of a risk is evaluated


 Risk criteria are based on organizational objectives, and external and internal context.

 Risk criteria can be derived from standards, laws, policies and other requirements.

11.14. Risk evaluation

 Process of comparing the results of risk analysis with risk criteria to determine whether the risk and/or its
magnitude is acceptable or tolerable
 Risk evaluation assists in the decision about risk treatment.

11.15. Risk identification

 Process of finding, recognizing and describing risks


 Risk identification involves the identification of risk sources, events, their causes and their potential consequences.
 Risk identification can involve historical data, theoretical analysis, informed and expert opinions, and stakeholders’
needs.

11.16. Risk management

 Coordinated activities to direct and control an organization with regard to risk


 This International Standard uses the term ‘process’ to describe risk management overall. The elements within the
risk management process are termed ‘activities’

11.17. Risk treatment

Process to modify risk

Risk treatment can involve:


 avoiding the risk by deciding not to start or continue with the activity that gives rise to the risk;
 taking or increasing risk in order to pursue an opportunity;
 removing the risk source;
 changing the likelihood;
 changing the consequences;
 sharing the risk with another party or parties (including contracts and risk financing); and
 retaining the risk by informed choice.

 Risk treatments that deal with negative consequences are sometimes referred to as “risk mitigation”, “risk
elimination”, “risk prevention” and “risk reduction”.
 Risk treatment can create new risks or modify existing risks.

11.18. Stakeholder

 Person or organization that can affect, be affected by, or perceive themselves to be affected by a
decision or activity
 A decision maker can be a stakeholder.

12. Risk Management Approach as per ISO 27005 Standard


This is the main standard that all Nafi members must read 10 times to ensure that they can draw the
figure below from memory inshAllah.

A systematic approach to information security risk management is necessary to identify organizational needs regarding information security requirements and to create an effective information security management system (ISMS). We will learn ISMS implementation in our ISO 27001 Al Nafi course. This approach should be suitable for the organization's environment, and in particular should be aligned
with overall enterprise risk management. Security efforts should address risks in an effective and
timely manner where and when they are needed. Information security risk management should be an
integral part of all information security management activities and should be applied both to the
implementation and the ongoing operation of an ISMS.

Information security risk management should be a continual process. The process should establish the
external and internal context, assess the risks and treat the risks using a risk treatment plan to
implement the recommendations and decisions. Risk management analyses what can happen and
what the possible consequences can be, before deciding what should be done and when, to reduce
the risk to an acceptable level.

Information security risk management should contribute to the following:

 Risks being identified


 Risks being assessed in terms of their consequences to the business and the likelihood of their
occurrence
 The likelihood and consequences of these risks being communicated and understood
 Priority order for risk treatment being established
 Priority for actions to reduce risks occurring
 Stakeholders being involved when risk management decisions are made and kept informed of the risk
management status
 Effectiveness of risk treatment monitoring
 Risks and the risk management process being monitored and reviewed regularly
 Information being captured to improve the risk management approach
 Managers and staff being educated about the risks and the actions taken to mitigate them

The information security risk management process can be applied to the organization as a whole, any discrete part of the organization (e.g. a department, a physical location, a service), any information system, existing or planned, or particular aspects of control (e.g. business continuity planning).


Figure 1

Figure 2 below shows in detail how the risk management process works. The information security risk management process consists of context establishment, risk assessment, risk treatment, risk acceptance, risk communication and consultation, and risk monitoring and review.


Figure 2 — Illustration of an information security risk management process

As Figure 2 illustrates, the information security risk management process can be iterative for risk assessment
and/or risk treatment activities. An iterative approach to conducting risk assessment can increase depth and
detail of the assessment at each iteration. The iterative approach provides a good balance between minimizing
the time and effort spent in identifying controls, while still ensuring that high risks are appropriately assessed.

The context is established first. Then a risk assessment is conducted. If this provides sufficient information to
effectively determine the actions required to modify the risks to an acceptable level then the task is complete
and the risk treatment follows. If the information is insufficient, another iteration of the risk assessment with
revised context (e.g. risk evaluation criteria, risk acceptance criteria or impact criteria) will be conducted, possibly
on limited parts of the total scope. The effectiveness of the risk treatment depends on the results of the risk
assessment.

Note that risk treatment involves a cyclical process of:


 assessing a risk treatment;
 deciding whether residual risk levels are acceptable;
 generating a new risk treatment if risk levels are not acceptable; and
 assessing the effectiveness of that treatment


It is possible that the risk treatment will not immediately lead to an acceptable level of residual risk. In this situation, another iteration of the risk assessment with changed context parameters (e.g. risk evaluation, risk acceptance or impact criteria), if necessary, may be required, followed by further risk treatment (see Figure 2, Risk Decision Point 2, above).

The risk acceptance activity has to ensure residual risks are explicitly accepted by the managers of the
organization. This is especially important in a situation where the implementation of controls is omitted or
postponed, e.g. due to cost. During the whole information security risk management process it is important that
risks and their treatment are communicated to the appropriate managers and operational staff. Even before the
treatment of the risks, information about identified risks can be very valuable to manage incidents and may help
to reduce potential damage. Awareness by managers and staff of the risks, the nature of the controls in place to
mitigate the risks and the areas of concern to the organization assist in dealing with incidents and unexpected
events in the most effective manner. The detailed results of every activity of the information security risk
management process and from the two risk decision points should be documented.
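
The iterative treat-and-reassess flow described above, ending at Risk Decision Point 2 (acceptance), can be sketched as a simple loop. The following Python sketch is a conceptual illustration only; the risk names, scores, acceptance level and "treatment" logic are invented placeholders, not ISO 27005 content.

# Conceptual sketch of the iterative process in Figure 2. All values below are
# invented placeholders standing in for real organizational activities.

def assess_risks(risks):
    """Stand-in risk assessment: turn raw inputs into a scored list."""
    return [{"name": name, "level": score} for name, score in risks.items()]

def treat_risks(assessment, reduction):
    """Stand-in risk treatment: reduce each risk level by a fixed amount."""
    return {r["name"]: max(r["level"] - reduction, 0) for r in assessment}

def risk_management_cycle(risks, acceptance_level=3, reduction=2, max_iterations=5):
    for iteration in range(1, max_iterations + 1):
        assessment = assess_risks(risks)                 # risk assessment
        residual = treat_risks(assessment, reduction)    # risk treatment
        # Risk Decision Point 2: are residual risks acceptable to management?
        if all(level <= acceptance_level for level in residual.values()):
            print(f"Iteration {iteration}: residual risks accepted -> {residual}")
            return residual                              # explicit risk acceptance
        print(f"Iteration {iteration}: treatment not yet satisfactory -> {residual}")
        risks = residual  # iterate with revised inputs (e.g. changed criteria)
    raise RuntimeError("Residual risk still unacceptable; escalate to management")

risk_management_cycle({"laptop theft": 6, "phishing": 8, "flood": 2})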

ISO/IEC 27001 specifies that the controls implemented within the scope, boundaries and context of the ISMS
need to be risk based. The application of an information security risk management process can satisfy this
requirement. There are many approaches by which the process can be successfully implemented in an
organization. The organization should use whatever approach best suits their circumstances for each specific
application of the process.

In an ISMS, establishing the context, risk assessment, developing risk treatment plan and risk acceptance are all
part of the “plan” phase. In the “do” phase of the ISMS, the actions and controls required to reduce the risk to
an acceptable level are implemented according to the risk treatment plan. In the “check” phase of the ISMS,
managers will determine the need for revisions of the risk assessment and risk treatment in the light of incidents
and changes in circumstances. In the “act” phase, any actions required, including additional application of the
information security risk management process, are performed.

The following table summarizes the information security risk management activities relevant to the four phases
of the ISMS process:

Figure 3 Alignment of ISMS and Information Security Risk Management Process
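Because the table itself is not reproduced in these notes, the following small Python structure (an illustrative summary only, based on the preceding paragraph rather than on the original table) restates the same alignment of ISMS phases and risk management activities.

ISMS_ALIGNMENT = {
    "Plan":  ["Establish the context", "Perform the risk assessment",
              "Develop the risk treatment plan", "Risk acceptance"],
    "Do":    ["Implement the risk treatment plan and the controls needed to "
              "reduce risk to an acceptable level"],
    "Check": ["Determine the need for revisions of the risk assessment and "
              "risk treatment in the light of incidents and changed circumstances"],
    "Act":   ["Perform any actions required, including additional application "
              "of the information security risk management process"],
}

for phase, activities in ISMS_ALIGNMENT.items():
    print(phase + ": " + "; ".join(activities))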


13.Security Controls
Security controls are methods, tools, mechanisms, and processes used in risk mitigation. Security controls can
function in two general ways: as safeguards, which reduce risk impact/likelihood before the realization of the
risk has occurred, and countermeasures, which reduce the impact/likelihood afterwards. For example, a wall
could be a safeguard, preventing hostile people from entering the facility, while a motion sensor could be
considered a countermeasure as it sends an alert when someone has entered the area in an unauthorized
fashion. Security controls should be chosen according to a cost/benefit analysis, comparing the expense of
acquiring, deploying, and maintaining the control against the control’s ability to reduce the impact/likelihood of
a specific risk (or set of risks). It is also crucial to weigh the operational impact that will be caused by the control
itself against the benefit of continuing that business function with the risk reduction offered by that control.
As Dr. Eugene “Spaf” Spafford of Purdue University once put it: “The only truly secure system is one that is
powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards—and even then, I
have my doubts.” http://spaf.cerias.purdue.edu/quotes.html

14.Traditional Model
One traditional method for selecting the appropriate security controls has been the use of the “loss expectancy”
model: annual loss expectancy (ALE) = single loss expectancy (SLE) x annual rate of occurrence (ARO)

In detail, it works like this: The SLE is the expected negative impact related to a particular risk (the risk being
assessed). Most often, this is expressed monetarily. It is calculated by determining the value of the asset that
might be affected (or lost) and multiplying it by an “exposure factor”—a percentage that represents the amount
of damage resulting from that type of loss.

So: SLE = asset value (AV) x exposure factor (EF). The ARO is the number of times per year a given impact is
expected, expressed as a number. The ALE is the SLE multiplied by the ARO, which gives us the estimated annual
cost related to a particular risk. The value of the ALE to the organization is that it allows the organization
to determine whether the cost of a particular kind of control for a specific risk is worth the investment.
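To make the formulas concrete, the short Python example below works through a hypothetical case (the asset value, exposure factor, occurrence rate, control cost, and assumed effect of the control are all invented for illustration): it computes the SLE and ALE, then compares the reduction in ALE offered by a proposed control against that control's annual cost.

# Hypothetical figures for illustration only.
asset_value = 250_000.0            # AV: value of the asset at risk
exposure_factor = 0.4              # EF: fraction of the asset value lost per incident
annual_rate_of_occurrence = 0.5    # ARO: expected incidents per year (one every two years)

sle = asset_value * exposure_factor        # SLE = AV x EF
ale = sle * annual_rate_of_occurrence      # ALE = SLE x ARO

# Suppose a proposed control costs 20,000 per year and is expected to halve the ARO.
control_annual_cost = 20_000.0
ale_with_control = sle * (annual_rate_of_occurrence / 2)

net_annual_benefit = (ale - ale_with_control) - control_annual_cost
print(f"SLE = {sle:,.0f}, ALE = {ale:,.0f}, ALE with control = {ale_with_control:,.0f}")
print(f"Net annual benefit of the control = {net_annual_benefit:,.0f}")
# A positive net benefit suggests the control is worth the investment for this risk.

Here the control reduces the ALE by 25,000 per year at a cost of 20,000, so the cost/benefit analysis favours acquiring it.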

15.Applicable Types of Controls


Security controls can be arranged according to many criteria. One way to consider controls is by the way the
controls are implemented.

15.1. Technical/logical controls:

Controls implemented with or by automated or electronic systems. Examples include firewalls, electronic badge
readers, access control lists, and so on. Many IT systems include some kind of technical control capacity or
functionality; for instance, routers can be set to reject traffic that may be indicative of possible attacks.

15.2. Physical controls:

Controls implemented through a tangible mechanism. Examples include walls, fences, guards, locks, and so forth.
In modern organizations, many physical control systems are linked to technical/ logical systems, such as badge
readers connected to door locks.

15.3. Administrative controls:

Controls implemented through policy and procedure. Examples include access control processes and requiring
multiple personnel to conduct a specific operation. Administrative controls in modern environments are often
enforced in conjunction with physical and/or technical controls, such as an access-granting policy for new users
that requires login and approval by the hiring manager.


16.Security Control Categories


Another way to group security controls is by how they take effect. In the security industry, controls are typically
arranged into these categories:

16.1. Directive:

Controls that impose mandates or requirements. These can include policies, standards, signage, or notification,
and are often combined with training.

16.2. Deterrent:

Controls that reduce the likelihood someone will choose to perform a certain activity. These can include
notification, signage, cameras, and the noticeable presence of other controls.

16.3. Preventative:

Controls that prohibit a certain activity. These can include walls and fences; they prohibit people from entering
an area in an unauthorized manner.

16.4. Compensating:

Controls that mitigate the effects or risks of the loss of primary controls. Examples include physical locks that still
function if an electronic access control system loses power, or personnel trained to use fire extinguishers/hoses
in the event a sprinkler system does not activate.

16.5. Detective:

Controls that recognize hostile or anomalous activity. These can include motion sensors, guards, dogs, and
intrusion detection systems.

16.6. Corrective:

Controls that react to a situation in order to perform remediation or restoration. Examples include fire
suppression systems, intrusion prevention systems, and incident response teams.

16.7. Recovery:

Controls designed to restore operations to a known good condition following a security incident. These can
include backups and disaster recovery plans.

This form of categorization is not absolute or distinct; many controls can fall into several categories, depending
on their implementation and operation. For instance, surveillance cameras can be controls that are deterrent (just
the presence of cameras discourages someone from entering a surveyed area, for fear of being observed),
detective (when combined with live monitoring by guards or a motion-sensing capability), and compensating
(when providing additional detection capability that augments gate guards or other controls). Controls of the
various types (administrative, technical, and physical) can be used in each of the categories.
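Because a single control can legitimately appear in several categories, it can be useful to record both dimensions explicitly when building a control inventory. The Python sketch below is a hypothetical illustration (the control names and their mappings are examples, not a prescribed list) that groups controls by implementation type and flags categories with no coverage.

# Hypothetical control inventory: each control records its implementation type
# (administrative, technical, physical) and the categories it serves as deployed.
controls = [
    {"name": "Surveillance cameras",   "type": "physical",
     "categories": ["deterrent", "detective", "compensating"]},
    {"name": "Firewall",               "type": "technical",
     "categories": ["preventative"]},
    {"name": "Access-granting policy", "type": "administrative",
     "categories": ["directive"]},
    {"name": "Backups",                "type": "technical",
     "categories": ["recovery"]},
]

def controls_in_category(category):
    # Simple lookup used, for example, to spot categories with no coverage.
    return [c["name"] for c in controls if category in c["categories"]]

for category in ["directive", "deterrent", "preventative", "compensating",
                 "detective", "corrective", "recovery"]:
    names = controls_in_category(category) or ["<no coverage: review defense in depth>"]
    print(f"{category:13}: {', '.join(names)}")

In this toy inventory the corrective category has no coverage, which is exactly the kind of gap the defense-in-depth discussion below is meant to prevent.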

When selecting and implementing security controls, it is always preferable to use multiple types and implement
them among the various categories than to rely on one type or category; this is called defense in depth (also
known as layered defense), where controls of various types and kinds overlap each other in coverage. There are
two reasons to implement defense in depth:


1. Relying on a single control type or category increases the possibility that a single control failure could
lead to enhanced risk. For instance, if the organization were to rely solely on technical controls and
power was interrupted, those controls would not function properly. Moreover, a new vulnerability
might be discovered in a specific control; if that was the sole control your organization relied on, your
organization would become completely exposed.

2. Using multiple types and categories of controls forces the aggressor to prepare multiple means of attack
instead of just one. By making the task of the attacker more complicated, we reduce the number of
possible attackers (many people know one thing well, but few people know many things well). For
instance, combining strong technical and physical controls could require the aggressor to have both
hacking and physical intrusion toolkits, which increases the price of the attack for the attacker, thereby
reducing the number of potential attackers.

17.Monitoring and Measurement


Implementation of security controls is not the final action necessary for risk mitigation; the security
professional must monitor the function and operation of security controls for the organization to determine
whether they are performing correctly and whether they continue to provide the risk coverage intended. Often
referred to as a security control assessment (SCA), a plan and process for determining the proper function and
management of controls is necessary and should be customized to the needs of the organization.

This is very similar to an audit, with a specific focus on security controls, and it includes the performance of
those controls. The security team is often tasked with assembling SCA data and presenting a report to senior
management, detailing which controls are not performing as expected and which risks are not being addressed
by the current control set. This information might be gathered by the security team itself through the use of
automated monitoring tools, or it might be delivered by internal sources (such as the IT department) as part of
a self-reporting mechanism, or from external sources (such as a third-party security monitoring vendor).

The security practitioner must collect all relevant data and distill it into a form that is understandable and
useful to management. This security control monitoring effort should not be a singular event or even a
recurring task; the industry standard for security control maintenance and improvement is a continual,
ongoing, enduring activity. Threats continue to evolve, the organization’s IT environment is continually being
updated and modified, and security tools continue to improve; these situations require constant action on the
part of security practitioners. Other control assessment techniques include vulnerability assessments and
penetration tests:

1. Vulnerability assessment: Often performed with automated tools, the vulnerability assessment
reviews the organization’s IT environment for known vulnerabilities, cataloging and often sending
alerts for any detections. NOTE: vulnerability assessments are often limited in the respect that they
only detect known vulnerabilities; relying wholly on vulnerability assessments to determine the
organization’s risk profile is inadequate, because there may exist vulnerabilities that have not yet been
discovered and are not in the signature database of the assessment tool.

2. Penetration test: A trusted party (internal or external to the organization) tries to gain access to the
organization’s protected environment to simulate an external attack and test the organization’s
security defenses. There are many ways to structure a penetration test, including requiring that the
adversarial parties (the organization’s security team and the penetration testers) have no knowledge
beyond what an attacker would have: the security team is not given forewarning that the test is taking
place, and the testers are not given details about the organization’s environment or security. Ethical
penetration testing requires that any test not create a risk to health and human safety or destroy
property. It is essential to properly coordinate any penetration test before the engagement to
stipulate any limitations on the scope or nature of the test.


Risk management frameworks, including ISO 27001, ISO 31000, ISACA Risk IT, and the NIST publications, will be
covered in a separate implementation-focused course. Please wait for that course to come online during our 2019
classes soon inshAllah.

18.Understand and Apply Threat Modeling Concepts and Methodologies


As explained in the presentations, a threat is something that might cause harm if it is realized. To anticipate
and counter human (anthropogenic) threats, the security industry uses a technique called threat modeling, which
entails looking at an environment, system, or application from an attacker’s viewpoint and trying to determine
vulnerabilities the attacker would exploit. The end state of this process is addressing each of the vulnerabilities
discovered during threat modeling to ensure an actual attacker cannot use them.

In many threat modeling techniques, a high-level, nontechnical abstraction of the target (whether it is an
organization or an IT system/application) is necessary before reviewing the details of the target itself.
Workflow diagrams (also referred to as dataflow diagrams or flowcharts) are frequently used for this purpose;
the threat modeling team creates a conceptual view of how the target actually functions—how data and
processes operate in the target from start to finish. This allows the threat modeling team to understand where
an attacker might affect the target, by understanding potential locations (in time, space, and the process) of
vulnerabilities.

In some threat models used for specific targets (systems/applications, instead of the overall organization),
another element is used (usually in addition to, not in lieu of, the abstraction): incorporating those same threat
modeling techniques into the detailed specifics of the target. With this technique, designers can identify and
troubleshoot potential vulnerabilities during the development and acquisition of the target instead of waiting
until the target reaches the production environment.

This practice (securing a system/application during development) is less expensive and less time-consuming than
addressing issues after the item has entered production. The candidate should certainly be familiar with one
particular threat modeling tool: STRIDE. STRIDE, created by Microsoft, is actually a threat classification system
used to inform software developers during the development process. These are the elements of STRIDE:

Spoofing identity: the type of threat wherein an attacker poses as an entity other than the attacker, often as an
authorized user.

Tampering with data: when the attacker attempts to modify the target data in an unauthorized way.

Repudiation: when the attacker, as a participant of a transaction, can deny (or conceal) the attacker’s
participation in that transaction.

Information disclosure: just like it sounds, this category can include both inadvertent release of data (where an
authorized user discloses protected data accidentally to unauthorized users, or gains access to material that
their authorization should not allow) and malicious access to data (an attacker getting unauthorized access).

Denial of service (DoS): an attack on the availability aspect of the CIA triad; creating a situation in the target
where authorized users cannot get access to the system/ application/data.

Elevation of privilege: when an attacker not only gains access to the target but can attain a level of control with
which to completely disable/destroy the entire target system.
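As a study aid, the STRIDE categories can be enumerated against each element of a workflow/dataflow diagram to produce a starting list of questions for the threat modeling team. The Python sketch below is illustrative only; the dataflow elements and the question wording are hypothetical and are not part of STRIDE itself.

# The six STRIDE categories, each paired with an illustrative question.
STRIDE = {
    "Spoofing identity":      "Could an attacker pose as another entity, such as an authorized user?",
    "Tampering with data":    "Could the data be modified in an unauthorized way?",
    "Repudiation":            "Could a participant deny or conceal their part in a transaction?",
    "Information disclosure": "Could data be released, accidentally or maliciously, to the wrong party?",
    "Denial of service":      "Could authorized users be prevented from reaching the system or data?",
    "Elevation of privilege": "Could an attacker gain a level of control beyond what was granted?",
}

# Hypothetical elements taken from a workflow diagram of the target.
dataflow_elements = ["login form", "order database", "payment API call"]

for element in dataflow_elements:
    for threat, question in STRIDE.items():
        print(f"[{element}] {threat}: {question}")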

19.Minimum Security Requirements


To provide appropriate levels of security, a fundamental understanding of the desired outcomes is necessary.
Security professionals achieve this by gathering a set of minimum security requirements to use as a goal. This
minimum set of requirements should be created for every level of granularity in an operation: the organization as
a whole (where the minimum security requirements become the level of acceptable risk), the overall IT
environment, each network that is included in the environment, each system in each network, and even each
component. Moreover, this practice (gathering minimum security requirements) should not be limited only to
IT and data activity, but it should also be included in project management and process functions.

Some hints for effectively gathering minimum security requirements:


 Involve stakeholders in the development/acquisition/ planning process as soon as possible (close to
the start of the endeavor).
 Ensure that requirements are specific, realistic, and measurable (see the sketch after this list).
 Record and document all elements of the discussion and outcome.
 When soliciting input from the customer, restate your understanding of their requests back to them to
confirm what they intended to say and what you comprehend.
 Don’t choose tools or solutions until the requirements are understood; too often in our field, we
already have a preferred technology in mind when starting a project, when we should instead only
select a specific product once we fully comprehend the objectives. Otherwise, we tend to allow the
technology to drive business functions, instead of the other way around.
 If possible, create diagrams, models, and prototypes to solidify mutual understanding of the
requirements before commencing full-scale development and production.
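One way to keep requirements specific, realistic, and measurable, as recommended above, is to record each requirement together with an explicit metric and a numeric target. The sketch below is a hypothetical Python illustration; the requirement identifiers, wording, metrics, and thresholds are invented for the example.

# Hypothetical minimum security requirements, each recorded with a measurable target.
requirements = [
    {"id": "MSR-01", "statement": "Privileged accounts use multi-factor authentication",
     "metric": "percentage of privileged accounts with MFA enabled", "target": 100},
    {"id": "MSR-02", "statement": "Critical security patches are applied promptly",
     "metric": "days from patch release to deployment", "target": 14},
    {"id": "MSR-03", "statement": "Systems are reasonably secure",
     "metric": "", "target": None},   # vague: no metric or target recorded
]

def is_measurable(requirement):
    # A requirement is treated as measurable only if it names a metric and a numeric target.
    return bool(requirement.get("metric")) and isinstance(requirement.get("target"), (int, float))

for requirement in requirements:
    status = "measurable" if is_measurable(requirement) else "needs a metric and target"
    print(f"{requirement['id']}: {requirement['statement']} -> {status}")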

20.Service Level Requirements


When an organization uses an external provider for managed services (for example, a cloud service, or a
contractor that maintains the organization’s data center), the parties must establish a mutual understanding of
exactly what will be provided, under which terms, and at what times. This should include a detailed description
of both performance and security functions. As with other projects, the organization has to establish a set of
minimum requirements for this effort to be successful; in this type of case, however, the organization is not
usually able to dictate requirements unilaterally and must instead cooperate with the provider. Together, the
parties will construct a business contract explicitly stating the terms of the arrangement. One part of this
contract should be the service level agreement (SLA), which defines the minimum requirements and codifies
their provision. Every element of the SLA should include a discrete, objective, numeric metric with which to
judge success or failure; otherwise, the SLA implementation will not be fair or reasonable for either party.
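To illustrate the point about discrete, objective, numeric metrics, the short Python sketch below checks hypothetical measured values against equally hypothetical SLA targets; the metric names and thresholds are examples only and are not taken from any real contract.

# Hypothetical SLA targets and one month of measured values.
sla_targets = {
    "monthly_uptime_percent":    99.9,   # at least
    "incident_response_minutes": 30,     # at most
    "patch_deployment_days":     14,     # at most
}
measured = {
    "monthly_uptime_percent":    99.95,
    "incident_response_minutes": 42,
    "patch_deployment_days":     10,
}

def sla_met(metric, value):
    # Uptime is a floor; the other metrics are ceilings.
    if metric == "monthly_uptime_percent":
        return value >= sla_targets[metric]
    return value <= sla_targets[metric]

for metric, value in measured.items():
    status = "met" if sla_met(metric, value) else "MISSED"
    print(f"{metric}: measured {value}, target {sla_targets[metric]}, {status}")

Because each metric is numeric, both parties can determine objectively whether a given month met the agreement.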

21.Contractual, Legal, Industry Standards, and Regulatory Requirements


Every organization operates under some type of external mandate. This mandate can come in the form of
simple contracts, as part of the organization’s interactions with suppliers and customers; the organization is
compelled to fulfill their contractual obligations. Mandates can also come in the form of governmental
imposition; governments create regulations, either through legislative or administrative means, and
organizations must adhere to the regulations relevant to the industry and manner in which the organization
operates. There are also traditional and cultural mandates, arising in every society; some of these take the form
of standards, which each organization is held to by custom and, in some jurisdictions, by legal precedent and
liability.

Compliance is adherence to a mandate, regardless of the source. Almost every modern organization is required
to demonstrate compliance to the various mandates the organization is subject to. Compliance is used in our
industry as a term that means both the action on the part of the organization to fulfill the mandate and the
tools, processes, and documentation that demonstrate adherence. Many modern mandates address a specific
need: personal privacy. Privacy is the right of a human being to control the manner and extent to which
information about him or her is distributed.

Privacy mandates take all forms: contractual, regulatory, and customary. Organizations are often reviewed to
determine compliance with applicable mandates. Often, the tools, processes, and activities used to perform
compliance reviews are referred to as audits (or auditing).


22.Contractual Mandates
A contract is an agreement between parties that requires them to perform in some way and specifies the terms for
performance. Contracts are an instrumental tool in business because they obligate the organization;
contracts are either explicit or implied in every business transaction. Contracts could be as simple as the exchange
of money for a product, or a complicated, long-term arrangement requiring hundreds of pages of contract
documentation.

An organization enters into a contract voluntarily, and law and custom dictate that every party to a contract
will fulfill the requirements of the contract unless they are unable to do so. The importance of contracts has
been codified in most countries as law, to the extent that any party not fulfilling their contractual obligations
may be forced to do so (or pay recompense) if the other party/parties to the contract seek relief from the
courts.

In many cases, parties to a contract may have the right to review the progress and activity of each other to
ensure the terms of the contract are being met (this is also stipulated in the contract). This may involve
inspection of raw data, a measure of some performance, or audits; these actions may be performed by the
parties to the contract or by external third parties on their behalf.

It is important that all Nafi members who are following the Cyber Security, Offensive Security, IT Audit, IT
Governance, and IT Risk Management tracks attend the following courses offered by Al Nafi:

 ISO 27001 Lead Implementation
 ISO 27017 Lead Implementation
 ISO 27018 Lead Implementation
 ISO 20000 Lead Implementation
 ISO 22301 Lead Implementation
 PCI DSS QSA Training
 GDPR Training

23.Legal Standards
Legal standards are set by courts in decisions that set precedent; that is, the judgments a court has made
previously become the standard of acceptable practice for future behavior. This precedent informs other
courts in making determinations, for instance, of reasonable expectations for parties to a contract—the due
care mentioned earlier in this domain. Organizations use these standards in the formulation of their own
strategy and governance as a means of setting acceptable risk. When a court makes a decision about due care,
organizations that will be subject to similar circumstances make plans according to that standard out of
recognition of liability they might face for noncompliance.

24.Common Privacy Law Tenets


Many privacy laws address similar concepts associated with individual personal data, and these concepts have
become common globally. The candidate should be familiar with these general concepts:
 Notification: The data subject (the individual human related to the personal data in question) should
be notified before any of their personal data is collected or created.
 Participation: The subject should have the option not to take part in the transaction, if the subject
chooses not to share their personal data.
 Scope: Any personal data collected or created should be for a specific purpose; this purpose should be
legal and ethical and be included in the notification aspect of the transaction, as well as inform the
limitation aspect.
 Limitation: Any personal data should only be used for the purpose identified in the scope aspect of the
transaction; any additional use would require repeating the notification and participation aspects.

 Accuracy: Any personal data should be factual and current; data subjects should have a means to
correct/edit any information about the subject in a simple, timely manner.
 Retention: Personal data should not be kept any longer than is necessary for the purpose, or as
required by applicable law.
 Security: Any entity that has possession of personal data is responsible for protecting it.
 Dissemination: Any entity that has possession of personal data should not share it with any other
entity, nor release it, without the express permission of the data subject and in accordance with
applicable law.

25.Cyber Crimes and Data Breaches


The modern IT landscape affords criminals with a host of options for engaging in nefarious activity, including
updated versions of traditional crimes. Criminals may, for instance, conduct age-old activities such as fraud,
theft, blackmail, and extortion but use modern appliances to extend their reach, speed, and efficiency. There
are also new criminal statutes that have created new classes of crimes the security practitioner should be
aware of.

A brief description of some (but certainly not all) possible computer related crimes:

 Malware: In many jurisdictions, governments have made the creation and dissemination of malicious
software a crime.
 Unauthorized access: The modern version of trespassing, the simple act of accessing a
system/network in an unauthorized manner is against the law in many countries.
 Ransomware: A new version of the old crime of extortion; the attacker gains access (often illegally) to
the victim’s data, encrypts it, and offers to sell the victim the encryption keys to recover the data.
Ransomware tools have become so pervasive and effective that, in many cases, even federal law
enforcement entities have advised victims to pay the ransom.
 Theft: Stealing data—or hardware on which data resides—can be a lucrative criminal enterprise.
 Illegal use of resources: In many situations, attackers conduct unauthorized access not to get anything
directly from the victim but to use the victim’s IT assets for the attacker’s benefit. This can take the
form of storage (where the attacker is using the victim’s storage capacity to stash files and data the attacker
has acquired elsewhere), or processing (where the attacker is using the victim’s CPU to conduct
malicious activity such as staging DDoS attacks).
 Fraud: By engaging the victim in some way (often through an appeal to the victim’s greed or
sympathy), the attacker is able to illegally acquire the victim’s money. Common tactics include: the
attacker posing as someone else (often as someone related to the victim, through social media); the
attacker gaining access to the victim’s bank account; the attacker preying on those who are not media-
savvy such as the elderly.
 Data breach notification is another area of law that has become ubiquitous; many countries (and
jurisdictions within countries, such as U.S. states) have created legislation requiring any entity that has
personal data within its possession to notify the subjects of that data if the data is disclosed in any
unauthorized fashion. Any organization that is not in compliance with these laws (that is, any
organization that loses personal data and does not make sufficient notification in a timely manner)
faces severe financial penalties in many jurisdictions. The security practitioner should be aware of all
such applicable laws for every jurisdiction in which their organization operates.

26.Import/Export Controls
The security practitioner should be aware that IT hardware and software is often subject to international trade
restrictions, mainly for national defense purposes. In particular, encryption tools are seen by many
governments as a threat to global stability and rule of law. One such restriction scheme is the Wassenaar
Arrangement, a multilateral export control program involving 41 participating countries; these
countries agree not to distribute (export) certain technologies (including both weapons and, of more concern
to our field, cryptographic tools) to regions where an accumulation of these materials might disturb the local
balance of power between nation-states. Security practitioners employed or operating in either a Wassenaar
signatory country or in a region where import of these materials is controlled by the Arrangement need to be
aware of these prohibitions and understand what encryption tools may or may not be used. Many countries
have their own internal laws governing the import/export of encryption technologies in addition to
international treaties. For instance, Russia and some Baltic States, Myanmar, Brunei, and Mongolia have
outright bans on the import of cryptographic tools.

27.Privacy Terms
Many data privacy laws use a common terminology; the candidate should be familiar with the following terms
and concepts.

 Personally identifiable information (PII): PII, as it is referred to in the industry, is any data about a
human being that could be used to identify that person. The specific elements of what data
constitutes PII differs from jurisdiction to jurisdiction and from law to law. These are some elements
that are considered PII in some jurisdictions and laws:

 Name
 Tax identification number/Social Security number
 Home address
 Mobile telephone number
 Specific computer data (MAC address, IP address of the user’s machine)
 Credit card number
 Bank account number
 Facial photograph

Under some laws, PII is referred to by other terms as was mentioned earlier in this domain: for
instance, medical data in the United States is referred to as electronic protected health information
(ePHI) under HIPAA.

 Data subject: The individual human being that the PII refers to.

 Data owner/data controller: An entity that collects or creates PII. The data owner/controller is legally
responsible for the protection of the PII in their control and liable for any unauthorized release of PII.
Ostensibly, the owner/controller is an organization; the legal entity that legitimately owns the data. In
some cases (in certain jurisdictions, under certain laws), the data owner is a named individual, such as
an officer of the company, who is the nominal data owner. In actual practice, however, we usually
think of the data owner as the managerial person or office that has the most day-to-day use and
control of the data; that is, the department or branch that created/collected the data and which puts
the data into use for the organization.

 Data processor: Any entity, working on behalf or at the behest of the data controller, that processes
PII. Under most PII-related laws, “processing” can include absolutely anything that can be done with
data: creating, storing, sending, computing, compiling, copying, destroying, and so forth. While the
data processor does have to comply with applicable PII law, it is the data owner/controller that
remains legally liable for any unauthorized disclosure of PII even if the processor is proven to be
negligent/malicious.

 Data custodian: The person/role within the organization who usually manages the data on a day-to-
day basis on behalf of the data owner/controller. This is often a database manager or administrator;
other roles that might be considered data custodians could be system administrators or anyone with
privileged access to the system or data set.
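As a practical illustration of the PII elements listed above, the following Python sketch flags which fields in a record would be treated as PII in this hypothetical example and masks them before the record is shared with a party that does not need them. The field names and record contents are invented, and which elements actually count as PII always depends on the applicable jurisdiction and law.

# Field names treated as PII in this hypothetical example (jurisdiction-dependent in practice).
PII_FIELDS = {"name", "tax_id", "home_address", "mobile_number",
              "ip_address", "credit_card_number", "bank_account_number"}

record = {
    "name": "A. Customer",
    "mobile_number": "+00-000-0000000",
    "purchase_total": 149.99,        # not PII on its own in this example
    "ip_address": "203.0.113.7",
}

def mask_pii(rec):
    # Replace PII values with a placeholder before sharing the record with a
    # processor that does not need them; other fields pass through unchanged.
    return {key: ("***REDACTED***" if key in PII_FIELDS else value)
            for key, value in rec.items()}

print(mask_pii(record))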


28.Policy
The written aspect of governance (including security governance) is known as policy. Policies are documents
published and promulgated by senior management dictating and describing the organization’s strategic goals
(“strategic” entails long-term, overarching planning that addresses the whole of the organization; it is possible
to have goals that are not strategic to the organization, such as goals for a specific department, project, or
duration). Security policies are those policies that address the organization’s security goals and might include
such areas as data classification, access management, and so on.

Typically, policies are drafted by subject matter experts, shared among stakeholders for review and comment,
revised, then presented to senior management for final approval and publication. This is especially true for
security policy, which is often a topic of which senior management has little understanding and insight, and it
relies greatly on security practitioners for advice and guidance.

29.Standards
Standards are specific mandates explicitly stating expectations of performance or conformance. Standards can
either come from within the organization (internal) or from external sources such as statutory or administrative
law, case law (court decisions that set precedent), professional organizations, and/or industry groups. Some
standards are detailed and specific; an example might be an industry standard for configuring a certain IT
component or device. Some standards are general and describe a goal, outcome, or process; an example might
be a law that sets a standard declaring, “the data controller is required to use physical access control measures
to prevent unauthorized removal of hardware containing PII.”

Organizations are required to comply with standards to which they subscribe or which are applicable to the
organization; failure to do so can result in prosecution or fines assessed by law enforcement/regulators or can
increase and enhance the organization’s liability.

An example, for demonstration purposes: a retail company has some PII related to its customers, including
their contact information and shopping habits. In the wake of a data breach, investigators determine that the
company was storing data in files that could be accessed with default administrative usernames and
passwords, which is directly contrary to all current industry standards and common security practice. Because
not conforming to the standard demonstrates a form of negligence, in addition to the costs of resolving the
breach, the company may face additional expenses in the form of lawsuits from customers whose data was
exposed and fines from regulators who oversee the protection of personal information. If the company had
taken good faith steps to protect the data in a professional manner (including adherence to best practices and
industry standards), the company would still incur expenses related to resolving the loss but would have
attenuated the liability from the additional costs.

30.Procedures
Procedures are explicit, repeatable activities to accomplish a specific task. Procedures can address one-time or
infrequent actions (such as a disaster recovery checklist) or common, regular occurrences (for instance, daily
review of intrusion detection logs). Like standards, procedures aid the organization by demonstrating due
diligence and avoiding liability. Proper documentation of procedures (in both creating the procedures and in
executing them) and training personnel in how to locate and perform procedures are necessary for the
organization to derive the benefit of procedures.

31.Guidelines
Guidelines are similar to standards in that they describe practices and expectations of activity to best
accomplish tasks and attain goals. However, unlike standards, guidelines are not mandates but rather
recommendations and suggestions. Guidelines may be created internally, for use by the organization, or come from
external sources such as industry participants, vendors, and interested parties.

There is a general hierarchy of importance typically associated with these governance elements; while not
applicable in all cases, usually:

 Policy is at the pinnacle of the hierarchy; the organization’s policy is informed by applicable law(s) and
specifies which standards and guidelines the organization will follow. Senior management dictates
policy, so all activity within the organization should conform to policy.

 Standards are next; the organization’s policies should specify which standards the organization
adheres to, and the organization can be held accountable for not complying with applicable standards.

 Guidelines inform the organization how to conduct activities; while not mandatory, they can be used
to shape and inform policies and procedures, and how to accomplish compliance with standards.

 Procedures are the least powerful of the hierarchy, but they are the most detailed; procedures describe
the actual actions personnel in the organization will take to accomplish their tasks. Even though they
may be considered the bottom of the hierarchy, they are still crucial and can be used for obviating
liability and demonstrating due diligence.

32.Business Continuity Requirements


A detailed breakdown of business continuity planning (BCP) and disaster recovery planning (DRP) will be shared
during the ISO 22301 training to ensure that this domain is thoroughly covered from an examination perspective.
However, the notes below will provide you all a good foundation inshAllah.

There is always a risk that the organization will experience a drastic and dramatic event that threatens the
existence of the organization itself; these events can take the form of natural disaster, civil unrest, international
war, and other major situations. The security practitioner is often called on to address this type of risk and to
plan accordingly.

The actions, processes, and tools for ensuring an organization can continue critical operations during a
contingency are referred to as business continuity (BC). “Critical operations” (sometimes referred to as “critical
path” or “mission critical functions”) are those activities and functions that the organization needs to perform
to stay operational; they are a subset of the overall operation of the organization. For instance, during
contingency operations, an organization might suspend janitorial functions or hiring procedures but might
continue sales and financial activity (depending on the essential needs of the organization).

Disaster recovery (DR) efforts are those tasks and activities required to bring an organization back from
contingency operations and reinstate regular operations. Typically, these functions act in concert; the same
personnel, assets, and (generally) activities will be used to conduct business continuity and disaster recovery
efforts; they are often referred to in conjunction with the term “business continuity and disaster recovery”
(BCDR).

32.1. Develop and Document Scope and Plan

To properly provide the correct assets for dealing with contingency situations, the organization must determine
several essential elements first:

 What is the critical path?
 How long can the organization survive an interruption of that critical path?
 How much data can the organization lose and still remain viable?

We will discuss the critical path determinations in the next section of this module. Here, we'll address the other
two elements.

The maximum allowable downtime (MAD), also referred to as the maximum tolerable downtime (MTD), is the
measure of how long an organization can survive an interruption of critical functions; if the MAD is exceeded,
the organization will no longer be a viable unit.

32.2. Recovery time objective (RTO)

The recovery time objective (RTO) is the target time set for recovering from any interruption—the RTO must
necessarily be less than the MAD.

Senior management must set the RTO, based on their expert knowledge of the needs of the organization, and
all BCDR strategy and plans must support achieving the RTO.

NOTE: The term “recovery” in the context of the RTO is not a return to normal operations, but it is instead a
goal for recovering availability of the critical path. This is a temporary state that the organization will endure
until it is feasible to return to regular status.

32.3. Recovery Point Objective (RPO)

The recovery point objective (RPO) is a measure of how much data the organization can lose before the
organization is no longer viable. The RPO is usually measured not in storage amounts
(gigabytes/terabytes/petabytes) but instead in units of time: minutes, hours, days, depending on the nature of
the organization. Senior management will also set the RPO that will be used along with the RTO to inform BCDR
plans.
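Putting the three measures together, a BCDR plan is only coherent if the RTO is less than the MAD and the backup or replication interval supports the RPO. The Python sketch below is an illustrative consistency check using invented figures, not a prescribed calculation.

# Hypothetical contingency planning figures, all expressed in hours.
mad_hours = 72              # maximum allowable downtime set by senior management
rto_hours = 24              # target time to recover the critical path
rpo_hours = 4               # maximum tolerable data loss, expressed as time
backup_interval_hours = 6   # how often data is currently backed up or replicated

checks = {
    "RTO is less than MAD": rto_hours < mad_hours,
    "Backup interval supports RPO": backup_interval_hours <= rpo_hours,
}

for description, passed in checks.items():
    print(f"{description}: {'OK' if passed else 'NOT MET - revise the BCDR plan'}")

In this example the backups run every six hours while the RPO is four hours, so the second check fails and the plan would need more frequent backups or replication.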

32.4. Business Impact Analysis (BIA)

The BIA is the effort to determine the value of each asset belonging to the organization, as well as the potential
risk of losing assets, the threats likely to affect the organization, and the potential for common threats to be
realized. This is a management process that may or may not involve the security office. However, the BIA will
also be an instrumental tool for the security function as it is usually the security office that is required to craft
and execute the BCDR plan and tasks. Along with determining the value of other assets, the BIA will also reveal
the critical path of the organization; without knowing the critical path, it is impossible to properly plan BCDR
efforts.

There are many ways to conduct a BIA and make asset value determinations. The following is a partial list of
methods that might be used, their benefits, and potential challenges:

Survey: Interview asset owners/data controllers to determine their assessment of the value of the
organization’s property they oversee. This method allows for the people closest to the assets to offer input but
is also subject to inherent bias.

Financial audit: Review the acquisition/purchase documentation to aggregate value data for all assets in the
organization. This offers a thorough review of assets but is prone to variance in actual value because value
changes over time (increasing or decreasing, depending on the type of asset and its purpose/use).

Customer response: Surveys of customers can aid the organization in determining which aspects of the
operation are most valuable to creating goodwill and long-term revenue. However, customers only see a
limited portion of the overall operations and can’t know the source of the value chain.

There are accounting and auditing firms that perform holistic organizational valuation as their business, often
as preparation for the sale/acquisition of the organization by another entity. These consultants have expertise
and knowledge of this process that may offer an advantage over performing the tasks internally.

The BIA should also consider externalities, such as likely threats and the potential for those threats to manifest.
Depending on the nature of the organization’s work, the senior management may want to consider investing in
business intelligence services; these are external consultants that constantly glean information from threat
sources (hacktivist and terror organizations, open source news reporting, government and industry information
feeds, malware management firms, and so on) and customize reports for their clients. The organization may
also want to consider creating its own threat intelligence unit, depending on the size and scope of both the
organization and its potential attackers.



CISSP Domain 2 Asset Security Detailed Notes
Code: CISSPD2ASDN

Version: 3

Date of version: 1/24/2019

Created by: AL Nafi Content Writer

Approved by: Nafi Content reviewers

Confidentiality level: For members only



Change history
Date Version Created by Description of change

2/1/2019 1 Nafi Edu Dept AL Nafi mentors created the first set of notes.

2/10/2019 2 Nafi Edu Dept AL Nafi mentors created the first set of notes.

4/24/2019 3 Nafi Edu Dept AL Nafi mentors created the first set of notes.


Table of contents

1. PURPOSE, SCOPE AND USERS................................................................................................................... 5


2. ASSET SECURITY ....................................................................................................................................... 5

2.1. ASSETS, INFORMATION AND OTHER VALUABLE RESOURCES................................................................................. 6


2.2. IDENTIFICATION/DISCOVERY AND CLASSIFICATION OF ASSETS BASED ON VALUE...................................................... 6
2.3. PROTECTION OF THE VALUE OF ASSETS AND INFORMATION................................................................................. 8
2.4. CLASSIFY BASED ON VALUE........................................................................................................................... 9
2.5. PROTECTION BASED ON CLASSIFICATION ......................................................................................................... 9

3. THE ASSET LIFECYCLE ............................................................................................................................... 9

3.1. THE ASSET LIFECYCLE ................................................................................................................................ 10

4. CLASSIFICATION AND CATEGORIZATION ................................................................................................ 11

4.1. CLASSIFICATION........................................................................................................................................ 11
4.2. CATEGORIZATION ..................................................................................................................................... 11
4.3. DATA CLASSIFICATION AND POLICY .............................................................................................................. 11

5. DATA CLASSIFICATION POLICY ............................................................................................................... 12

6. EXAMPLES OF CLASSIFICATION LEVELS .................................................................................................. 12

6.1. CLASSIFICATION – DONE BY OWNERS ........................................................................................................... 13


6.2. PURPOSE OF ASSET CLASSIFICATION ............................................................................................................. 13
6.3. CLASSIFICATION BENEFITS .......................................................................................................................... 14
6.4. ISSUES RELATED TO CLASSIFICATION ............................................................................................................. 14

7. ASSET PROTECTION AND CLASSIFICATION TERMINOLOGY..................................................................... 14

7.1. DATA OWNERSHIP .................................................................................................................................... 15


7.2. INFORMATION OWNER .............................................................................................................................. 15
7.3. DOCUMENTATION .................................................................................................................................... 16
7.4. DATA CUSTODIANSHIP ............................................................................................................................... 16
7.5. DIFFERENCE BETWEEN DATA OWNER/CONTROLLER AND DATA CUSTODIAN/PROCESSOR ....................................... 16

8. PRIVACY ................................................................................................................................................. 17

8.1. THE UNITED STATES .................................................................................................................................. 17


8.2. EUROPEAN UNION .................................................................................................................................... 18
8.3. ASIA–PACIFIC ECONOMIC COOPERATION (APEC) COUNCIL .............................................................................. 18
8.4. ESSENTIAL REQUIREMENTS IN PRIVACY AND DATA PROTECTION LAWS ................................................................ 18
8.5. ORGANIZATION FOR ECONOMIC COOPERATION AND DEVELOPMENT (OECD) GUIDELINES ON PRIVACY PROTECTION .. 19
8.5.1. OECD Privacy Guidelines ............................................................................................... 20
9. DATA RETENTION................................................................................................................................... 21

9.1. ESTABLISHING INFORMATION GOVERNANCE AND RETENTION POLICIES ............................................................... 21


9.2. EXAMPLES OF DATA RETENTION POLICIES...................................................................................................... 21

10. DATA PROTECTION METHODS ........................................................................................................... 22

10.1. BASELINES ............................................................................................................................................... 22


11. GENERALLY ACCEPTED PRINCIPLES .................................................................................................... 23

12. SCOPING AND TAILORING .................................................................................................................. 24

13. THE CENTER FOR STRATEGIC & INTERNATIONAL STUDIES (CSIS) 20 CRITICAL SECURITY CONTROLS
INITIATIVE ...................................................................................................................................................... 25

13.1. CURRENT LIST OF CRITICAL SECURITY CONTROLS – VERSION 5.1 ....................................................................... 26

14. DATA STATES ..................................................................................................................................... 26

14.1. DATA AT REST .......................................................................................................................................... 26


14.2. DATA IN TRANSIT...................................................................................................................................... 27
14.2.1. Link Encryption .............................................................................................................. 27
14.2.2. End-to-End Encryption .................................................................................................. 27
14.3. DATA IN TRANSIT – DESCRIPTION OF RISK ..................................................................................................... 27

15. MEDIA HANDLING ............................................................................................................................. 28

15.1. MEDIA ................................................................................................................................................... 28


15.2. MARKING ............................................................................................................................................... 28
15.3. HANDLING .............................................................................................................................................. 28
15.4. STORING ................................................................................................................................................. 29
15.5. DESTRUCTION .......................................................................................................................................... 29
15.6. RECORD RETENTION .................................................................................................................................. 29

16. DATA REMANENCE ............................................................................................................................ 29

16.1. CLEARING................................................................................................................................................ 30
16.2. PURGING ................................................................................................................................................ 30
16.3. DESTRUCTION .......................................................................................................................................... 30
16.4. DATA DESTRUCTION METHODS ................................................................................................................... 31


1. Purpose, scope and users


The purpose of this document is to define the CISSP Domain 2 notes for Nafi members only. Please be honest
with yourself and with Al Nafi and do not share this with anyone else. Everyone can join Al Nafi as we are
so economical to begin with.

These notes cover all the key areas of Domain 2 and are good until a new revision of the CISSP syllabus
comes from ISC2. Normally the cycle is around 3 years, so since we had our last revision in June 2018,
the next update to the CISSP syllabus is expected around June 2021.

Please follow this 5-step program if you want to master this CISSP domain and pass the exam
inshAllah.

1. Watch all the CISSP videos on the portal 8-10 times. Soak yourself in Brother Faisal's words and
what he is teaching, and try to ask questions. Think like a security manager who does
everything with due care and due diligence.
2. Read all the presentations and the detailed notes at least 8-10 times. Pay attention to the
additional reading material recommended by Brother Faisal during his videos.
3. Practice all the flash cards multiple times on our website.
4. Practice all the MCQs on our website. (Al Nafi will pay the exam fee of those who score 85% in 10
out of 15 tests [the last 10 sets are counted towards exam payment by Al Nafi] inshAllah.
Please give dawah to 50 people to join Al Nafi if they want to study inshAllah.)
5. Go for the CISSP exam once you are approved by Al Nafi and your examination fee is paid.

2. Asset Security
Asset Security within the context of the second domain of the CISSP® examination deals with the
protection of valuable assets to an organization as those assets go through their lifecycle. Therefore,
it addresses the creation/collection, identification and classification, protection, storage, usage,
maintenance, disposition, retention/archiving, and defensible destruction of assets. To properly
protect valuable assets, such as information, an organization requires the careful and proper
implementation of ownership and classification processes, which can ensure that assets receive the
level of protection based on their value to the organization. The enormous increase in the collection
of personal information by organizations has resulted in a corresponding increase in the importance
of privacy considerations, and privacy protection constitutes an important part of the asset security
domain. Individual privacy protection in the context of asset security includes the concepts of asset
owners and custodians, processors, remanence, and limitations on collection and storage of valuable
assets such as information. This also includes the important issue of retention as it relates to legal
and regulatory requirements to the organization. Appropriate security controls must be chosen to
protect the asset as it goes through its lifecycle, keeping in mind the requirements of each of the
lifecycle phases and the handling requirements throughout. Therefore, understanding and applying
proper baselines, scoping and tailoring, standards selection, and proper controls need to be
understood by the security professional. The asset security domain also addresses asset handling
requirements and includes asset storage, labeling, and defensible destruction.


2.1. Assets, information and Other Valuable Resources

Any item deemed by a company to be valuable can be referred to as an asset. In other words, an
asset is anything that has value to an organization. In many cases, assets are also referred to as
resources. Both words, assets and resources, imply value to an organization and, therefore, must be
protected based on the value that it represents to the organization. Value can be expressed in terms
of quantitative and qualitative methodologies, and both of these valuation methods are used to
determine the level of protection that the assets require. Qualitative asset valuation implies that
value is expressed in terms of numbers, usually monetary value. It is often understood that
expressing value of intangible assets, such as information, is very difficult and, in many cases
impossible, to express in quantitative ways; therefore, value of intangible assets is usually expressed
in terms of qualitative methodologies usually using grades such as “high,” “medium,” “low,” or other
classification that can express the value of assets without using numbers. Understanding the actual
value of assets becomes very important in understanding how to protect those assets because the
value will always dictate the level of security required. It is important for us to understand that
security is not always driven by risk but rather driven by value. In fact, if you think about it, what is
risk anyway? Risk is something that can impact value, and therefore, to fully understand risk requires
the full understanding of the value of the asset first. As we have just covered, an asset is an item of
value to the organization. Value can be expressed in terms of quantitative (numbers/monetary) and
qualitative (grades such as high/medium/low, or top secret/secret/ confidential, etc.). Examples of
valuable assets include, and are not limited to, and in no particular order:

 People
 Information
 Data
 Hardware
 Software
 Systems
 Processes
 Devices
 Functions
 Ideas
 Intellectual property
 Corporate reputation
 Brand
 Identity
 Facilities

The list could include other assets, but the point has been made that any asset is really something
that has value to an organization and requires careful protection based on that value. Therefore,
protection will be dictated by the value. This domain, called Asset Security, deals with the methods
to protect assets based on value.
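To make the difference between quantitative and qualitative valuation concrete, here is a minimal Python sketch (the asset names, monetary figures, and grade thresholds are invented for illustration, not values taken from these notes) showing how an organization might record both kinds of valuation and derive a protection grade from them.

# Illustrative sketch: quantitative vs. qualitative asset valuation.
# The assets, monetary values, and thresholds below are made-up examples.

def grade_from_value(value_usd):
    """Map a quantitative (monetary) value to a qualitative grade."""
    if value_usd >= 1_000_000:
        return "high"
    if value_usd >= 100_000:
        return "medium"
    return "low"

assets = [
    # Tangible asset valued quantitatively (e.g., replacement cost in USD).
    {"name": "Database server", "quantitative_value": 250_000, "qualitative_value": None},
    # Intangible asset valued qualitatively (hard to express in numbers).
    {"name": "Corporate reputation", "quantitative_value": None, "qualitative_value": "high"},
]

for asset in assets:
    grade = asset["qualitative_value"] or grade_from_value(asset["quantitative_value"])
    print(f'{asset["name"]}: protection level = {grade}')

Whichever method is used, the resulting grade is what drives the level of protection the asset receives.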

2.2. Identification/Discovery and Classification of Assets Based on Value

The value of assets will vary significantly, but to properly secure these assets, organizations need to
identify and locate assets that may have value and then classify the assets based on value while
defining how to properly protect each classification type. Assets, such as information, have become
challenging to protect based on value. Organizations today are creating/collecting massive amounts
of data, which makes discovery of this data for inventory purposes very difficult. To properly protect
assets, including information, organizations need to implement a formal asset classification system
supported by proper management support, commitment, and conviction to ensure accountability.
Proper policies need to be created and communicated to the entire organization to create the
culture and set the tone for the effectiveness of the classification initiative. Organizations then need
to understand fully where assets are created/used to establish an effective inventory system that will
drive the classification process.

At this point, once assets have been located and identified, they can be classified by owners based on
value and then protected based on classification. Classification of assets is essential to have proper
controls be implemented to allow organizations to address compliance with relevant laws,
regulations, standards, and policies. The first step in asset protection is to know what assets the
organization has. In other words, an asset inventory is required before the organization can actually
understand what assets they have that may have value.


Once we have an inventory of assets, understanding the value of those assets becomes the next step
as it will drive asset classification, which, in turn, will drive the protection of those assets throughout
their lifecycle. Having a complete inventory that is updated and reflective of
creation/disposition/destruction of assets becomes very important. An updated and meaningful
inventory of assets can then be used by the owners of those assets to determine value and classify
assets based on that value.

The classification system will then determine the protection requirements.

2.3. Protection of the Value of Assets and Information

To better achieve goals and objectives, organizations today are generating massive amounts of
information that obviously will represent organizational value. It is important for organizations to
understand exactly the value that this information represents. Identifying and classifying assets and
information will allow organizations to determine and achieve the protection requirements for the
information.

These are the steps involved to do this properly:


1. Identify and locate assets, including information.
2. Classify based on value.
3. Protect based on classification.

The process of identifying assets that have value to the organization can be very challenging but is nevertheless a requirement. Valuable assets need to be identified in order to protect them accordingly. Assets can take many forms; here are a few examples:

Information assets
 Databases
 Files
 Spreadsheets
 Business continuity plans (BCPs)
 Procedures

Software
 Applications
 Source code
 Object code
 Operating systems
Physical assets
 Hardware
 Media
 Network equipment
 Servers
 Buildings
Processes and services
 Communications
 Data facilities
 Voice systems
 Computing

2.4. Classify Based on Value

The next step in this process is to determine ownership to establish accountability. This may be
easier for physical and tangible assets but the same needs to be done for intangible assets such as
data. The owners are always in the best position to understand the value of what they own;
therefore, it is up to the owners to classify assets. Determining value may not be easy. There are
many factors and elements that need to be looked at to determine the true value of assets. For
instance, we need to think about implications related to impact of disclosure, impact on corporate
reputation, intellectual property, and trade secrets, etc. Regardless, the owner is always in the best
position to truly understand the value of what they own to the organization. The process of
understanding the value of an asset is very appropriately called asset valuation. The value of the
asset will drive its classification level.

2.5. Protection Based on Classification

The next step in the classification process is to protect the assets based on their classification levels.
A good way to achieve this would be to establish minimum security requirements for each of the
classification levels that are being used. We refer to these as baselines. In other words, we can
establish the minimum security baselines for each classification level that exists. Asset classification
drives the security requirements that need to be implemented to protect the assets based on their
value. Once the baselines have been determined, they can be applied to assets as they move through
their lifecycle phases, including phases such as retention and destruction.

3. The Asset Lifecycle


To protect assets properly, one must understand the asset lifecycle and apply protection mechanisms
throughout the phases of the asset lifecycle. The protection will always be based on the value of
those assets at particular points in the lifecycle phases. This implies that the parties accountable and
responsible for the protection of assets must understand and monitor the value of assets as they go
through their lifecycle. Those in the best position to do this are the owners of those assets, or
designates of the owners.


Understanding the data security lifecycle enables the organization to map the different phases in the
data lifecycle against the required controls that are relevant for each phase. The data lifecycle
guidance provides a framework to map relevant use cases for data access, while assisting in the
development and application of appropriate security controls within each lifecycle stage.

3.1. The Asset Lifecycle

To protect assets properly, one must understand the asset lifecycle and apply protection mechanisms throughout the phases of the asset lifecycle. The protection will always be based on the value of those assets at particular points in that lifecycle. There are many other methodologies with more or fewer phases, or the phases might be named differently. Regardless, the point to be made here is that protection is required throughout the phases, and it is always based on the value of the assets at those particular moments in the lifecycle phases. The lifecycle includes six phases from creation to destruction. Although we show it as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages (for example, not all data is eventually destroyed). A short illustrative sketch of these phases follows the list below.

1. Create: This is probably better named Create/Update because it applies to creating or changing a data/content element, not just a document or database. Creation is the generation of new digital content, or the alteration/updating of existing content.
2. Store: Storing is the act of committing the digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
3. Use: Data is viewed, processed, or otherwise used in some sort of activity.
4. Share: Data is exchanged between users, customers, and partners.
5. Archive: Data leaves active use and enters long-term storage.


6. Destroy: Data is permanently destroyed using physical or digital means (e.g., cryptoshredding).
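As a simple illustration of these six phases, and of the point that data does not have to move through them strictly in order, the Python sketch below models the lifecycle as an enumeration and tracks which phase a piece of data is currently in. The document name and field names are assumptions made purely for illustration.

from enum import Enum

class LifecyclePhase(Enum):
    CREATE = 1   # generation or alteration of digital content
    STORE = 2    # committing data to a storage repository
    USE = 3      # data viewed or processed in some activity
    SHARE = 4    # data exchanged with users, customers, partners
    ARCHIVE = 5  # data leaves active use, enters long-term storage
    DESTROY = 6  # data permanently destroyed (e.g., cryptoshredding)

# Data can bounce between phases and may never reach DESTROY.
document = {"name": "design-spec.docx", "phase": LifecyclePhase.CREATE}
document["phase"] = LifecyclePhase.STORE    # stored almost immediately after creation
document["phase"] = LifecyclePhase.USE      # later viewed and edited
document["phase"] = LifecyclePhase.ARCHIVE  # eventually moved to long-term storage

print(document["name"], "is currently in phase:", document["phase"].name)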

4. Classification and Categorization


Most dictionaries define the words classification and categorization as follows. Classification is the act of forming into a class or classes. This can be rephrased as a distribution into groups, or classes, according to common attributes, whereas categorization is the process of sorting or arranging things into classes. Put simply, classification is the system, and categorization is the act of sorting into the classification system.

4.1. Classification

The purpose of a classification system is to ensure protection of the assets based on value in such a
way that only those with an appropriate level of clearance can have access to the assets. Many
organizations will use the terms “confidential,” “proprietary,” or “sensitive” to mark assets. These
markings may limit access to specific individuals, such as board members, or possibly certain sections
of an organization, such as the human resources (HR) area or other key areas of the organization.

4.2. Categorization

Categorization is the process of determining the impact of the loss of confidentiality, integrity, or
availability of the information to an organization. For example, public information on a web page
may be low impact to an organization as it requires only minimal uptime, it does not matter if the
information is changed, and it is globally viewable by the public. However, a startup company may
have a design for a new clean power plant, which if it was lost or altered may cause the company to
go bankrupt, as a competitor may be able to manufacture and implement the design faster. This type
of information would be categorized as “high” impact. Classification and categorization are used to help standardize the protection baselines for information systems and the level of suitability and trust an employee may need to access information. By consolidating data of similar categorization and classification, organizations can realize economies of scale in implementing appropriate security controls. Security controls are then tailored for specific threats and vulnerabilities.
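One common way to express this kind of categorization is to rate the impact of a loss of confidentiality, integrity, and availability separately and then take the highest of the three as the overall category (a “high-water mark” approach, similar in spirit to FIPS 199/FIPS 200 style categorization). The sketch below is a minimal illustration of that idea; the example systems and impact ratings are assumptions.

# Minimal sketch: categorize a system by the impact of losing C, I, or A.
# The systems and impact ratings below are illustrative assumptions.

LEVELS = ["low", "moderate", "high"]

def overall_category(confidentiality, integrity, availability):
    """Take the highest ('high-water mark') of the three impact ratings."""
    return max((confidentiality, integrity, availability), key=LEVELS.index)

public_web_page = overall_category("low", "low", "low")
power_plant_design = overall_category("high", "high", "moderate")

print("Public web page:", public_web_page)        # low
print("Power plant design:", power_plant_design)  # high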

4.3. Data Classification and Policy

Data classification is all about analyzing the data that the organization has, in whatever form,
determining its importance and value and then assigning it to a category or classification level. That
category, or classification level, will determine the security requirements for protection of that
valuable asset. For example, any data that is classified at the highest level, whether contained in a
printed report or stored electronically, needs to be classified so that it can be handled and secured
properly based on its classification. The requirements for classification should be outlined in a
classification policy.


5. Data Classification Policy


When classifying data, determine the following aspects of the policy (a brief illustrative sketch follows at the end of this section):

 Who will have access to the data:

Define the roles of people who can access the data. Examples
include accounting clerks who are allowed to see all accounts payable and receivable but cannot add
new accounts and all employees who are allowed to see the names of other employees (along with
managers’ names and departments, and the names of vendors and contractors working for the
company). However, only HR employees and managers can see the related pay grades, home
addresses, and phone numbers of the entire staff. And only HR managers can see and update
employee information classified as private, including Social Security numbers (SSNs) and insurance
information.

 How the data is secured:


Determine whether the data is generally available or, by default, off limits. In other words, when
defining the roles that are allowed to have access, you also need to define the type of access—view
only or update capabilities—along with the general access policy for the data. As an example, many
companies set access controls to deny database access to everyone except those who are specifically
granted permission to view or update the data.

 How long the data is to be retained:


Many industries require that data be retained for a certain length of time. For example, the financial industry in many countries is subject to specific retention periods. Data owners need to know the
regulatory requirements for their data, and if requirements do not exist, they should base the
retention period on the needs of the business.

 What method(s) should be used to dispose of the data:


For some data classifications, the method of disposal will not matter. But some data is so sensitive
that data owners will want to dispose of printed reports through cross-cut shredding or another
secure method. In addition, they may require employees to use a utility to verify that data has been
removed fully from their PCs after they erase files containing sensitive data to address any possible
data remanence issues or concerns.

 Whether the data needs to be encrypted:


Data owners will have to decide whether their data needs to be encrypted. They typically set this
requirement when they must comply with a law or regulation such as the Payment Card Industry
Data Security Standard (PCI DSS).

 The appropriate use of the data:


This aspect of the policy defines whether data is for use within the company, is restricted for use by
only selected roles, or can be made public to anyone outside the organization. In addition, some data
have associated legal usage definitions. The organization’s policy should spell out any such
restrictions or refer to the legal definitions as required. Proper data classification also helps the
organization comply with pertinent laws and regulations. For example, classifying credit card data as
private can help ensure compliance with the PCI DSS. One of the requirements of this standard is to
encrypt credit card information. Data owners who correctly defined the encryption aspect of their
organization’s data classification policy will require that the data be encrypted according to the
specifications defined in this standard.
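To show how the aspects above might be captured in one place, the sketch below represents a single classification policy entry as a simple Python dictionary. The roles, retention period, and other values are hypothetical examples chosen for illustration; they are not requirements from these notes or from any particular standard.

# Hypothetical example of one entry in a data classification policy.
# Every value below is an illustrative assumption.

credit_card_data_policy = {
    "classification": "Private",
    "who_can_access": {
        "view": ["billing_managers", "fraud_analysts"],
        "update": ["billing_managers"],
    },
    "default_access": "deny",        # off limits unless explicitly granted
    "retention_period_years": 7,     # driven by legal/business requirements
    "disposal_method": "cross-cut shredding / verified secure erase",
    "encryption_required": True,     # e.g., to support PCI DSS obligations
    "appropriate_use": "internal billing and dispute resolution only",
}

def can_view(role, policy):
    """Return True if the role is explicitly granted view access."""
    return role in policy["who_can_access"]["view"]

print(can_view("billing_managers", credit_card_data_policy))   # True
print(can_view("accounting_clerks", credit_card_data_policy))  # False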

6. Examples of Classification Levels


The requirement is that the definition of the classification levels should be clear enough that it is easy for the owners to determine how to classify the data. Anyone else should also be able to easily
understand how to protect the assets based on their classification levels. Also, it makes sense to use
classification levels that truly reflect the value of the particular category.

Here are some examples of classification:


 Top Secret: Data that is defined as being very sensitive, possibly related to privacy, bank
accounts, or credit card information.
 Company Restricted: Data that is restricted to properly authorized employees.
 Company Confidential: Data that can be viewed by many employees but is not for general
use.
 Public: Data that can be viewed or used by employees or the general public.

What is important, however, is that whatever classifications are used, everyone in the organization
must understand the value that each classification used represents, especially the owners who start
the classification process and pass on the requirements to custodians and others.

6.1. Classification – Done by Owners

The individual who owns the data should decide the classification under which the data falls. We call
that person the “owner.” The data owner is best qualified to make this decision because he or she
has the most knowledge about the use of the data and its value to the organization. Data owners
should review their data’s classification on a regular basis to ensure that the data remains correctly
classified and protected based on that classification. As data moves through the data lifecycle, the
owner is still in the best position to monitor value and ensure that the classification level reflects the
data’s true value. If any discrepancies are uncovered during the review, they need to be documented
by the data owner and then reviewed with the proper individuals responsible for the data in question
to establish the following:

 What caused the change in value, was it warranted and under what circumstances, and for
what reason?
 Under whose authority was the change in classification carried out?
 What documentation, if any, exists to substantiate the change in value and, therefore,
classification?

6.2. Purpose of Asset Classification

To summarize, the reason we classify assets, for example through a data classification system, is to afford the assets the level of protection they require based on their value. The whole purpose of data classification is not only to express value but to protect based on the classification level. So, the value of data classification is not only in the classification levels that are used but in the underlying
mechanisms and architectures that provide the levels of protection required by each classification
level. Careful implementation of technologies and support elements for data classification becomes
very important. Support elements, such as education and training, become critical in allowing
classification systems to work properly. In other words, classification is not only just having three or
four classification categories, but having the careful implementation of effective supporting elements
and security controls for each of the classification levels used.


As we have seen, data classification provides a way to protect assets based on value. This allows the
organization to take care of some important and critical needs that can only be addressed through
classification systems.

Some of these may include the following:


 Ensure that assets receive the appropriate level of protection based on the value of the
asset.
 Provide security classifications that will indicate the need and priorities for security
protection.
 Minimize risks of unauthorized information alteration.
 Avoid unauthorized disclosure.
 Maintain competitive edge.
 Protect legal tactics.
 Comply with privacy laws, regulations, and industry standards.

6.3. Classification Benefits

Other than the obvious benefit of protecting assets based on value, there are other potential
benefits that can be realized by an organization in using asset classification systems. Here are some
examples of these benefits:

 Awareness among employees and customers of the organization’s commitment to protect


information.
 Identification of critical information.
 Identification of vulnerability to modification.
 Enable focus on integrity controls.
 Sensitivity to the need to protect valuable information.
 Understanding the value of information.
 Meeting legal requirements.

6.4. Issues Related to Classification

Asset classification may have some other issues that the organization needs to address. The following
may be examples of some of these issues, so in other words, these may include, and are not limited
to:
 Human error.
 Proper classification is dependent on ability and knowledge of the classifier.
 Requires awareness of regulations and customer and business expectations.
 Requires consistent classification method—often the decisions can be somewhat arbitrary.
 Needs clear labeling of all classified items.
 Must include manner for declassifying and destroying material in classification process.

7. Asset Protection and Classification Terminology


In organizations, responsibilities for asset management, including data, have become increasingly
divided among several roles. Asset management and data management need to include
accountabilities and responsibilities for protection of assets based on classification. There are key
roles that are identified in many laws and regulations that dictate certain accountabilities and
responsibilities that organizations need to assign. This is especially true of privacy laws that exist

around the world, especially in very privacy-aware areas such as Europe. Laws for the protection of
privacy have been enacted worldwide. Regardless of the jurisdiction, privacy laws tend to converge
around the principle of allowing the individual to have control over their personal information,
including how it is protected while it is being collected, processed, and stored by organizations. For
organizations to protect the individual’s personal information according to compliance requirements,
they must assign accountability and responsibility properly. Compliance requirements will treat
personal information as data that requires protection at every step of its lifecycle, from collection, to
processing, to storage, to archiving, and to destruction.

Protection of data requires that roles, accountabilities, and responsibilities be clearly identified and defined (a short illustrative sketch follows the list):
 Data subject: The individual who is the subject of personal data.
 Data owner: Accountable for determining the value of the data that they own and,
therefore, also accountable for the protection of the data. Data owners also are accountable
for defining policies for access of the data and clearly defining and communicating the
responsibilities for such protection to other entities including stewards, custodians, and
processors.
 Data controller: In the absence of a “true” owner, especially for personal information that
has been collected by organizations belonging to clients and customers, the data controller is
assigned the accountability for protecting the value of the information based on proper
implementation of controls. The controller, either alone or jointly with others, determines
the purposes for which and the manner in which any personal data is to be processed and,
therefore, protected.
 Data steward: Data stewards are commonly responsible for data content, context, and
associated business rules within the organization.
 Data processor: Data processors are the entities that process the data on behalf of the data
controller; therefore, they may be given the responsibility to protect the data, although the accountability always remains with the controller.
 Data custodian: Data custodians are responsible for the protection of the data while in their
custody. That means safe custody, transport, storage, and processing of the data, as well as understanding of and compliance with the policies regarding the protection of the data.

7.1. Data Ownership

Data management and protection involves many aspects of technology, but it also requires involved
parties to clearly understand their roles and responsibilities. The objectives of delineating data
management roles and responsibilities are to:
 Clearly define roles associated with functions.
 Establish data ownership throughout all phases of a project.
 Instill data accountability.
 Ensure that adequate, agreed-upon data quality and metadata metrics are maintained on a
continuous basis.

7.2. Information Owner

When information is collected or created, someone in the organization needs to be clearly made
accountable for it. We refer to this entity as the “owner.” Often, this is the individual or group that
created, purchased, or acquired the information to allow the organization to achieve its mission and
goals. This individual or group is considered and referred to as the “information owner.”


The information owner, therefore, is in the best position to clearly understand the value, either
quantitative or qualitative, of the information. The owner is also accountable for protecting the
information based on that value. To determine the correct value, the owner, therefore, has the
following accountabilities:
 Determine the impact the information has on the mission of the organization.
 Understand the replacement cost of the information (if it can be replaced).
 Determine which laws and regulations, including privacy laws, may dictate liabilities and
accountabilities related to the information.
 Determine who in the organization or outside of it has a need for the information and under
what circumstances the information should be released.
 Know when the information is inaccurate or no longer needed and should be destroyed.

7.3. Documentation

It is very important for data owners to establish and document certain expectations that need to be
passed on to others, such as custodians, as they relate to the data that they own. For
instance, these may be examples of documentation:
 The ownership, intellectual property rights, and copyright of their data.
 The obligations relevant to ensure the data is compliant with compliance requirements.
 The policies for protection of the data, including baselines and access controls.
 The expectations for protection and responsibilities delegated to custodians and others
accessing the data.

7.4. Data Custodianship

Data custodians, as the word implies, have custody of assets that don’t belong to them, usually for a
certain period of time. Those assets belong to owners somewhere else, but the custodians have
“custody” of those assets as they may be required for access, decisions, supporting goals, and
objectives, etc. Custodians have the very important responsibility to protect the information while
it’s in their custody, according to expectations by the owners as set out in policies, standards,
procedures, baselines, and guidelines. It will be up to the security function to ensure that the
custodians are supported and advised and have the proper skills, tools, and architectures, etc. to be
able to properly protect assets, such as information, while in their custody. How these aspects are
addressed and managed should be in accordance with the defined data policies applicable to the
data, as well as any other applicable data stewardship specifications. Typical responsibilities of a data
custodian may include the following:

 Adherence to appropriate and relevant data policies, standards, procedures, baselines, and
guidelines as set out by owners and supported by the security function.
 Ensuring accessibility to appropriate users, maintaining appropriate levels of data security.
 Fundamental data maintenance, including but not limited to data storage and archiving.
 Data documentation, including updates to documentation.
 Assurance of quality and validation of any additions to data, including supporting periodic
audits to assure ongoing data integrity.

7.5. Difference Between Data Owner/Controller and Data Custodian/Processor

The difference between the data owner and the data custodian is that the owner is accountable for
the protection of what they own based on the value of that asset to the organization. In an
environment where a controller is required as part of compliance needs, the controller will act as the
owner and, therefore, becomes accountable for the protection based on expectations related to
legislation and regulations and enforced through policy and the implementation of those policies as
standards, procedures, baselines, and guidelines.

In other words:

Owners/Controllers:
Accountable for the protection of data based on relevant national or community laws or regulations.
The natural or legal person, public authority, agency, or any other body that alone or jointly with
others determines the purposes and means of the processing of personal data; where the purposes
and means of processing are determined by national or community laws or regulations, the
controller or the specific criteria for his nomination may be designated by national or community
law.

Custodians/Processors:
The processor processes data on behalf of the owners (for example, a cloud provider). Processors are therefore responsible for adhering to policies, standards, procedures, baselines, and guidelines to ensure protection of the data while it is in their custody.

8. Privacy
The global economy has undergone, and is still undergoing, an information explosion. There has been massive
growth in the complexity and volume of global information exchange and in general, information
collection, processing, and storing. There is much more information and data that is available to
everyone. Personal data is now very sensitive, and its protection and privacy have become important
factors that organizations face as part of compliance requirements. The organization needs to
protect the privacy of information as it is being collected, used, processed, stored, and archived by
authorized individuals in the workplace. The following is an overview of some of the ways in which
different countries and regions around the world are addressing the various legal and regulatory
issues they face.

8.1. The United States

The United States has many sector-specific privacy and data security laws, both at the federal and
state levels. There is no official national privacy data protection law or authority that governs privacy
protection. In fact, privacy in the United States is said to be a “sectorial” concern. For example, the
Federal Trade Commission (FTC) has jurisdiction over most commercial entities and, therefore, has
the authority to issue and enforce privacy regulations in specific areas. In addition to the FTC, there
are other industry specific regulators, particularly those in the healthcare and financial services
sectors, that have authority to issue and enforce privacy regulations. Generally, the processing of
personal data is subject to “opt out” consent from the data subject, while the “opt in” rule applies in
special cases such as the processing of sensitive and valuable health information. With regard to the
accessibility of data stored within organizations, it is important to underline that the Fourth
Amendment to the U.S. Constitution applies; it protects people from unreasonable searches and
seizures by the government. The Fourth Amendment, however, is not a guarantee against all
searches and seizures but only those that are deemed unreasonable under the law. Whether a
particular type of search is considered reasonable in the eyes of the law is determined by balancing
two important interests, the intrusion on an individual’s Fourth Amendment rights and the legitimate
government interests such as public safety. In 2012, the US government unveiled a “Consumer
Privacy Bill of Rights” as part of a comprehensive blueprint to protect individual privacy rights and

give users more control over how their information is handled by organizations that are collecting
such information.

8.2. European Union

The data protection and privacy laws in the European Union (EU) member states are constrained by
the EU directives, regulations, and decisions enacted by the EU. The main piece of legislation is the
EU Directive 95/46/EC “on the protection of individuals with regard to the processing of personal
data and on the free movement of such data.” These provisions apply to all businesses and, therefore,
cover the processing of personal data in organizations. There is also the EU Directive 2002/58/EC (the
ePrivacy Directive) “concerning the processing of personal data and the protection of privacy in the
electronic communications sector.” This directive contains provisions that deal with data breaches
and the use of cookies. Latin American, North African, and medium-size Asian countries have privacy
and data protection legislation largely influenced by the EU privacy laws and, in fact, those EU privacy
laws may have been used as models for specific legislation.

8.3. Asia–Pacific Economic Cooperation (APEC) Council

The Asia–Pacific Economic Cooperation (APEC) council has become a point of reference for data protection and privacy regulations in the region. The APEC countries have endorsed the APEC privacy
framework, recognizing the importance of the development of effective privacy protections that
avoid barriers to information flows and ensure continued trade and economic growth in the APEC
region. The APEC privacy framework promotes a flexible approach to information privacy protection
across APEC member economies, while avoiding the creation of unnecessary barriers to information
flows.

8.4. Essential Requirements in Privacy and Data Protection Laws

The ultimate goal of privacy and data protection laws is to protect individuals, referred to as data subjects, with respect to the collection, storage, usage, and destruction of their personal data. This is achieved by defining requirements to be fulfilled by
the operators involved in the data processing. These operators can process the data, playing the role
of data controllers or data processors; in other words, controllers end up having accountability for
protection, and processors end up having responsibility for protection.

One such example is the Data Protection Act (DPA) in the UK. According to the Information
Commissioner’s Office (ICO) of the UK, which is an independent organization devoted to upholding information rights in the public interest, promoting openness by public bodies, and committed to data privacy for individuals, the Data Protection Act sets out rights for individuals regarding their
personal information. Personal data is defined as information pertaining to an identifiable living
individual. The DPA mandates that whenever personal data is processed, collected, recorded, stored, or disposed of, it must be done within the terms of the Act. The Information
Commissioner’s Office (ICO) helps organizations understand their compliance requirements and find
out about their obligations and how to comply, including protecting personal information. As such, the ICO advises on how to comply with the DPA by providing any organization that handles personal information about individuals with a framework that guides how to meet the obligations under the DPA.
The framework guides those who have day-to-day responsibility for data protection. It is split into
eight data protection principles, and the guide explains the purpose and effect of each principle,

gives practical examples, and answers frequently asked questions. The data protection principles are
as follows, taken directly from the ICO website:

1. Personal data shall be processed fairly and lawfully and, in particular, shall not be
processed unless – (a) at least one of the conditions in Schedule 2 is met, and (b) in the
case of sensitive personal data, at least one of the conditions in Schedule 3 is also met.
2. Personal data shall be obtained only for one or more specified and lawful purposes, and
shall not be further processed in any manner incompatible with that purpose or those
purposes.
3. Personal data shall be adequate, relevant and not excessive in relation to the purpose or
purposes for which they are processed.
4. Personal data shall be accurate and, where necessary, kept up to date.
5. Personal data processed for any purpose or purposes shall not be kept for longer than is
necessary for that purpose or those purposes.
6. Personal data shall be processed in accordance with the rights of data subjects under this
Act.
7. Appropriate technical and organizational measures shall be taken against unauthorized
or unlawful processing of personal data and against accidental loss or destruction of, or
damage to, personal data.
8. Personal data shall not be transferred to a country or territory outside the European
Economic Area unless that country or territory ensures an adequate level of protection
for the rights and freedoms of data subjects in relation to the processing of personal
data.

8.5. Organization for Economic Cooperation and Development (OECD) Guidelines on Privacy Protection

With the proliferation of technology and the increasing awareness that most of our personally
identifiable information (PII) is stored online or electronically in some way and being collected,
stored, and used by organizations, there is a need to protect personal information. That expectation
today is in most cases dictated by privacy laws and regulations. There is an organization that has
been devoted to helping governments and organizations around the world in dealing with issues that
focus on improving the economic and social well-being of people around the world. That
organization is the OECD. The following is taken directly from the OECD website (www.oecd.org); it
describes what the focus and initiatives of the OECD are. The OECD provides a forum in which
governments can work together to share experiences and seek solutions to common problems. We
work with governments to understand what drives economic, social, and environmental change. We
measure productivity and global flows of trade and investment. We analyze and compare data to
predict future trends. We set international standards on a wide range of things, from agriculture and
tax to the safety of chemicals. We also look at issues that directly affect everyone’s daily life, like how
much people pay in taxes and social security and how much leisure time they can take. We compare
how different countries’ school systems are readying their young people for modern life and how
different countries’ pension systems will look after their citizens in old age. In the many decades that
the OECD has existed, it has played an important role in promoting respect for privacy as a
fundamental value and a condition for the free flow of personal data across borders. A perfect
example of this is what the OECD has published as the ‘OECD Privacy Guidelines.’ These guidelines
can act as a framework that organizations can use in order to understand and address the
requirements of privacy protection. They can provide comprehensive guidance on what

organizations need to implement in terms of security controls to address the requirements of the
privacy principles.

8.5.1. OECD Privacy Guidelines

The OECD has broadly classified these principles as collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability.

The guidelines are as follows:

1. Collection Limitation Principle: There should be limits to the collection of personal data, and
any such data should be obtained by lawful and fair means and, where appropriate, with the
knowledge or consent of the data subject.

2. Data Quality Principle: Personal data should be relevant to the purposes for which they are
to be used, and, to the extent necessary for those purposes, should be accurate, complete,
and kept up-to-date.

3. Purpose Specification Principle: The purposes for which personal data are collected should
be specified not later than at the time of data collection and the subsequent use limited to
the fulfilment of those purposes or such others as are not incompatible with those purposes
and as are specified on each occasion of change of purpose.

4. Use Limitation Principle: Personal data should not be disclosed, made available or otherwise
used for purposes other than those specified except with the consent of the data subject; or
by the authority of law.

5. Security Safeguards Principle: Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorized access, destruction, use, modification or disclosure of data.

6. Openness Principle: There should be a general policy of openness about developments, practices and policies with respect to personal data. Means should be readily available of establishing the existence and nature of personal data, and the main purposes of their use, as well as the identity and usual residence of the data controller.

7. Individual Participation Principle: An individual should have the right to a) obtain from a
data controller, or otherwise, confirmation of whether or not the data controller has data
relating to him; b) to have communicated to him, data relating to him within a reasonable
time; c) at a charge, if any, that is not excessive; d) in a reasonable manner; and in a form
that is readily intelligible to him; e) to be given reasons if a request is denied, and to be able
to challenge such denial; and f) to challenge data relating to him and, if the challenge is
successful to have the data erased, rectified, completed or amended.

8. Accountability Principle: A data controller should be accountable for complying with measures which give effect to the principles stated above.


9. Data Retention
Data retention, which is sometimes also referred to as records retention, is defined as the continued
and long-term storage of valuable assets driven by compliance requirements or corporate
requirements. Companies are required to comply with legal and regulatory requirements when retaining assets, especially information and records. Each company should have those requirements clearly
addressed and expressed in a retention policy that usually is accompanied by a retention schedule.
This will then provide the basis for how long to keep data and assets around and also when they
should be securely destroyed.
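As a simple illustration of how a retention policy and its accompanying schedule can be put into practice, the sketch below maps record types to retention periods and flags records that have exceeded their retention period and are therefore candidates for defensible destruction. The record types and periods are assumptions for illustration only, not recommended values.

from datetime import date, timedelta

# Hypothetical retention schedule: record type -> retention period in years.
RETENTION_SCHEDULE_YEARS = {
    "financial_records": 7,
    "employee_records": 5,
    "marketing_material": 2,
}

def past_retention(record_type, created_on, today=None):
    """Return True if the record has exceeded its retention period."""
    today = today or date.today()
    years = RETENTION_SCHEDULE_YEARS[record_type]
    return today > created_on + timedelta(days=365 * years)

# A financial record created in 2010 is now a candidate for secure destruction.
print(past_retention("financial_records", date(2010, 1, 15)))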

9.1. Establishing Information Governance and Retention Policies

To understand retention requirements, we need to understand the various types of assets, such as
data and records that may have retention needs. As part of proper asset governance, the
establishment of effective asset archiving and retention policies needs to be done. These are the
issues and factors to consider:

 Understand where the data exists: The enterprise cannot properly retain and archive data
unless knowledge of where data resides and how different pieces of information relate to
one another across the enterprise is available and known.

 Classify and define data: Define what data needs to be archived and for how long, based on
business and retention needs that are driven by laws, regulations, and corporate
requirements related to goals and objectives.

 Archive and manage data: Once data is defined and classified, the archiving of that data
needs to be done appropriately, based on business access needs. Manage that archival data
in a way that supports the defined data retention policies but at the same time allows
authorized and timely access.

9.2. Examples of Data Retention Policies

Some examples of retention policies, which you should look up and read online, are as follows:

1. European Document Retention Guide 2013: A Comparative View Across 15 Countries To Help You Better Understand Legal Requirements And Records Management Best Practices (Iron Mountain, January 2013)
2. State of Florida Electronic Records and Records Management Practices, November 2010
3. The Employment Practices Code, Information Commissioner’s Office, UK, November 2011
4. Wesleyan University, Information Technology Services Policy Regarding Data Retention for
ITS-Owned Systems, September 2013
5. Visteon Corporation, International Data Protection Policy, April 2013
6. Texas State Records Retention Schedule (Revised 4th edition), effective July 4, 2012


10.Data Protection Methods


10.1. Baselines

A baseline is a minimum level of protection that can be used as a reference point. As a reference
point, baselines can therefore be used as a comparison for assessments and requirements to ensure
that those minimum levels of security controls are always being achieved. Baselines can also provide
a way to ensure updates to technology and architectures are subjected to the minimum understood
levels of security requirements. As part of what security does, once controls are in place to mitigate risks, the baseline can be referenced, after which all further comparisons and development are measured against it. Specifically, when protecting assets, baselines can be particularly helpful in
achieving protection of those assets based on value. Remember, if we have classified assets based on
value, as long as we come up with meaningful baselines for each of the classification levels, we can
conform to the minimum levels required. In other words, let’s say that we are using classifications
such as HIGH, MEDIUM, and LOW.

Baselines could be developed for each of our classifications to provide that minimum level of security required for each. For example, we could establish baselines as follows, keeping in mind that these examples may not be complete; they are just meant to show the concept of how baselines can provide that reference point for minimum levels of security (a short illustrative sketch of such a mapping follows the list):

HIGH:
 Access
o Strong passwords
o Asset owner approved request, review, and termination process
o Non-disclosure agreement
 Encryption
o 128 bit symmetric encryption for creation, storage, and transmission
 Labelling
o Watermark
 Monitoring
o Real-time

MEDIUM:
 Access
o passwords
o Asset owner approved request, review, and termination process
 Encryption
o 128 bit symmetric encryption for transmission
 Labeling
o None
 Monitoring
o Timely

LOW:
 Access
o Asset owner approved request, review, and termination process
 Encryption
o None

 Labelling
o None
 Monitoring
o None
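
The short sketch below expresses the example baselines above as a simple lookup table, so that the minimum controls for an asset can be retrieved from its classification level. It only illustrates the concept; the entries mirror the illustrative list above and are not a complete baseline.

# Illustrative mapping of classification level -> minimum security baseline,
# mirroring the example HIGH/MEDIUM/LOW list above (not a complete baseline).

BASELINES = {
    "HIGH": {
        "access": ["strong passwords", "owner-approved request/review/termination", "NDA"],
        "encryption": "128 bit symmetric for creation, storage, and transmission",
        "labelling": "watermark",
        "monitoring": "real-time",
    },
    "MEDIUM": {
        "access": ["passwords", "owner-approved request/review/termination"],
        "encryption": "128 bit symmetric for transmission",
        "labelling": None,
        "monitoring": "timely",
    },
    "LOW": {
        "access": ["owner-approved request/review/termination"],
        "encryption": None,
        "labelling": None,
        "monitoring": None,
    },
}

def minimum_controls(classification):
    """Return the minimum baseline controls for a classification level."""
    return BASELINES[classification]

print(minimum_controls("MEDIUM")["encryption"])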

Baselines can be technology and architecture related and specific to certain types of systems. For
example, an organization may dictate what the minimum levels of security requirements need to be
for a Windows machine before it can be connected to the corporate network. Baselines can also be
non-technology related, such as an organization requiring all employees to display their identification
badges while in certain areas of the organization, or requiring that any visitors be escorted in valuable areas of the organization. While these types of controls can be mandated and, therefore,
be considered to be policies, they can also establish the minimum levels of security required as part
of the security program and, therefore, create a baseline of protection.

As a summary:

1. A baseline is a consistent reference point.

2. Baselines provide a definition of the minimum level of protection that is required to protect
valuable assets.

3. Baselines can be defined as configurations for various architectures, which will indicate the
necessary settings and the level of protection that is required to protect that architecture.

In the ISO 27001 and PCI DSS courses taught by Al Nafi, we will cover various baseline configuration guides and models in detail.

In the meantime, you can read the following to further understand how various configuration guides are created and updated based on best practices.

 United States Government Configuration Baseline (USGCB)


 Estonian Information System’s Authority IT Baseline Security System ISKE

11.Generally Accepted Principles


This section introduces some generally accepted principles that address information security from a
very high-level viewpoint that again can provide comprehensive guidance to organizations. These
principles are fundamental in nature and rarely change over time, regardless of technology focus.
They are NOT stated here as security requirements but are provided as useful guiding references for
developing, implementing, and understanding security policies and baselines for use in any
organization, regardless of industry or focus. The principles listed below are by no means exhaustive
and only meant to be examples:

Information System Security Objectives: Information system security objectives or goals are
described in terms of three overall objectives: confidentiality, integrity, and availability. Security
policies, baselines, and measures are developed and implemented according to these objectives.

Prevent, Detect, Respond, and Recover: Information security is a combination of preventive, detective, response, and recovery measures. Preventive measures are for avoiding or deterring the
occurrence of an undesirable event. Detective measures are for identifying the occurrence of an
undesirable event. Response measures refer to coordinated response to contain damage when an
undesirable event (or incident) occurs. Recovery measures are for restoring the confidentiality,
integrity, and availability of information systems to their expected state.

Protection of Information While Being Processed, in Transit, and in Storage: Security measures
should be considered and implemented as appropriate to preserve the confidentiality, integrity, and
availability of information while it is being processed, in transit, and in storage.

External Systems Are Assumed to Be Insecure: In general, an external system or entity that is not
under your direct control should be considered insecure. Additional security measures are required
when your information assets or information systems are located in, or interfacing with, external
systems. Information systems infrastructure could be partitioned using either physical or logical
means to segregate environments with different risk levels.

Resilience for Critical Information Systems: All critical information systems need to be resilient to
withstand major disruptive events, with measures in place to detect disruption, minimize damage,
and rapidly respond and recover.

Auditability and Accountability: Security requires auditability and accountability. Auditability refers
to the ability to verify the activities in an information system. Evidence used for verification can take
the form of audit trails, system logs, alarms, or other notifications. Accountability refers to the ability
to audit the actions of all parties and processes that interact with information systems. Roles and
responsibilities should be clearly defined, identified, and authorized at a level commensurate with
the sensitivity of information.

12.Scoping and Tailoring


Scoping can be defined as limiting the general baseline recommendations by removing those that do
not apply. We “scope” to ensure the baseline control applies to the environment as best as it can.
Tailoring is defined as altering baseline control recommendations to apply more specifically. This means we “tailor” to make sure controls apply as required, typically to the specific technology or environment. To scope and tailor, a thorough understanding of the environment and risks is
necessary.

Scoping guidance provides an enterprise with specific terms and conditions on the applicability and
implementation of individual security controls. Several considerations can potentially impact how
baseline security controls are applied by the enterprise. System security plans should clearly identify
which security controls employed scoping guidance and include a description of the type of
considerations that were made. The application of scoping guidance must be reviewed and approved
by the authorizing official for the information system in question.

Tailoring involves scoping the assessment procedures to more closely match the characteristics of
the information system and its environment of operation. The tailoring process gives enterprises the
flexibility needed to avoid assessment approaches that are unnecessarily complex or costly while
simultaneously meeting the assessment requirements established by applying the fundamental
concepts of a risk management framework. Supplementation involves adding assessment procedures
or assessment details to adequately meet the risk management needs of the organization (e.g.,
adding organization-specific details such as system/platform-specific information for selected
security controls). Supplementation decisions are left to the discretion of the organization to

maximize flexibility in developing security assessment plans when applying the results of risk
assessments in determining the extent, rigor, and level of intensity of the assessments.
Be aware of the value that scoping, tailoring, and supplementation can bring to the security
architectures being planned and assessed for the enterprise. The use of scoping and tailoring to
properly narrow the focus of the architecture will ensure that the appropriate risks are identified and
addressed based on requirements. The use of supplementation will allow the architecture to stay
flexible over time and grow to address the needs of the enterprise that arise during operation of the
architecture once it is implemented fully and as time goes on.
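As a minimal illustration of scoping, tailoring, and supplementation working together, the sketch below starts from a generic baseline control set, removes a control that does not apply to the environment (scoping), adjusts a parameter to fit the specific system (tailoring), and adds an organization-specific control (supplementation). The control names and parameters are invented for illustration.

# Illustrative sketch of scoping, tailoring, and supplementation.
# All control names and parameters are invented examples.

baseline = {
    "wireless-access-control": {"applies": True},
    "media-encryption": {"applies": True, "key_length_bits": 128},
    "mainframe-hardening": {"applies": True},
}

# Scoping: remove baseline controls that do not apply to this environment
# (e.g., the organization operates no mainframes).
scoped = {name: ctrl for name, ctrl in baseline.items() if name != "mainframe-hardening"}

# Tailoring: alter a control recommendation to fit the specific technology.
scoped["media-encryption"]["key_length_bits"] = 256

# Supplementation: add an organization-specific assessment detail.
scoped["visitor-escort-policy"] = {"applies": True, "areas": ["data centre"]}

for name, ctrl in scoped.items():
    print(name, ctrl)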

Various frameworks are outside the scope of the exam, but Al Nafi will be covering those frameworks in a separate class before the final exam.

13.The Center for Strategic & International Studies (CSIS) 20 Critical Security Controls Initiative
Understanding the scope of the security needs to be addressed, as well as the business requirements to be supported and the resources available to accomplish the tasks at hand, is all part of the formula for success that you must learn to master. The Center for Strategic &
International Studies (CSIS) 20 Critical Security Controls initiative provides a unified list of 20 critical
controls that have been identified through a consensus of federal and private industry security
professionals as the most critical security issues seen in the industry. The CSIS team includes officials
from the NSA, US Cert, DoD JTF-GNO, the Department of Energy Nuclear Laboratories, Department
of State, DoD Cyber Crime Center, and the commercial sector. The CSIS controls do not introduce any
new security requirements, but they organize the requirements into a simplified list to aid in
determining compliance and ensure that the most important areas of concern are addressed.
In 2013, the stewardship and sustainment of the Controls was transferred to the Council on Cyber
Security (the Council), an independent, global, non-profit entity committed to a secure and open
internet. The CSIS initiative is designed to help the federal government prioritize resources and
consolidate efforts to reduce costs and ensure that the critical security issues are addressed. The five
“critical tenets” of the CSIS initiative, as listed on the SANS website, are as follows:

Offense Informs Defense: Use knowledge of actual attacks that have compromised systems to
provide the foundation to build effective, practical defenses. Include only those controls that can be
shown to stop known real-world attacks.

Prioritization: Invest first in controls that will provide the greatest risk reduction and protection
against the most dangerous threat actors and that can be feasibly implemented in your computing
environment.

Metrics: Establish common metrics to provide a shared language for executives, IT specialists,
auditors, and security officials to measure the effectiveness of security measures within an
organization so that required adjustments can be identified and implemented quickly.

Continuous Monitoring: Carry out continuous monitoring to test and validate the effectiveness of
current security measures.

Automation: Automate defenses so that organizations can achieve reliable, scalable, and continuous
measurements of their adherence to the controls and related metrics.


13.1. Current List of Critical Security Controls – Version 5.1

The current list of Critical Security Controls—Version 5.1 are as follows:

 Inventory of Authorized and Unauthorized Devices
 Inventory of Authorized and Unauthorized Software
 Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations,
and Servers
 Continuous Vulnerability Assessment and Remediation
 Malware Defenses
 Application Software Security
 Wireless Access Control
 Data Recovery Capability
 Security Skills Assessment and Appropriate Training to Fill Gaps
 Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
 Limitation and Control of Network Ports, Protocols, and Services
 Controlled Use of Administrative Privileges
 Boundary Defense
 Maintenance, Monitoring, and Analysis of Audit Logs
 Controlled Access Based on the Need to Know
 Account Monitoring and Control
 Data Protection
 Incident Response and Management
 Secure Network Engineering
 Penetration Tests and Red Team Exercises

14.Data States
It is typically agreed upon that data and information can be in three basic states: data at rest, data in
motion (transit), and data in use. Understanding these three states and how information and data
can be represented in each of the states can allow an organization to apply the security measures
that are appropriate for its protection.

1. Data at Rest: data stored on media in any type of form. It is at rest because it is not being
transmitted or processed in any way.

2. Data in Motion: data that is currently traveling, typically across a network. It is in motion
because it is moving.

3. Data in Use: data that is being processed by applications or processes. It is in use because it is
data that is currently in the process of being generated, updated, appended, or erased. It
might also be in the process of being viewed by users accessing it through various endpoints
or applications.

14.1. Data at Rest

The protection of stored data is often a key requirement for a company’s sensitive information. Databases, backup information, off-site storage, password files, and many other types of sensitive information need to be protected from unauthorized disclosure, undetected alteration, and loss of availability. Much of this can be done through the use of cryptographic algorithms that limit access to the data to those
that hold the proper encryption (and decryption) keys. Some modern cryptographic tools also permit
the condensing, or compressing, of messages, saving both transmission and storage space, making
them very efficient.

Data at Rest – Description of Risk


Malicious users may gain unauthorized physical or logical access to a device, transfer information
from the device to an attacker’s system, and perform other actions that jeopardize the
confidentiality of the information on a device.

Data at Rest – Recommendations


Removable media and mobile devices must be properly encrypted when used to store valuable data. Mobile devices include laptops, tablets, wearable tech, and smartphones. Proper access controls and redundancy controls also need to be applied to protect data at rest.
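
To make this concrete, the short Python sketch below encrypts a file before it is stored, so the at-rest copy is unreadable without the key. It is only an illustration: it assumes the third-party cryptography package is installed, and the file name backup.db is a hypothetical example.

    # Minimal sketch: encrypting a file at rest with a symmetric key.
    # Assumes the third-party "cryptography" package; "backup.db" is a hypothetical file name.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # store the key in a protected location (key vault, HSM, TPM)
    cipher = Fernet(key)

    with open("backup.db", "rb") as f:
        plaintext = f.read()

    with open("backup.db.enc", "wb") as f:
        f.write(cipher.encrypt(plaintext))   # only holders of the key can read the stored copy

Note that encrypting the data simply moves the protection problem to the key, so access controls on the key itself remain essential.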

14.2. Data in Transit

Data that moves, usually across networks, is said to be data in motion, or in transit. One of the primary needs of organizations today is to move data and information across various types of media while preventing the contents of the message from being revealed even if the message itself is intercepted in transit. Whether the message is sent manually, over a voice network, via the internet, or over any other network, including wireless networks, modern cryptography can provide secure and confidential methods to transmit data and allows verification of the integrity of the message so that any changes to the message itself can be detected. Recent advances in quantum cryptography have shown that the “viewing” of a message can be detected while in transit.

14.2.1. Link Encryption

Data are encrypted on a network using either link or end-to-end encryption. In general, link
encryption is performed by service providers, such as a data communications provider on a Frame
Relay network. Link encryption encrypts all of the data along a communications path (e.g., a satellite
link, telephone circuit, or T-1 line). Because link encryption also encrypts routing data,
communications nodes need to decrypt the data to continue routing. The data packet is decrypted
and re-encrypted at each point in the communications channel. It is theoretically possible that an
attacker compromising a node in the network may see the message in the clear. Because link
encryption also encrypts the routing information, it provides traffic confidentiality better than end-
to-end encryption. Traffic confidentiality hides the addressing information from an observer,
preventing an inference attack based on the existence of traffic between two parties.

14.2.2. End-to-End Encryption

End-to-end encryption is generally performed by the end user within an organization. The data are
encrypted at the start of the communications channel or before and remain encrypted until
decrypted at the remote end. Although data remain encrypted when passed through a network,
routing information remains visible. An example of end-to-end encryption would be a virtual private
network (VPN) connection.
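
A minimal illustration of end-to-end protection for data in transit is the TLS connection sketched below, using Python's standard ssl and socket modules; the host example.com is a placeholder, not a required endpoint. The payload is encrypted from the client to the remote end, while routing information remains visible to the network.

    # Minimal sketch: end-to-end encryption of data in transit using TLS.
    # "example.com" is a placeholder endpoint.
    import socket
    import ssl

    context = ssl.create_default_context()        # verifies the server certificate by default
    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            response = tls.recv(4096)             # payload is protected in transit; IP headers remain visible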

14.3. Data in Transit – Description of Risk

The risks associated with data in motion are the same as those associated with data at rest. These include unauthorized disclosure, modification, and unavailability. Malicious actors may intercept or monitor plaintext data transmitted across a network and gain unauthorized access that jeopardizes the confidentiality, integrity, and availability of the data.

Data in Transit will be discussed further in the PCI DSS and ISO 27001 courses taught by Al Nafi, along with its use cases and real-life implementations.

15.Media Handling
15.1. Media

Media storing sensitive information requires physical and logical controls. Media lacks the means for digital accountability when the data is not encrypted. For this reason, extensive precautions must be taken when handling sensitive media. Logical and physical controls, such as marking, handling, storing, and declassification, provide methods for the secure handling of media containing sensitive information.

15.2. Marking

Organizations should have policies in place regarding the marking and labeling of media based on its
classification. For example:

Storage media should have a physical label identifying the sensitivity of the information contained.
The label should clearly indicate if the media is encrypted. The label may also contain information
regarding a point of contact and a retention period. When media is found or discovered without a
label, it should be immediately labeled at the highest level of sensitivity until the appropriate analysis
reveals otherwise.

The need for media marking is typically strongest in organizations where sensitive intellectual property and confidential data must be stored and shared among multiple people. If the security architect can design centrally managed and controlled enterprise content management (ECM) systems paired with Data Loss (Leakage) Protection (DLP) technology, then the entire threat vector that media marking is designed to address may be handled in an entirely different way.

15.3. Handling

Only designated personnel should have access to sensitive media. Policies and procedures describing
the proper handling of sensitive media should be promulgated. Individuals responsible for managing
sensitive media should be trained on the policies and procedures regarding the proper handling and
marking of sensitive media. Never assume that all members of the organization are fully aware of or
understand security policies. It is also important that logs and other records be used to track the
activities of individuals handling backup media. Manual processes, such as access logs, are necessary
to compensate for the lack of automated controls regarding access to sensitive media.


15.4. Storing

Sensitive media should not be left lying about where a passerby could access it. Whenever possible,
backup media should be encrypted and stored in a security container, such as a safe or strong box
with limited access. Storing encrypted backup media at an off-site location should be considered for
disaster recovery purposes. Sensitive backup media stored at the same site as the system should be
kept in a fire-resistant box whenever possible.

In every case, the number of individuals with access to media should be strictly limited, and the
separation of duties and job rotation concepts should be implemented where it is cost effective to do
so.

15.5. Destruction

Media that is no longer needed or is defective should be destroyed rather than simply disposed of. A
record of the destruction should be used that corresponds to any logs used for handling media.
Implement object reuse controls for any media in question when the sensitivity is unknown rather
than simply recycling it.

15.6. Record Retention

Information and data should be kept only as long as it is required. Organizations may have to keep
certain records for a period as specified by industry standards or in accordance with laws and
regulations. Hard- and soft-copy records should not be kept beyond their required or useful life.
Security practitioners should ensure that accurate records are maintained by the organization
regarding the location and types of records stored. A periodic review of retained records is necessary
to reduce the volume of information stored and ensure that only relevant information is preserved.

Record retention policies are used to indicate how long an organization must maintain information
and assets. Ensure the following:

 The organization understands the retention requirements for different types of data
throughout the organization.
 The organization documents in a record’s schedule the retention requirements for each type
of information.
 The systems, processes, and individuals of the organization retain information in accordance
with the schedule but not longer.

A common mistake in records retention is finding the longest retention period and applying it
without analysis to all types of information in an organization. This not only wastes storage but also
adds considerable “noise” when searching or processing information in search of relevant records.
Records and information no longer mandated to be retained should be destroyed in accordance with
the policies of the enterprise and any appropriate legal requirements that may need to be taken into
account.
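
The sketch below shows one way a record's age could be compared against a documented retention schedule; the retention periods, record types, and directory layout are hypothetical examples, not recommended values.

    # Minimal sketch: flag records that have exceeded their documented retention period.
    # The schedule, record types, and folder layout are hypothetical examples.
    import time
    from pathlib import Path

    RETENTION_DAYS = {"invoices": 7 * 365, "access_logs": 365, "hr_records": 10 * 365}

    def records_past_retention(base_dir):
        now = time.time()
        for record_type, days in RETENTION_DAYS.items():
            for path in Path(base_dir, record_type).glob("*"):
                if not path.is_file():
                    continue
                age_days = (now - path.stat().st_mtime) / 86400
                if age_days > days:
                    yield record_type, path      # candidate for documented, defensible destruction

    for rtype, path in records_past_retention("/data/records"):
        print(rtype, path, "exceeds the retention schedule")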

16.Data Remanence
Data remanence is defined as the residual data remaining on some sort of object after the data has been deleted or erased. The problem with data remanence is that some physical characteristics of the data may remain on the media even after we have tried to securely erase it.


Depending on the value of the data, it may be very important to securely erase the data so that there are no residual characteristics remaining that may allow anyone to recover the information. On a typical hard disk drive (HDD), the data is represented on the drive by using magnetic technology. In other words, the zeroes and the ones are represented by using magnetic technology. This technology can be used to re-record new data onto the drive, as we can alter the magnetic field to overwrite and erase any data that may have been written to the media previously.

Solid-state drive (SSD) technology, which is newer, does not use magnetic fields to represent the information; instead, it uses flash memory to store data. Flash technology uses electrons that change the electronic “charge” in a “flash” to represent the information, which is why it is called “flash” technology. Flash memory, such as that used in SSDs, retains data without power and has no moving parts that must be used to access stored data.

Data remaining on media that use magnetic technologies, such as HDDs, become an issue if the value
of the data that was stored on that media is high. Since there may be methods to recover the original
data, sanitizing the information must be done effectively by using secure methods. Secure methods
to address data remanence (data remaining on the media after erasure) can be summarized by three
options. These options are clearing, purging, and destruction.

16.1. Clearing

Clearing is defined as the removal of sensitive data from storage devices, using methods that provide
some assurance that the data may not be reconstructed using most known data recovery techniques.
The original data may still be recoverable but typically not without special recovery techniques and
skills.

16.2. Purging

Purging, sometimes referred to as sanitizing, is the removal of sensitive data from media with the
intent that the sensitive data cannot be reconstructed by any known technique.

16.3. Destruction

This is exactly as it sounds. The media is made unusable by using some sort of destruction method.
This could include shredding, or melting the media into liquid by using very high temperatures. We
must note, however, that the effectiveness of destroying the media varies. For example, simply
drilling a hole through a hard drive may allow most of the data to still be recovered, whereas, melting
the hard drive into liquid would not. The destruction method should be driven by the value of the
sensitive data that is residing on the media. To summarize, destruction using appropriate techniques
is the most secure method of preventing retrieval. Destruction of the media is the best method as it
destroys the media and also the data that is on it. However, the destruction method must be a very
good one to prevent the recovery of the data. If we ensure that the data cannot be reconstructed,
we refer to that as defensible destruction of the data. In other words, we ensure that the data is not
recoverable.


16.4. Data Destruction Methods

As we have discussed, the three options available to address data remanence are clearing, purging,
and destruction. Destruction is thought of as being the best option, as long as the destruction
method is a good one. The following methods may fit into the three categories as described above:

Overwriting: One common method used to address data remanence is to overwrite the storage
media with new data. We can overwrite with zeroes or ones. This is sometimes called wiping. The
simplest overwrite technique is to write zeroes over the existing data, and depending on the
sensitivity of the data, this might need to be done several times.
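
As a rough illustration of overwriting, the following Python sketch performs a single-pass zero wipe of a file; the file name is a placeholder, and, as discussed later, overwriting is not a reliable technique for SSDs.

    # Minimal sketch: single-pass overwrite of a file with zeroes before deletion.
    # Suitable only for magnetic media; SSD wear-levelling defeats this approach.
    import os

    def zero_wipe(path, passes=1):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):              # more passes may be chosen for highly sensitive data
                f.seek(0)
                f.write(b"\x00" * size)
                f.flush()
                os.fsync(f.fileno())             # push the overwrite out to the device
        os.remove(path)

    zero_wipe("old_customer_export.csv")         # hypothetical file name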

Degaussing: During the mainframe days, a technology called degaussing was created. This technique
uses a degausser that basically erases the information on the magnetic media by applying a varying
magnetic field to the media to erase the information that was stored using magnetic technology. The
media is basically saturated with a magnetic field that erases all of the information. Since this uses a
magnetic field to saturate the media, it can be useful for any technology that uses magnetic
technology to represent the data, including mainframe tapes and also HDDs. While many types of
older magnetic storage media, such as tapes, can be safely degaussed, degaussing usually renders
the magnetic media of modern HDDs completely unusable, which may be ultimately desirable to
address remanence properly.

Encryption: Encrypting data before it is stored on the media can address data remanence very
effectively. But this is only true if the encryption key used to encrypt the information is then
destroyed securely. This would make it very difficult, if not impossible, for an untrusted party to
recover any data from the media. The industry refers to this process as crypto-erase or in some
cases, crypto-shredding. This method of addressing data remanence may be very useful in cloud
environments.
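
Crypto-erase can be sketched as follows, again assuming the third-party cryptography package: if data is only ever written to media in encrypted form, securely destroying the key is effectively equivalent to destroying the data.

    # Minimal sketch of crypto-erase: destroy the key and the stored ciphertext becomes unrecoverable.
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"sensitive record")     # only ciphertext is ever written to media

    key = None                                                # "destroy" the key (in practice: zeroize it in the key store)

    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)     # any other key fails to decrypt
    except InvalidToken:
        print("ciphertext is unrecoverable without the original key")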

16.5. Media Destruction – Defensible Destruction

As we have discussed, destruction of the media and the data on it is the most desirable way to
address data remanence. But this is only effective based on the method used for destruction.
Defensible destruction implies that the method used will not allow the reconstruction and recovery
of that data contained on the media device itself through any known means. The following may be
examples of effective defensible destruction methods:
 Physically breaking the media apart, such as hard drive shredding, etc.
 Chemically altering the media into a non-readable state by possibly using corrosive
chemicals.
 Phase transition, which means using temperature and pressure to change the state of
something into something else.
 For media using magnetic technology, raising its temperature above the Curie Temperature,
which is at the point where devices lose their magnetic properties.

16.6. Solid-State Drives (SSDs)

Solid-State Drives (SSDs) use flash memory for data storage and retrieval. Flash memory differs from
magnetic memory in one key way: flash memory cannot be overwritten. When existing data on an
HDD is changed, the drive overwrites the old data with the new data. This makes overwriting an effective way of erasing data on an HDD. However, when changes are made to existing data on an
SSD, the drive writes that data, along with the new changes, to a different location rather than
overwriting the same section. The flash translation layer then updates the map so that the system
finds the new, updated data rather than the old data. Because of this, an SSD can contain multiple
iterations of the same data, even if those iterations are not accessible by conventional means. This is
what causes data remanence on SSDs.

16.6.1. Solid-State Drive (SSD) Data Destruction

SSDs have a unique set of challenges that require a specialized set of data destruction techniques.
Unlike HDDs, overwriting is not effective for SSDs. Because the flash translation layer controls how
the system is able to access the data, it can effectively “hide” data from data destruction software,
leaving iterations of the data un-erased on different sections of the drive. Instead, SSD
manufacturers include built-in sanitization commands that are designed to internally erase the data
on the drive. The benefit of this is that the flash translation layer does not interfere with the erasure
process. However, if these commands were improperly implemented by the manufacturer, this
erasure technique will not be effective.

Another technique, called cryptographic erasure or crypto-erase, takes advantage of the SSD’s built-
in data encryption. Most SSDs encrypt data by default. By erasing the encryption key, the data will
then be unreadable. However, this approach relies again on being able to effectively erase data
despite interference by the flash translation layer. If the flash translation layer masks the presence of
any data pertaining to the encryption, the “encrypted” drive may still be readable.
Due to the unique complexities of SSDs, the best data destruction method is, in fact, a combination of techniques such as crypto-erase, sanitization, and overwrite. SSDs require careful data destruction techniques to effectively prevent data remanence. The use of cloud-based storage today also presents a data remanence challenge for organizations moving to the cloud. As more and more data is moved to the cloud, the ability to address data security issues in general can become much more difficult for the enterprise.

16.7. Cloud-Based Data Remanence

Among the many challenges that face the security practitioner in this area is the ability to
authoritatively certify that data has been successfully destroyed upon decommissioning of cloud-
based storage systems. Due to the fact that a third party owns and operates the system and the
enterprise is effectively renting storage space, there is little to no visibility into the management and
security of the data in many cases. While the challenge is a big one for the enterprise, the use of
Platform as a Service-based (PaaS) architectures can actually provide a solution for the issues raised
by data remanence in the cloud. The security practitioner and the cloud vendor have to be willing to
work together to architect a PaaS solution that addresses the daunting issues of media and
application-level encryption via a platform offering. There are many parts that have to be properly
set up and synchronized for this solution to work, such as messaging, data transactions, data storage
and caching, and framework APIs. In addition, the platform has to be set up in such a way, with
appropriate safeguards available, to ensure that no unencrypted data is ever written to physical
media at any time during the data lifecycle, including data in transit.

17.Security Architecture and Engineering

The goal of the Security Architecture and Engineering domain is to provide you with
concepts, principles, structures, and standards used to design, implement, monitor,
and secure operating systems, equipment, networks, applications, and those
controls used to enforce various levels of confidentiality, integrity, and availability.

Older sources such as the System Security Engineering Capability Maturity Model (SSE-CMM) provided systems-security-specific processes that did not directly map to systems engineering processes. While valuable resources, earlier system security engineering models were difficult to relate to standard engineering and software design processes, which limited their adoption in many industries.

The current direction with major standards has been to converge systems security
engineering as a specialty engineering discipline under traditional systems
engineering processes. This allows for closer alignment between traditional
engineering and security engineering. Both the International Council on Systems
Engineering (INCOSE) and the National Institute of Standards and Technology (NIST)
recognize Systems Security Engineering as a specialty engineering discipline of
systems engineering. All systems engineering processes are applicable to systems
security engineering and are applied with a systems security perspective.
Commonly accepted sources for engineering and security engineering include the
following:

INCOSE Systems Engineering Handbook
INCOSE is a not-for-profit membership organization founded to develop and
disseminate the interdisciplinary principles and practices that enable the realization
of successful systems.

NIST SP800-160 System Security Engineering


This publication addresses the engineering-driven actions necessary to develop
more defensible and survivable systems—including the components that compose
and the services that depend on those systems. It starts with and builds upon a set
of well-established International Standards for systems and software engineering
published by the International Organization for Standardization (ISO), the
International Electrotechnical Commission (IEC), and the Institute of Electrical and
Electronics Engineers (IEEE) and infuses systems security engineering techniques,
methods, and practices into those systems and software
engineering activities.

ISO/IEC 15026 Series-Systems and Software Engineering


A series of standards focused on Systems and Software Engineering.

ISO/IEC/IEEE 15288 Systems and Software Engineering
A systems engineering standard defining processes.

The following processes are defined in the NIST SP800-160 dated November 2016.
The processes and process definitions are consistent with the INCOSE Systems
Engineering Handbook and easily related to ISO-based standards with some minor
differences.

Business and mission analysis process:
Helps the engineering team to understand the scope, basis, and drivers of the
business or mission problems or opportunities and ascertain the asset loss
consequences that present security and protection issues associated with those
problems or opportunities.

Stakeholder needs and requirements definition process:
Defines the stakeholder security requirements that include the protection
capability, security characteristics, and security-driven constraints for the system to
securely provide the capabilities needed by users and other stakeholders.

System requirements definition process:
Transforms the stakeholder security requirements into the system requirements
that reflect a technical security view of the system.

Architecture definition process:
Generates a set of representative security views of the system architecture
alternatives to inform the selection of one or more alternatives.

Design definition process:
Provides security-related data and information about the system and its elements
to enable implementation consistent with security architectural entities
and constraints as defined in the models and views of the system architecture.

System analysis process:
Provides a security view to system analyses and contributes specific system security
analyses to provide essential data and information for the technical
understanding of the security aspects of decision-making.

Implementation process:
Realizes (implements, builds) the security aspects of all system elements.

Integration process:
Addresses the security aspects in the assembly of a set of system elements such
that the realized system achieves the protection capability in a trustworthy manner
as specified by the system security requirements and
in accordance with the system architecture and system design.

Verification process:
Produces evidence sufficient to demonstrate that the system satisfies its security
requirements and security characteristics with the level of assurance that
applies to the system.

Validation process:
Provides evidence sufficient to demonstrate that the system, while in use, fulfills its
business or mission objectives while being able to provide adequate protection of
stakeholder and business or mission assets; minimize or contain asset loss and
associated consequences; and achieve its intended use in its intended operational
environment with the desired level of trustworthiness.

Transition process:
Establishes a capability to preserve the system security characteristics during all
aspects of an orderly and planned transition of the system into operational status.

Operation process:
Establishes the requirements and constraints to enable the secure operation of the
system in a manner consistent with its intended uses, in its intended operational
environment, and for all system modes of operation.

Maintenance process:
Establishes the requirements and constraints to enable maintenance elements to
sustain delivery of the specified system security services and provides engineering
support to maintenance elements.

Disposal process:
Provides for the security aspects of ending the existence of a system element or
system for a specified intended use. It accounts for the methods and techniques used
to securely handle, transport, package, store, or destroy retired elements to include
the data and information associated with the system or contained in system
elements.

Security models define rules of behavior for an information system to enforce policies related to system security, typically involving the confidentiality and/or integrity policies of the system. Models define allowable behavior for one or more aspects of system operation. When implemented in a system, technology enforces the rules of behavior to ensure security goals (e.g., confidentiality, integrity) are met.

A key principle of Systems Security Engineering and a differentiator from traditional
Systems Engineering is that Systems Security Engineering is focused on supporting
the confidentiality, integrity, and availability (CIA) needs of the system and not on
the system functional requirements. This is known as the CIA triad and is a prime
governing factor for all system security engineering activities.

The Bell–LaPadula (BLP) model is intended to address confidentiality in a multilevel
security (MLS) system. It defines two primary security constructs, subjects and
objects. Subjects are the active parties, while objects are the passive parties. To
help determine what subjects will be allowed to do, they are assigned clearances
that outline what modes of access (e.g., read, write) they will be allowed to use
when
they interact with objects.

The model system uses labels to keep track of clearances and classifications and implements a set of rules to limit interactions between different types of subjects and objects. It was an early security model and does not provide a mechanism for a one-to-one mapping of individual subjects and objects; this needs to be addressed by other models or features within a practical operating system.

The model defines two properties, the ss-property and the *-property.

Simple Security property: A subject cannot read/access an object of a higher classification (no read up).

Star property: A subject can only save an object at the same or higher classification (no write down).

The model does not attempt to define technical constructs or solutions. It merely
identifies a high level set of rules that if implemented correctly, prevent the exposure
or unauthorized disclosure of information in a system processing different
classification levels of data.
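
As a toy illustration only (the clearance levels and functions below are invented for the example and are not part of the formal model), the two Bell–LaPadula properties can be expressed as simple comparisons of levels:

    # Toy sketch of the Bell-LaPadula rules: no read up, no write down.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def can_read(subject_clearance, object_classification):
        # Simple Security (ss) property: no read up
        return LEVELS[subject_clearance] >= LEVELS[object_classification]

    def can_write(subject_clearance, object_classification):
        # Star (*) property: no write down
        return LEVELS[subject_clearance] <= LEVELS[object_classification]

    print(can_read("SECRET", "TOP SECRET"))      # False - reading up is denied
    print(can_write("SECRET", "CONFIDENTIAL"))   # False - writing down is denied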

In computer security, lattice-based access control (LBAC) is a complex access control model
based on the interaction between any combination of objects (such as resources,
computers, and applications) and subjects (such as individuals, groups or organizations).

In this type of label-based mandatory access control model, a lattice is used to define the
levels of security that an object may have and that a subject may have access to. The
subject is only allowed to access an object if the security level of the subject is greater than
or equal to that of the object.

Mathematically, the security level access may also be expressed in terms of the lattice (a partially ordered set) where each object and subject have a greatest lower bound (meet) and
least upper bound (join) of access rights. For example, if two subjects A and B need access
to an object, the security level is defined as the meet of the levels of A and B. In another
example, if two objects X and Y are combined, they form another object Z, which is
assigned the security level formed by the join of the levels of X and Y. LBAC is also known
as a label-based access control (or rule-based access control) restriction as opposed to role-
based access control (RBAC).
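
The meet and join operations can be sketched with a simple level-plus-categories lattice; the level names and categories below are illustrative assumptions, not defined by LBAC itself:

    # Toy sketch: meet (greatest lower bound) and join (least upper bound) on a
    # lattice of (level, categories) labels. Level ordering and categories are examples.
    LEVELS = {"PUBLIC": 0, "INTERNAL": 1, "SECRET": 2}

    def meet(a, b):
        # greatest lower bound: the lower level, intersection of categories
        return (min(a[0], b[0], key=LEVELS.get), a[1] & b[1])

    def join(a, b):
        # least upper bound: the higher level, union of categories
        return (max(a[0], b[0], key=LEVELS.get), a[1] | b[1])

    subject_a = ("SECRET", {"FINANCE"})
    subject_b = ("INTERNAL", {"FINANCE", "HR"})
    print(meet(subject_a, subject_b))   # ('INTERNAL', {'FINANCE'}) - level at which both may access an object
    print(join(subject_a, subject_b))   # ('SECRET', {'FINANCE', 'HR'}) - level assigned to a combined object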

The Biba model is designed to address data integrity and does not address data
confidentiality. Like Bell–LaPadula, Biba is also a lattice-based model with multiple
levels. It defines similar but slightly different modes of access (e.g., observe,
modify) and also describes interactions between subjects and objects. Where Biba
differs most obviously is that it is an integrity model; it focuses on
ensuring that the integrity of information is being maintained by preventing
corruption.

At the core of the model is a multilevel approach to integrity designed to prevent unauthorized subjects from modifying objects. Access is controlled to ensure that
objects maintain their current state of integrity as subjects interact with them.
Instead of the confidentiality levels used by Bell–LaPadula, Biba assigns integrity
levels to subjects and objects depending on how trustworthy they are considered to
be.

Like Bell–LaPadula, Biba considers the same modes of access but with different
results. The model defines three properties, the ss-property and the *-property
as in BLP, but also includes a new property, the invocation property.

Simple Integrity property: A subject cannot observe an object of lower integrity (no
read down)
Star property: A subject cannot modify an object of higher integrity (no write up)
Invocation property: A subject cannot send logical service requests to an object of
higher integrity
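
A comparable toy sketch of the Biba properties is shown below (the integrity levels are invented for the example); note that the directions are the inverse of Bell–LaPadula:

    # Toy sketch of the Biba rules: no read down, no write up, no invoke up.
    INTEGRITY = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

    def can_observe(subject, obj):
        # Simple Integrity property: no read down
        return INTEGRITY[subject] <= INTEGRITY[obj]

    def can_modify(subject, obj):
        # Star (*) property: no write up
        return INTEGRITY[subject] >= INTEGRITY[obj]

    def can_invoke(subject, obj):
        # Invocation property: no service requests to higher integrity
        return INTEGRITY[subject] >= INTEGRITY[obj]

    print(can_observe("HIGH", "LOW"))   # False - reading down is blocked
    print(can_modify("LOW", "HIGH"))    # False - writing up is blocked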

Brewer and Nash
This model focuses on preventing conflict of interest when a given subject
has access to objects with sensitive information associated with two competing
parties. The principle is that users should not access the confidential information of
both a client organization and one or more of its competitors. At the beginning,
subjects may access either set of objects. Once, however, a subject accesses an
object associated with one competitor, they are instantly prevented from accessing
any objects on the opposite side. This is intended to prevent the subject from
sharing information inappropriately between the two competitors even
unintentionally. It is called the Chinese Wall Model because, like the Great Wall of
China, once on one side of the wall, a person cannot get to the other side. It is an
unusual model in comparison with many of the others
because the access control rules change based on subject behavior.
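
The behavior-dependent nature of Brewer and Nash can be sketched as follows; the conflict-of-interest classes and company names are invented for the illustration:

    # Toy sketch of the Brewer-Nash (Chinese Wall) rule: after a subject touches one
    # competitor's data, other companies in the same conflict class become off-limits.
    CONFLICT_CLASS = {"BankA": "banking", "BankB": "banking", "OilCo": "energy"}
    history = {}                                  # subject -> set of companies already accessed

    def can_access(subject, company):
        accessed = history.setdefault(subject, set())
        for prior in accessed:
            if prior != company and CONFLICT_CLASS[prior] == CONFLICT_CLASS[company]:
                return False                      # conflict of interest - access denied
        accessed.add(company)
        return True

    print(can_access("alice", "BankA"))   # True  - first access
    print(can_access("alice", "BankB"))   # False - competitor in the same conflict class
    print(can_access("alice", "OilCo"))   # True  - different conflict class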

Clark–Wilson
Biba only addresses one of three key integrity goals. The Clark–Wilson model improves on Biba by focusing on integrity at the transaction level and addressing three major goals of integrity in a commercial environment: preventing unauthorized subjects from making changes, preventing authorized subjects from making improper changes, and maintaining internal and external consistency.
To address the second goal of integrity, Clark and Wilson realized that they
needed a way to prevent authorized subjects from making undesirable changes.
This required that transactions by authorized subjects be evaluated by another
party before they were committed on the model system. This provided separation
of duties where the powers of the authorized subject were limited by another
subject given the power to evaluate and complete the transaction. To address
internal consistency (or consistency within the model system itself), Clark and
Wilson recommended a strict definition of well-formed transactions.
In other words, the set of steps within any transaction would need to be carefully
designed and enforced. Any deviation from that expected path would result in a
failure of the transaction to ensure that the model system’s integrity was not
compromised. To control all subject and object interactions, Clark–Wilson
establishes a system of subject–program–object bindings such that the subject no
longer has direct access to the object. Instead, this is done through a program with access to the object. This program arbitrates all access and
ensures that every interaction between subject and object follows a defined set of
rules. The program provides for subject authentication and identification and limits
all access to objects under its control.

Graham–Denning
Graham–Denning is primarily concerned with how subjects and objects are created,
how subjects are assigned rights or privileges, and how ownership of objects is
managed. In other words, it is primarily concerned with how a model system
controls subjects and objects at a very basic level where other models simply
assumed such control.
The Graham–Denning access control model has three parts: a set of objects, a set of
subjects, and a set of rights. The subjects are composed of two things: a process
and a domain. The domain is the set of constraints controlling how subjects may
access objects. Subjects may also be objects at specific times. The set of rights
govern how subjects may manipulate the passive objects.
This model describes eight primitive protection rights called commands that
subjects can execute to have an effect on other subjects or objects.

The eight basic rules under Graham–Denning govern the following:


1. Secure object creation
2. Secure object deletion
3. Secure subject creation
4. Secure subject deletion
5. Secure provisioning of read access right
6. Secure provisioning of grant access right
7. Secure provisioning of delete access right
8. Secure provisioning of transfer access right

Harrison, Ruzzo, Ullman (HRU)
This model is very similar to the Graham–Denning model, and it is composed of a
set of generic rights and a finite set of commands. It is also concerned with
situations in which a subject should be restricted from gaining particular privileges.
To do so, subjects are prevented from accessing programs, or subroutines, that can
execute a particular command (to grant read access for example) where necessary.

Security controls are safeguards or countermeasures that mitigate risks to
confidentiality, integrity, or availability in a system or operating environment.
Controls may impact or modify the behavior of people, process, or technology. They
may be directly applied or inherited from another system or organization.

These frameworks will be covered in detail in later courses taught within Cyber, Offensive,
DFIRM, IT Audit, SCADA and other tracks inshAllah.

Control frameworks and standards are intended to be tailored to specific use-cases.
By nature, the control frameworks are general cases that are intended to be widely
applied. For that reason, they may lack specifics on implementation details or
require the control user to input specific values for their organization or
environment (e.g., control says you have to have a screen lock but allows the
adopter to select a lock timeout that makes sense for their use). It is critical to
adjust control specifications or parameters to meet the needs of a specific system
or environment to provide the optimal security value. The tailoring process is well
documented in most control frameworks and fully supported by the frameworks
themselves. Some organizations choose to treat controls and control frameworks as
checklists and forego intelligent tailoring, thus, reducing the overall security value of
the controls.
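
Tailoring can be as simple as overriding baseline parameters with organization-specific values, as in the sketch below; the control names and the chosen values are hypothetical, not taken from any particular framework:

    # Minimal sketch: tailoring baseline control parameters with organization-specific values.
    # Control names and values are hypothetical illustrations.
    baseline = {
        "screen_lock_enabled": True,
        "screen_lock_timeout_minutes": 15,   # the framework leaves the exact value to the adopter
        "password_min_length": 8,
    }

    org_tailoring = {
        "screen_lock_timeout_minutes": 5,    # stricter value chosen from the organization's risk assessment
        "password_min_length": 12,
    }

    effective_controls = {**baseline, **org_tailoring}   # tailored values override the baseline
    print(effective_controls)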

Each control should include specific evaluation methods and expected
results. To be effective as a security control, the control must be valuable
and have one or more measures of effectiveness associated with it.

The NIST framework defines three primary methods of control evaluation:


Test: Conduct a direct test of the control (usually used for technical type controls)
Interview: Interview or question staff (usually used for management or operational
controls)
Examine: Examine documentation or artifacts for evidence that a control is properly employed (used for all control types).

In many cases, a control may (and should) be evaluated using multiple evaluation methods to ensure control effectiveness. For instance, to evaluate a particular control, the assessor may perform a technical test to validate a function, examine documentation to ensure the function was correctly configured, and interview a system administrator regarding operation of the function. Taken together, the results may show that the control is effectively applied or that there is some deficiency that limits the control effectiveness.

Variations of these common capabilities are integrated into most modern
operating systems and hardware platforms. The specific methods and types of
implementation will vary from platform to platform but all typically share some of
the common security value obtained from these capabilities.

System security capabilities generally interact with one or more other security
capabilities or have some level of integration with other security components.
This provides an integrated defense-in-depth model within the system architecture
itself to limit the overall attack surface of the system and harden
it against different forms of attack. However, security capabilities may be disabled
or not fully integrated based on particular vendor products chosen
as system components, or technical implementation by the system manufacturer or
operator. For maximum functionality, integrated system
security capabilities must typically be enabled and properly configured to provide
desired protective capabilities.

The above is a generic representation of an operating system. It does not represent a particular operating system, but it contains elements common to most modern operating systems. This diagram can be used for reference when considering the system security capabilities described in the following sections.

In a modern operating system, there are two primary processor states: a user mode and a kernel mode. The kernel mode is reserved for core operating
system management while the user mode is exposed to user applications and
services. Functions allowed to execute on the hardware are limited in user mode
and managed by processes that exist in kernel mode. This provides a level of
abstraction that restricts actions that can be taken at the user level. There is an
additional layer of separation in many operating systems called the Hardware
Abstraction Layer (HAL) that acts as an interface between some user and kernel
mode operations and the actual system hardware. This allows for standardized
commands directed at hardware to be generalized and
translated to device specific commands but also limits the binary command set that
can be sent directly to hardware components. Device drivers function in a similar
fashion but may allow more direct control over specific hardware devices based on manufacturer specifications. The hardware layer may include specialized security
hardware such as a Trusted Platform Module (TPM).

The system kernel is the core of an OS, and one of its main functions is to provide
access to system resources, which includes the system’s hardware and processes.

The kernel:
• Loads and runs binary programs
• Schedules the task swapping that allows computer systems to do more than one
thing at a time
• Allocates memory
• Tracks the physical location of files on the computer’s hard disks.

The kernel provides these services by acting as an interface between other programs operating under its control and the physical hardware of the computer;
this insulates programs running on the system from the complexities of the
computer.

The memory manager allocates and manages physical and/or virtual memory within a system.

The security reference monitor enforces access control policy and rules over subjects interacting with objects and
performing operations. It is typically intended to be always on and impossible to
bypass for any function. It operates in kernel mode and provides oversight to the
operation of internal OS functions.

The I/O manager manages and controls input and output from the operating system.

The system API provides a generalized or common set of commands for applications or
processes executing on a system to perform standard operations and
communications. It removes the need for applications to directly interface with
some OS components and hardware.

The UI presents control and input methods to system users in an understandable
and controlled fashion. It often includes common user
interaction functions that can be easily implemented by applications or
code executing on the system.

Modern systems include some form of access control. Even kiosk or general user
type systems internally implement a system of permissions and rules for accessing
processes, memory, applications, and operating system functions even if those
controls are transparent to the end user. Access controls are typically enforced by a
kernel level module known as the security monitor or reference monitor.

Access control mechanisms are typically supported by the file system that often
stores security attributes with files and enables fine-grained access control in
storage objects.

From a security perspective, memory and storage are the most important resources
in any computing system. Ideally, it would be possible to easily separate memory
used by subjects (such as running processes and threads) from objects (such as data
in storage). Buffer overflows are a common type of attack that
attempts to write executable code into memory locations where it may be
inadvertently executed.

Modern operating systems utilize a variety of techniques to limit the exposure of the memory space to a potential attacker. Direct access to system memory by user-space programs is limited, and programs may be allocated randomized blocks of memory space to limit the utility of a crafted memory attack running within a program or piece of
code. Additionally, memory space for user programs may be monitored by the
operating system to ensure it is utilizing memory properly and that executable
code is only located in authorized memory blocks. An example is Data Execution
Prevention (DEP) technology in Windows that will close a program or code that is
mismanaging memory or attempting to execute code from unauthorized locations.

Processors and their supporting chipsets provide one of the first layers of defense in
any computing system. In addition to providing specialized processors for security
functions (such as cryptographic coprocessors), processors also have states that can
be used to distinguish between more or less privileged instructions.

Most processors support at least two states:


• A supervisor state
• A problem state

In supervisor state (also known as kernel mode), the processor is operating at the
highest privilege level on the system, and this allows the process running in
supervisor state to access any system resource (data and hardware) and execute
both privileged and non-privileged instructions.

In problem state (also known as user mode), the processor limits the access to
system data and hardware granted to the running process. A malicious process
running in supervisor state has very few restrictions placed upon it and can be used
to cause a lot of damage. Ideally, access to supervisor state is limited only to core OS functions that are abstracted from end-user interaction through other controls,
but this is not always the case.

Process isolation can also be used to prevent individual processes from interacting with each other. This can be done by providing distinct address spaces for each process, preventing other processes from accessing that area of memory, and assigning access permissions to files or other resources to each process.

Naming distinctions are also used to distinguish between different processes. Virtual mapping is also used to assign randomly chosen areas of actual memory to a process to prevent other processes from finding those locations easily. Encapsulation of processes as objects can also be used to isolate them; since an object includes the functions for operating on it, the details of how it is implemented can be hidden. The system can also ensure that shared resources are managed so that processes are not allowed to access shared resources in the same time slots.

Abstraction involves the removal of characteristics from an entity to easily
represent its essential properties. Abstraction negates the need for users to know
the particulars of how an object functions. They only need to be familiar with the
correct syntax for using an object and the nature of the information that will be
presented as a result. Since a separate subject controls the access to the object, the
ability to manipulate the object outside of the defined rules is limited.

The security kernel or “reference monitor” within an operating system or hardware
device, acts as a security oversight mechanism that enforces a predefined set of
rules when a subject accesses an object. The rules may include validating
permissions from a table (e.g., DAC) but are mandatorily applied and designed to
prevent being bypassed.

However, when user subjects are executing with administrative rights on a system
(e.g., Windows Administrator, Linux/Unix root), the subject often has full control of
most system objects. The security kernel will still operate, but it will lose
effectiveness when the subject has full security rights to all objects. To maximize the
effectiveness of the security kernel, user subjects must be
executed with the least privilege necessary to perform their intended function.

Encryption can be applied to data at rest (e.g., files on hard drive) or data in
transit (e.g., communication channel). Encryption may be used to protect
confidentiality, integrity, or both concurrently. The most direct value of encryption is
the protection of data while the operating system protections are not active or
available. For example, encrypted data may be stored on a hard drive. If the
computer system is turned off and the hard drive removed, the data cannot be read
or modified since it is encrypted. Also, once data has been transmitted from the
system, if encrypted, it is protected from access or
modification if intercepted in transit. The specific protections (confidentiality,
integrity) and level of protection provided by encryption varies depending on the
specific cryptographic mechanism utilized.

Code signing and validation is a cryptographic function. Executable code
is digitally signed using mechanisms presented in this module. This allows
an operating system, firmware, or even hardware components to validate
the digital signature on the executable code prior to it being loaded for
execution. This ensures that only known, approved code is able to execute
on a system or device.

In some operating systems, the system checks the OS components before they are loaded. This helps to prevent unauthorized code replacing
legitimate system components and being executed at a higher privilege
level than would normally be granted to user code.

Code signing may also be used during system or component updates or when loading new software to ensure that the copy being loaded is an approved copy from a recognized source. This protects the system from loading malicious or unapproved code presented as legitimate code.
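
A hedged sketch of signature validation is shown below, assuming the third-party cryptography package and an RSA signing key; the file and key names are placeholders, and a real code-signing implementation would also validate the certificate chain back to a trusted root:

    # Minimal sketch: verify a detached RSA signature over an executable before loading it.
    # Assumes the "cryptography" package; file and key names are placeholders.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    with open("vendor_pubkey.pem", "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open("update.bin", "rb") as f:
        code = f.read()
    with open("update.bin.sig", "rb") as f:
        signature = f.read()

    try:
        public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
        print("signature valid - code may be loaded")
    except InvalidSignature:
        print("signature invalid - refuse to execute")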

Secure systems must also have the ability to provide administrators with evidence
of their correct operation. This is performed using logging subsystems that allow
for important system, security, and application messages to be recorded for
analysis. More secure systems will provide considerable protection to ensure these
logs cannot be tampered with, including secure export of such logs to external
systems.
As part of an organizational security architecture, logs and monitoring
data must be collected from individual systems and reviewed by automated or
manual means. This is typically done centrally where data from multiple systems
can be used to build an overall protection picture of the entire information
environment. Logs that are not reviewed or managed, either by automated or manual means, provide only some value for correcting issues after they have occurred. By actively monitoring logs and information systems, the audit data can provide preventative and detective control value as well.
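
As a small illustration, the sketch below forwards security-relevant events to a central collector using Python's standard logging module; loghost.example.com is a placeholder for the organization's log collector:

    # Minimal sketch: forward security events to a central syslog collector for aggregation and review.
    # "loghost.example.com" is a placeholder for the organization's log collector.
    import logging
    import logging.handlers

    logger = logging.getLogger("security")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.handlers.SysLogHandler(address=("loghost.example.com", 514)))

    logger.info("login_success user=alice src=10.0.0.5")
    logger.warning("login_failure user=alice src=203.0.113.9 attempts=5")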

Virtualization offers numerous advantages from a security perspective. Virtual
machines are typically isolated in a sandbox environment and if infected can be
removed quickly or shut down and replaced by another virtual machine. The
sandbox environment is intentionally designed to keep executing code
within the controlled sandbox space and limit communications into or out of the
sandbox.
Virtual machines:
• Have limited access to hardware resources and, therefore, help protect the host system and other virtual machines
• Do require strong configuration management control and versioning to ensure known good copies are available for restoration if needed
• Are also subject to all the typical requirements of hardware based systems, including anti-malware software, encryption, host intrusion detection system (HIDS), firewalls, and patching.

Some operating systems automatically sandbox certain types of code, or can be configured to do so. Mobile code (e.g., Java, ActiveX, etc.) may be allowed only to execute in a controlled sandbox where the system configuration controls how much or how little access to the rest of the system is possible for code executing within the sandbox.

Hardware components may be used to provide security services to the system. A common example is the Trusted Platform Module (TPM), which is provided by default or available as an option from most major device manufacturers. The TPM is a hardware module that includes a secure storage container and a cryptographic processor with some cryptographic functions. It is typically used to securely generate and store cryptographic keys or provide secure storage of small data sets.
The most common use for a TPM is to generate and store cryptographic keys
associated with file system or drive encryption mechanisms. Since the keys
are stored within the dedicated hardware module, they are extremely difficult to
extract when the system is powered down. They are only exposed at
certain points during the boot process that are difficult to monitor prior to the OS
being functional and taking over the role of protecting the keys. Other hardware
security modules exist for specialty functions and may be added to systems or used
as peripheral devices for special security functions.

Modern file systems store security attributes, or permissions, associated with files
as an integral part of the file system. This enables advanced security models to be
employed in practical systems and ensures easy association of security attributes
with individual files. Some file systems include journaling that protects file integrity
by ensuring that incomplete disk operations are identified and completed.

The following are examples of host protection software that may be installed at the
system level to provide additional protections beyond those built into the OS and
system architecture. Some may be available as OS components but must typically
be enabled and configured for full function. In other cases, third-party software
suites may be used to provide these functions.

Antivirus: Protects against viruses and malicious code by checking files against a list
of known malware. Many products also include a heuristics function that allows
them to identify malware that is not in their database based on software behavior.

Host-based intrusion prevention system (HIPS): HIPS provides monitoring of system communications and performs a similar function to a network-based intrusion prevention system (NIPS) within a specific host.

Host firewall: Blocks inbound or outbound communications from the host based on
a defined rule set. Some host firewalls allow applications to dynamically configure
the firewall to allow on-demand communications when necessary.

File integrity monitoring (FIM): Creates a known baseline of all files on a system,
typically using a cryptographic hashing mechanism to create unique signatures for
each file. It can then compare files against the known baseline periodically or when
the files are loaded into memory for use.
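
File integrity monitoring can be sketched with standard-library hashing, as below; the monitored directory is a placeholder, and a real FIM tool would also protect the baseline itself from tampering:

    # Minimal sketch of file integrity monitoring: compare current file hashes against a stored baseline.
    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def build_baseline(directory, baseline_file="fim_baseline.json"):
        baseline = {str(p): sha256_of(p) for p in Path(directory).rglob("*") if p.is_file()}
        Path(baseline_file).write_text(json.dumps(baseline))

    def report_changes(baseline_file="fim_baseline.json"):
        baseline = json.loads(Path(baseline_file).read_text())
        for path, known_hash in baseline.items():
            p = Path(path)
            if not p.exists() or sha256_of(p) != known_hash:
                yield path                        # deleted or modified since the baseline was taken

    build_baseline("/opt/app/config")             # placeholder directory
    for changed in report_changes():
        print("integrity change detected:", changed)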

Configuration and policy monitor: A configuration or policy monitor provides oversight to ensure defined system configurations or policies are correctly configured
and not improperly modified. It may also report system status or compliance to an
enterprise tool.

This topic introduces some common vulnerabilities and mitigation approaches that
are common among most system types. It then presents typical vulnerabilities and
mitigation approaches for various system types. The vulnerabilities and mitigations
are not intended to be comprehensive for each system type and represent the most
common issues and solutions associated with the system type. For each system
type, consider which common vulnerabilities might exist in the various system
components in addition to the system-specific vulnerabilities. In particular, consider how common vulnerabilities might exist in the following:
• System hardware
• System code
• System misuse opportunities
• System communications

Hacking: Human action attempting various permutations of actions to defeat or
bypass system protections or system security.

Social engineering: Attempting to gain information or access by impacting human
behavior or process. Generally implemented through human interaction but may be
message or communication based.

Malware distribution: Manual or automated distribution of malware. May be
targeted, untargeted, or the result of self-replicating malware moving
autonomously.

Phishing: Attempting to gain information or access by sending messages (e.g.,
email) that seem to be legitimate but are not. May be combined with types of social
engineering or malware distribution.

Know what you have: Maintain a good inventory of all IT operating in the
environment and understand the operational status. While this sounds simple, it is
one of the most difficult things to accomplish for most large organizations.

Patch and manage what you have: Keep hardware, firmware, and software up to
date and manage system configurations to ensure they are kept in a secure and
well-maintained state. This is a basic security function but is also commonly
neglected and not well implemented in many organizations.

Assess/monitor/log: Assess system security status, monitor the status continuously,
and log system, user, and process actions to the greatest extent possible. At the
enterprise level, this includes collecting and aggregating individual system logs with
automated and manual reviews.

Educate users: At the enterprise level, this is critical to address human-based
attacks (social engineering, phishing, etc.) that technology alone cannot defend
against.

The following are common system vulnerability types that exist to some degree in
most systems. For each of the specific system types in this module, these common
vulnerabilities should be considered applicable, although their impact may differ
based on the system type.

Hardware vulnerabilities are most typically associated with loss of availability when
components fail. However, supply chain concerns over inappropriate modification
or counterfeit hardware components are also valid. Improperly configured or
illicitly modified hardware can impact system confidentiality and integrity.

Hardware:
Hardware components may fail at any time
o Mean time between failures (MTBF) is used to estimate expected service life (a
short availability calculation follows this list)
o Failure rates are higher during initial system operation

Supply chain issues may introduce technical flaws/vulnerabilities or malicious
modification

Old hardware may be difficult to repair/replace
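
As a small, hedged illustration of how MTBF feeds into planning, the Python sketch below computes steady-state availability. The MTTR value and the formula Availability = MTBF / (MTBF + MTTR) are a common reliability convention assumed here for illustration; they are not defined elsewhere in these notes.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    # Steady-state availability = MTBF / (MTBF + MTTR)  (assumed convention)
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Example: a component with 100,000 h MTBF and an 8 h mean time to repair
print(f"{availability(100_000, 8):.5%}")   # roughly 99.992%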

Communication vulnerabilities can directly impact confidentiality, integrity, or
availability depending on system functions. Typically, the communication subsystems
of an information system are the most exposed components of the system and the
most susceptible to technical attacks.

Communications:
• Can fail
• Can be blocked (denial of service (DoS))
• Can be intercepted
• Can be counterfeited (replayed)
• Can be modified
• Characteristics can expose information about the sender/receiver (e.g., address,
location, etc.)

Misuse by user:
• Can be intentional or accidental
• Can degrade or bypass security controls
• Increases in likelihood as difficulty to operate increases
• For example, difficult security requirements increase likelihood of intentional
misuse to “get the job done”

Code flaws:
• Exist in all software products with more than trivial complexity
• May be introduced accidentally or intentionally
• Typical risk conditions:
o Known flaws, patch available, systems not patched, exploit available
o Known flaws, patch not available, exploit available
o Unknown flaws, exploit available (zero-day attack possible)

Emanations vulnerabilities are primarily a concern for very high security systems (e.g.,
those used by government or military agencies), where exposure can have a high
impact on the national security of a given country.
• Hardware/physical elements may radiate information
• Radio frequency
• Visible and non-visible spectrum
• Can be used to discern system functions

• Can be used to locate systems/components

Client-based systems are systems in which the end user directly interfaces with the
computing hardware in the form of desktops, laptops, thin client terminals, and so
on. They are typically present in large quantities in most organizations. Most
organizations are continually adding new and decommissioning old client systems.
They are typically general-purpose computers that are used for a variety of
purposes across an organization.

End users in most cases physically control these devices. This allows for end user
modification or removal from enterprise control of the system. They may be more
susceptible to loss or theft for this reason. Since the devices are typically under user
control, monitoring and updating the systems may be difficult as the location and
power status (e.g., on/off) may be indeterminate.
• Physically under user control
• Susceptible to user misuse (intentional or accidental)
• May be lost/stolen
• Monitoring may be difficult
• 100 percent update may be difficult

The following are the basic mitigations to apply to a general-purpose computer.
While they seem basic in nature, they are difficult to do well across a large installed
base of client devices.
• Patch/update*: Continuous action
• General network protections: e.g. Network segmentation, firewall devices,
network intrusion prevention or detection
• Host protections*: Antivirus, host intrusion prevention system (IPS), host firewall,
disk encryption
• Monitor*: Logs, alerts, track location
• Educate users: Anti-phishing campaign, detecting attacks

Server-based systems generally provide a specific purpose and may be specially
configured or have special software loaded to provide a specific function. Typical
types include: application servers, file servers, domain controllers, print servers, and
network service servers (e.g., Domain Name Service). They are often
centrally managed and controlled in most organizations and have limited access or
functionality beyond their specific intended purpose. They are also often
maintained in a controlled, limited access environment.

Database systems are hosted on various platforms, including stand-alone servers,
cloud hosting environments, distributed computing environments, and so on.
Database systems inherit any platform vulnerabilities and add database-specific
vulnerabilities. They typically contain large quantities of valuable information and
require high-speed operation with large numbers of transactions. This tends to make
database systems high-value targets for any attacker.

Inference: Attacker guesses information from observing available information.
Essentially, users may be able to determine unauthorized information from what
information they can access and may never need to directly access unauthorized
data.

Aggregation: Aggregation is combining nonsensitive or lower sensitivity data from
separate sources to create higher sensitivity information. For example, a user takes
two or more publicly available pieces of data and combines them to form a
classified piece of data that then becomes unauthorized for that user. Thus,
the combined data sensitivity can be greater than the sensitivity of the individual parts.

Data mining: Data mining is a process of discovering information in data
warehouses by running queries on the data. A large repository of data is required to
perform data mining. Data mining is used to reveal hidden relationships, patterns,
and trends in the data warehouse. Data mining is based on a series of analytical
techniques taken from the fields of mathematics, statistics, cybernetics, and
genetics. The techniques are used independently and in cooperation with one
another to uncover information from data warehouses.

High-value target: Databases are considered high-value targets; attackers may seek
them out and be willing to spend greater effort to find technical vulnerabilities to
exploit than they would for other system types.

Input validation: User input or query input is carefully validated to ensure only
allowable information is sent from the user interface to the database server. This
limits the utility of Structured Query Language (SQL) injection type attacks and
potentially protects database information
integrity from invalid entries.
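
A minimal sketch of the parameterization side of input validation, using Python's built-in sqlite3 module; the table, column, and sample data are hypothetical. The point is that a bound parameter is treated as data, never as SQL, so a classic injection string matches nothing.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_supplied = "alice' OR '1'='1"   # a typical injection attempt

# Unsafe pattern (not shown executed): building the query by string concatenation
# would let the input change the structure of the SQL statement.

# Safer pattern: the value is bound as a parameter.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_supplied,)).fetchall()
print(rows)   # [] -- the injection attempt is treated as a literal name

Parameterization complements, rather than replaces, allow-list validation at the user interface.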

Robust authentication/access control: Database access is strictly controlled, and the
user interface is limited to preconfigured or controlled interface methods.

Output throttling: To reduce an attacker's ability to siphon off database data one
record at a time, throttling can be employed to limit the number of records
provided over a specific time period. This limits an attacker's ability to perform data
mining and some inference and aggregation attacks.
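
A minimal output-throttling sketch along the lines described above: each caller is limited to a fixed number of records per time window. The window length, record cap, and caller names are assumed values chosen only for illustration.

import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_RECORDS_PER_WINDOW = 100

_usage = defaultdict(lambda: [0.0, 0])   # caller -> [window_start, records_used]

def allow(caller: str, requested: int) -> bool:
    window_start, used = _usage[caller]
    now = time.time()
    if now - window_start >= WINDOW_SECONDS:
        _usage[caller] = [now, 0]        # start a fresh window
        used = 0
    if used + requested > MAX_RECORDS_PER_WINDOW:
        return False                      # deny: request would exceed the cap
    _usage[caller][1] = used + requested
    return True

print(allow("analyst1", 50))   # True
print(allow("analyst1", 60))   # False -- over the 100-record cap for this window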

Anonymization: This approach permanently removes identifying data features from
a database, typically to protect personal information.

Tokenization: Similar to anonymization except that information is replaced with an
identifier that can be used to reconstruct the original data if necessary. The
identifiers (tokens) are then kept in a more secure system or offline. This approach
also allows data to be shared or made available with less risk of inference and
aggregation attacks.
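
A minimal tokenization sketch in the spirit of the description above: the sensitive value is replaced by a random token, and the token-to-value mapping is kept in a separate vault. The vault here is just a dictionary for illustration; in practice it would be a hardened, access-controlled store, and the card number shown is a well-known test value.

import secrets

_vault = {}   # token -> original value; in practice a separate, protected system

def tokenize(value: str) -> str:
    token = secrets.token_hex(16)        # opaque, random identifier
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]                 # only possible with access to the vault

card = "4111-1111-1111-1111"
tok = tokenize(card)
print(tok)                # safe to store or share with less inference/aggregation risk
print(detokenize(tok))    # original value, recoverable only via the vault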

We will cover this in a very detailed manner in stage 2 of the CODS track covering all topics
and hands on exercises as they relate to SCADA.

Industrial systems and critical infrastructures are often monitored and controlled by
simple computers called industrial control systems (ICS). ICSs are based on standard
embedded systems platforms, and they often use commercial off-the-shelf
software. ICSs are used to control industrial processes such as
manufacturing, product handling, production, and distribution. They typically have
components that execute on embedded, limited function hardware. They also
typically contain interfaces between logical (computer) space and the physical
world. These may include sensors, motors, actuators, valves, gauges, and so on.

Following are three well-known types of ICS systems:

Supervisory control and data acquisition (SCADA): A SCADA system can be typically
viewed as an assembly of interconnected equipment used to monitor and control
physical equipment in industrial environments. They are widely used to automate
geographically distributed processes such as electricity power generation,
transmission and distribution, oil and gas refining and pipeline management, water
treatment and distribution, chemical production and processing, rail systems, and
other mass transit.

Distributed control systems (DCSs): Typically confined to a geographic area or
specific plant (e.g., a manufacturing facility). They are characterized by large numbers
of semi-autonomous controllers. They share many similarities with SCADA systems,
but they are typically confined to a defined area with a local control center.

Programmable logic controllers (PLCs): Ruggedized industrial controllers. They
typically use specialized code that reacts in real time to inputs. They may be
stand-alone systems or included as components in SCADA or DCS infrastructure.

Reference https://www.cleo.com/blog/knowledge-base-on-premise-vs-cloud a good article
to read.

Cloud security will be covered in a lot of detail in the advanced CODS to be released in 2021
inshAllah.

In today’s world of enterprise IT, there are many factors that a company must consider in
order to decide whether a cloud infrastructure is the right fit. Conversely, there are many
companies that are unable to make the leap into the cloud, instead relying on their tried-and-
true legacy and on-premise applications and software to do business.
On Premise vs. Cloud
It’s no surprise that cloud computing has grown in popularity as much as it has, as its allure
and promise offer newfound flexibility for enterprises, everything from saving time and
money to improving agility and scalability. On the other hand, on-premise software –
installed on a company’s own servers and behind its firewall – was the only offering for
organizations for a long time and may continue to adequately serve your business needs
(think, “if it ain’t broke then don’t fix it”). Additionally, on-premise applications are reliable,
secure, and allow enterprises to maintain a level of control that the cloud often cannot. But
there's agreement among IT decision-makers that in addition to their on-premise and
legacy systems, they'll need to leverage new cloud and SaaS applications to achieve their
business goals.
On-Premise Software
Whether a company places its applications in the cloud or whether it decides to keep them
on premises, data security will always be paramount. But for those businesses in highly
regulated industries, the decision might already be made for them as to whether to house
their applications on premise. And knowing your data is located within your in-house servers
and IT infrastructure might also provide more peace of mind anyway.
On-premise software requires that an enterprise purchases a license or a copy of the
software to use it. Because the software itself is licensed and the entire instance of software
resides within an organization’s premises, there is generally greater protection than with
a cloud computing infrastructure. So, if a company needs all this extra security, why would
it dip its proverbial toes into the cloud?
The downside of on-premise environments is that the costs associated with managing and
maintaining everything the solution entails can run exponentially higher than a cloud computing
environment. An on-premise setup requires in-house server hardware, software licenses,
integration capabilities, and IT employees on hand to support and manage potential issues
that may arise. This doesn’t even factor in the amount of maintenance that a company is
responsible for when something breaks or doesn’t work.
Cloud Computing
Cloud computing differs from on-premises software in one critical way. A company hosts
everything in-house in an on-premise environment, while in a cloud environment, a third-
party provider hosts all that for you. This allows companies to pay on an as-needed basis and
effectively scale up or down depending on overall usage, user requirements, and the growth
of a company.
A cloud-based server utilizes virtual technology to host a company’s applications offsite.
There are no capital expenses, data can be backed up regularly, and companies only have to
pay for the resources they use. For those organizations that plan aggressive expansion on a
global basis, the cloud has even greater appeal because it allows you to connect with
customers, partners, and other businesses anywhere with minimal effort.
Additionally, cloud computing features nearly instant provisioning because everything is
already configured. Thus, any new software that is integrated into your environment is ready
to use immediately once a company has subscribed. With instant provisioning, any time
spent on installation and configuration is eliminated and users are able to access the
application right away.
Key Differences of On Premise vs. Cloud
As outlined above, there are a number of fundamental differences between an on-premises
and a cloud environment. Which path is the correct one for your enterprise depends entirely
on your needs and what it is you’re looking for in a solution.
Deployment
On Premises: In an on-premises environment, resources are deployed in-house and within
an enterprise’s IT infrastructure. An enterprise is responsible for maintaining the solution and
all its related processes.
Cloud: While there are different forms of cloud computing (such as public cloud, private
cloud, and a hybrid cloud), in a public cloud computing environment, resources are hosted
on the premises of the service provider, but enterprises are able to access those resources
and use as much as they want at any given time.
Cost
On Premises: For enterprises that deploy software on premise, they are responsible for the
ongoing costs of the server hardware, power consumption, and space.
Cloud: Enterprises that elect to use a cloud computing model only need to pay for the
resources that they use, with none of the maintenance and upkeep costs, and the price
adjusts up or down depending on how much is consumed.
Control
On Premises: In an on-premises environment, enterprises retain all their data and are fully in
control of what happens to it, for better or worse. Companies in highly regulated industries
with extra privacy concerns are more likely to hesitate to leap into the cloud before others
because of this reason.
Cloud: In a cloud computing environment, the question of ownership of data is one that
many companies – and vendors, for that matter – have struggled with. Data and encryption
keys reside with your third-party provider, so if the unexpected happens and there is
downtime, you may be unable to access that data.
Security
On Premises: Companies that handle extra-sensitive information, such as those in government
and banking, must have a certain level of security and privacy that an on-premises
environment provides. Despite the promise of the cloud, security is the primary concern for
many industries, so an on-premises environment, despite some of its drawbacks and price
tag, can make more sense.
Cloud: Security concerns remain the number one barrier to a cloud computing deployment.
There have been many publicized cloud breaches, and IT departments around the world are
concerned. From personal information of employees such as login credentials to a loss of
intellectual property, the security threats are real.
Compliance
On Premises: Many companies these days operate under some form of regulatory control,
regardless of the industry. Perhaps the most common one is the Health Insurance Portability
and Accountability Act (HIPAA) for private health information, but there are many others,
including the Family Educational Rights and Privacy Act (FERPA), which protects detailed
student records, and other government and industry regulations. For companies that are
subject to such regulations, it is imperative that they remain compliant and know where
their data is at all times.
Cloud: Enterprises that do choose a cloud computing model must do their due diligence and
ensure that their third-party provider is up to code and in fact compliant with all of the
different regulatory mandates within their industry. Sensitive data must be secured, and
customers, partners, and employees must have their privacy ensured.
Hybrid Cloud Solutions
While the debate of the pros and cons of an on-premises environment pitted against a cloud
computing environment is a real one, and one that many enterprises are having within their
offices right now, there is another model that offers the best of both worlds.
A hybrid cloud solution is one that combines elements of different types of IT
deployment models, ranging from on premises to private cloud and public cloud. A hybrid
cloud infrastructure depends on the availability of a public cloud platform from a trusted
third-party provider, a private cloud constructed either on premises or through a hosted
private cloud provider, and effective WAN connectivity between both of those environments.
Cleo Integration Cloud
Regardless of what kind of environment you are looking for, whether that’s to add
a software-as-a-service (SaaS) solution to address a specific business need, move processes
and data into a cloud integration platform, or whether you are a SaaS organization that
thrives on delivering faster responses to customer requests, you rely on integration to make
your data flows work.
Every successful company needs a scalable infrastructure that can support any-to-any hybrid
integration, data transformation, fast and secure file transfer, and end-to-end visibility of all
the data that flows through their dynamic ecosystems. Cleo Integration Cloud enables enterprises to
accelerate ground-to-cloud and cloud-to-cloud integration processes to easily integrate
applications, and storage and business platforms, to connect all your data, no matter what it
is, and wherever you want it, be it on premises or in the cloud.
Cleo Integration Cloud also features self-service and managed services for business and
technical users alike, and allows them to build, control, and monitor any and all B2B,
application, cloud integration, and data lake ingestion processes.

NIST defines the five essential characteristics of cloud computing
as the following:

On-Demand Self-Service: A consumer can unilaterally provision computing capabilities,
such as server time and network storage, as needed automatically without requiring
human interaction with each service provider.

Broad Network Access: Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms
(e.g., mobile phones, tablets, laptops, and workstations).

Resource Pooling: The provider’s computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand. Examples of
resources include storage, processing, memory, and network
bandwidth.

Rapid Elasticity: Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand.

Measured Service: Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of
service (e.g., storage, processing, bandwidth, and active user accounts).

Multi-Tenancy: Although not one of NIST's five essential characteristics, multi-tenancy is
often listed alongside them. It is a feature where physical or virtual resources are allocated
in such a way that multiple tenants and their computations and data are isolated from and
inaccessible to one another.

Reference https://www.fingent.com/blog/cloud-service-models-saas-iaas-paas-choose-the-
right-one-for-your-business

The future of computing is in the cloud. What it implies is that you adapt your business to
fit in the cloud model. The opposite also holds true as your business can be left behind if
this new technology is underutilized.

Once you sign up for cloud service models (SaaS, IaaS, PaaS), you can leverage their wider
possibilities to bring the flexibility and efficiency that push your business growth.

Over the years, cloud services have witnessed exponential growth worldwide. Gartner's
forecast of global public cloud services revenue estimates that 2018 alone will generate
305.8 billion dollars, with 411.4 billion dollars projected for 2020.

What these reports signify is a steady adoption of cloud services by businesses across the
world to tackle the entire range of operations that they do. Meanwhile, numerous leading
players in the information technology sector now compete to deliver flexible cloud services
for both the public and enterprises.
Increasing competition means better delivery of services and innovation, which can
deeply benefit you in scaling up your business. Hence, now is the right time to deploy
a cloud model into your business infrastructure.
An Overview of Cloud Benefits
By adopting a cloud service into your enterprise, what could it possibly do to widen the
scope of your operations? The pros of cloud adoption far outweigh its cons, which is one
reason why you should consider it in the first place. Some of its advantages include:
Scalable – A cloud service allows quick scaling up and down of computing resources to
accommodate your changing needs.
Affordable – You pay less for a cloud service, as it eliminates unnecessary costs involved in
hardware upgrades and maintenance.
Secure – By signing up for a cloud service, you are essentially making your data more secure
through the provider's industry-grade security protocols.
If you have envisioned a goal of making your business more dynamic, then the cloud is the
way. And the question comes down to this: what type of cloud service model would you
implement, and which one will fit your unique business requirements?
Cloud Service Models Saas, IaaS, PaaS
Cloud models come in three types: SaaS (Software as a Service), IaaS (Infrastructure as a
Service) and PaaS (Platform as a Service). Each of the cloud models has its own set of
benefits that could serve the needs of various businesses.
Choosing between them requires an understanding of these cloud models, evaluating your
requirements and finding out how the chosen model can deliver your intended set of
workflows. The following is a brief description of the three types of cloud models and their
benefits.
SaaS
SaaS or Software as a Service is a model that gives quick access to cloud-based web
applications. The vendor controls the entire computing stack, which you can access using a
web browser. These applications run on the cloud and you can use them by a paid licensed
subscription or for free with limited access.
SaaS does not require any installations or downloads in your existing computing
infrastructure. This eliminates the need for installing applications on each of your computers
with the maintenance and support taken over by the vendor. Some well-known examples of SaaS
include Google G Suite, Microsoft Office 365, and Dropbox.
IaaS
IaaS or Infrastructure as a Service is basically a virtual provision of computing resources over
the cloud. An IaaS cloud provider can give you the entire range of computing infrastructure,
such as storage, servers, and networking hardware, alongside maintenance and support.
Businesses can opt for the computing resources they require without the need to install
hardware on their premises. Amazon Web Services, Microsoft Azure, and Google Compute
Engine are some of the leading IaaS cloud service providers.
PaaS
Platform as a Service or PaaS is essentially a cloud-based environment where you can develop,
test, and organize the different applications for your business. Implementing PaaS simplifies the
process of enterprise software development. The virtual runtime environment provided by
PaaS gives a favorable space for developing and testing applications.
The resources offered, in the form of servers, storage, and networking, are manageable
either by the company or a platform provider. Google App Engine and AWS Elastic Beanstalk
are two typical examples of PaaS. PaaS is also subscription based, giving you flexible
pricing options depending on your business requirements.

Reference https://www.sdxcentral.com/networking/virtualization/definitions/what-is-naas/

Networking-as-a-service (NaaS) is the sale of network services from third parties to
customers that don't want to build their own networking infrastructure.
NaaS packages networking resources, services, and applications as a product that can be
purchased for a number of users, usually for a contracted period of time. It can include
services such as Wide Area Networking (WAN) connectivity, data center connectivity,
bandwidth on demand, security services, and other applications.
Many forms of NaaS
Virtualization technology provides a platform for NaaS, which is related to other cloud
services. Services offered by cloud service providers (CSPs) in addition
to NaaS include Software-as-a-Service (SaaS); a computing platform for developing or hosting
applications, known as Platform-as-a-Service (PaaS); or an entire networking or
computing infrastructure, known as Infrastructure-as-a-Service (IaaS). Cloud services such as
NaaS and PaaS are provided by building a large, scalable infrastructure that can be virtualized
so that it can be sold to individual customers.
Large NaaS providers include major CSPs, such as Amazon and Rackspace, as
well as global service providers such as AT&T, Level 3 Communications, Telefonica, and
Verizon. More recently, niche NaaS providers have emerged in areas such as software-
defined WAN, which includes players such as Aryaka, Cloudgenix, Pertino, and VeloCloud. It
also includes specialized network providers such as Akamai, which has its own content
delivery network (CDN) for digital media delivery, as well as enterprise SaaS acceleration and
security services.
Overall, NaaS applies to a broad set of applications and services. For example, Aryaka and
Pertino offer WAN and secure Virtual Private Networks (VPN) as a service, Akamai offers CDN
as a service, Amazon offers web-hosting, private cloud, and storage as a service, and many
service providers offer bandwidth on demand and hosted networks as a service. Even entire
service providers might outsource their networks, as in the case of a mobile virtual network
operator (MVNO).
Standards and SDN to Drive NaaS Growth
One of the challenges of NaaS is developing standards for network interoperability and
portability. For example, a customer buying a NaaS service may want to make sure that the
service can be double-sourced or swapped out. In this case, the technology would need to
be compatible with other platforms or standards. This is where important standards, APIs,
and open-source initiatives such as OpenStack come into play. Existing Internet standards
developed in the IETF, such as MPLS and IP, are important. The MEF and TM Forum are two
standards bodies working on interoperability between carrier NaaS offerings.
SDN is likely to make Networking-as-a-Service offerings more prevalent, as service providers
look to leverage their hardware infrastructure so that it can be sold as enterprise network
services. Virtualization enables the capability to share, market, or sell just about any
platform, whether it's cloud infrastructure, networking, or business applications.

NIST, ISO/IEC 17788, and ISO/IEC 17789 all describe four different deployment
models:

Private cloud: In this model, the cloud infrastructure is provisioned for exclusive use
by a single organization comprising multiple consumers (e.g., business units). It may
be owned, managed, and
operated by the organization, a third party, or some combination of them, and it
may exist on or off premises.

Community cloud: Community cloud infrastructure is provisioned for exclusive use
by a specific community of consumers from organizations that have shared
concerns (e.g., mission, security requirements, policy, and compliance
considerations). It may be owned, managed, and operated by one or more of the
organizations in the community, a third party, or some combination of them, and it
may exist on or off premises.

Public cloud: The public cloud infrastructure is provisioned for open use by the
general public. It may be owned, managed, and operated by a business, academic,
or government organization, or some combination of them. It exists on the premises
of the cloud provider.

Hybrid cloud: The hybrid cloud infrastructure is a composition of two or more distinct
cloud infrastructures (private, community, or public) that remain unique entities but
are bound together by
standardized or proprietary technology that enables data and application portability
(e.g., cloud bursting for load balancing between clouds). As more organizations are
leveraging SaaS, PaaS, and IaaS, it is important to be aware of the limited ability they
have to define specific security controls and functions.

Inherently exposed to external communication/access: By their nature, cloud
systems tend to be more exposed to external communications.

Misconfiguration a major risk: Cloud providers typically have well-managed
infrastructure, but unfamiliarity with the interface and management functions often
results in users misconfiguring the cloud service or hosted components in a way
that exposes data.

May exist for long periods (risk of being outdated): Services ported to a cloud
environment may exist for long periods of time. While the underlying components
provisioned by the cloud service provider (CSP) may be periodically updated, it is
often the user's responsibility to update some components; users may assume this is
unnecessary or that the CSP is providing that function when it is not.

Gap between CSP and data owner security controls: There is a high risk for
misunderstanding on the cloud customer’s part where the responsibilities of the
CSP end for security and the customer responsibilities begin.

Read this article for added knowledge https://en.wikipedia.org/wiki/Internet_of_things

The Internet of Things (IoT) is made up of small, dedicated-use devices that are
typically designed as small form factor, embedded hardware with a limited-
functionality OS. They may interface with the physical world and tend to be
pervasively deployed where they exist. They are often connected to general-purpose
networks with the protections applied to general-purpose computing systems, and
their full range of functions and external accessibility may be unclear to the owner
or user.

Reference https://www.newgenapps.com/blog/iot-ecosystem-components-the-complete-
connectivity-layer

Systems are becoming smarter day by day, and it is claimed that over time we
will see many variations in technology, specifically the Internet of Things
(IoT). IoT is a network of smart devices, sensors, and actuators that can
interconnect with each other.

The IoT ecosystem is like a community consisting of data and monetary flows that
help connect enterprises and vendors together. This new chain of
development is an effective way to connect companies together. In the future,
companies will offer the IoT ecosystem in much the same way as risk management
and cyber security services.

What are the Key Components?

IoT is not just transforming connectivity among devices and objects; it is also
allowing people to get remote access easily. With so many advantages of IoT, it
is interesting to see the main ecosystem components that IoT works on. Here are
the main components on which IoT is built.

1. Gateway
Gateway enables easy management of data traffic flowing between protocols and
networks. It also translates network protocols and makes sure
that the devices and sensors are connected properly.
It can also preprocess the data from sensors and send it on to the next level
if it is configured accordingly; the presence of the TCP/IP protocol allows this easy flow.

In addition, it provides encryption for the network flow and data transmission.
The data flowing through it is protected using the latest encryption
techniques. You can think of it as an extra layer between the cloud and the
devices that filters out attacks and illegal network access.

2. Analytics
The analog data of devices and sensors are converted into a format that is easy to
read and analyze. This is all possible due to the IoT ecosystem that manages and
helps in improving the system. The main factor that is influenced is security.

The most important function of IoT technology is that it supports real-time analysis
that easily detects irregularities and prevents loss or fraud. Preventing
malicious actors from attacking the smart devices will not only give you a sense of security
but will also keep your private data from being used for illegal purposes.

Big companies collect data in bulk and analyze it to spot future
opportunities so that they can develop the business further and gain
something from it. This analysis helps in identifying future trends that have the
capability to shape the market. From this analysis, they can stay one step ahead
and achieve success. Data may be a small word, but it holds the power to
make or break a business if used correctly.

3. Connectivity Of Devices
The main components that complete the connectivity layer are sensors and devices.
Sensors collect the information and send it off to the next layer where it is being
processed. With the advancement of technology, semiconductor technology is used
that allows the production of micro smart sensors that can be used for several
applications.

The main components are:
• Proximity detection,
• Humidity or Moisture Level,
• Temperature sensors and thermostats,
• Pressure sensors,
• RFID tags.

Modern smart sensors and devices use various ways to connect. Wireless
networks like LoRaWAN, Wi-Fi, and Bluetooth make it easy for them to stay
connected. Each has its own advantages and drawbacks in areas such as
efficiency, data transfer rate, and power consumption.

4. Cloud
With the help of internet of things ecosystem, companies are able to collect bulk data
from the devices and applications. There are various tools that are used for the
purpose of data collection that can collect, process, handle, and store data
efficiently in real time. The cloud is also responsible for tough decisions that can
make or break the deal. All of this is done by one system: the IoT cloud.

It is a high-performance network of interconnected servers that
optimizes the processing of data coming from many devices at
once. It also helps in controlling traffic and delivering accurate data analytics results.

One of the most important components of the IoT cloud is distributed database
management. The cloud basically combines many devices, gateways,
protocols, and a data store that can be analyzed efficiently. These systems are
used by many companies in order to have improved and efficient data analysis that
can help in the development of services and products. In addition, it also
helps in forming an accurate strategy that can help in building an ideal business
model.

5. User Interface
This is another factor on which the IoT ecosystem depends immensely. It provides the
visible, physical part that can be easily accessed by the user. It is important for the
developer to create a user-friendly interface that can be used without extra effort
and that allows easy interaction.
With advances in technology, there are various interactive designs that can be
used easily and can resolve complex queries. For example, at home
people have started to use colorful touch panels instead of the hard controls that
were used earlier. This trend grows day by day; touchpads are now available
that can switch on an air conditioner from a distance.

This has set a trend for the digital generation and has energized
today's competitive market. The user interface is the first thing users pay
attention to before buying a device, and customers prefer devices
that are user-friendly, less complex, and usable over wireless connectivity.

6. Standards And Protocols


Web pages now use the HTML format with cascading style sheets. This
has made the internet a more stable and reliable service. These are the most
widely used standards, making the web not just friendly but broadly acceptable. However,
IoT does not yet have an equivalent standard.

It is important to choose an IoT platform, which helps determine the way your
platform will interact with the system. That way, you will be able to interact
with devices and networks that use the same standard as yours; sharing
the same protocol is important for successful interaction.

7. Database
The internet of things is growing dynamically and depends heavily on data that is
used extensively in data centers. It is essential to have a proper database system
that can store and manage the data gathered from various devices and
end-users. There are also various management tools that offer automated
features to help accumulate, store, and manage data in bulk in the
same place.

8. Automation
As mentioned above, the database system uses automated features that help
in managing and accumulating data. However, automation in the internet of things is
not limited to data management. It now extends to far more advanced uses that allow
the automatic adjustment of wireless devices. For
example, you can easily control lights with a click of a remote. The air conditioner is
now connected to your smartphone, and you can switch it on and off whenever
you want. It is even possible to adjust the temperature.

9. Development
The internet of things is the latest advancement in technology. The need for
development is growing with time. Everyone is now watching for
the launch of various automated devices and smart sensors. There are
various prototypes in the market that are being deployed and are running in
the testing phase. Also, IoT does not work with only one device. Hence, it is
important that the devices are completely tested for compatibility and
checked thoroughly to confirm whether they can connect wirelessly or
not.

The journey of the internet of things has been growing for years. We have
experienced many advancements across most technologies. The IoT ecosystem
is used to make protocols easily accessible, reasonably priced, efficient, and
secure. It will be exciting to see the new demands and developments in the various
sectors that will be brought by the internet of things.

Especially notable is the way it connects different companies and vendors together. The
main thing is that we need to look at how everyone can incorporate this IoT ecosystem
to increase their production.

This is a reference architecture that I created for a smart city project in North America.
InshAllah we will cover many IoT items in the upcoming CODS advance track due to be
released in 2021 inshAllah.

Reference https://en.wikipedia.org/wiki/Cryptography

Cryptography or cryptology (from Ancient Greek: κρυπτός, romanized: kryptós "hidden, secret"; and γράφειν graphein, "to
write", or -λογία -logia, "study", respectively[1]) is the practice and study of
techniques for secure communication in the presence of third parties
called adversaries.[2] More generally, cryptography is about constructing and
analyzing protocols that prevent third parties or the public from reading private
messages;[3] various aspects in information security such as
data confidentiality, data integrity, authentication, and non-repudiation[4] are
central to modern cryptography. Modern cryptography exists at the intersection of
the disciplines of mathematics, computer science, electrical
engineering, communication science, and physics. Applications of cryptography
include electronic commerce, chip-based payment cards, digital
currencies, computer passwords, and military communications.

Cryptography prior to the modern age was effectively synonymous with encryption,
the conversion of information from a readable state to apparent nonsense. The

originator of an encrypted message shares the decoding technique only with
intended recipients to preclude access from adversaries. The cryptography
literature often uses the names Alice ("A") for the sender, Bob ("B") for the intended
recipient, and Eve ("eavesdropper") for the adversary.[5] Since the development
of rotor cipher machines in World War I and the advent of computers in World War II,
the methods used to carry out cryptology have become increasingly complex and its
application more widespread.
Modern cryptography is heavily based on mathematical theory and computer science
practice; cryptographic algorithms are designed around computational hardness
assumptions, making such algorithms hard to break in practice by any adversary. It is
theoretically possible to break such a system, but it is infeasible to do so by any
known practical means. These schemes are therefore termed computationally
secure; theoretical advances, e.g., improvements in integer factorization algorithms,
and faster computing technology require these solutions to be continually adapted.
There exist information-theoretically secure schemes that provably cannot be broken
even with unlimited computing power—an example is the one-time pad—but these
schemes are more difficult to use in practice than the best theoretically breakable
but computationally secure mechanisms.
The growth of cryptographic technology has raised a number of legal issues in the
information age. Cryptography's potential for use as a tool
for espionage and sedition has led many governments to classify it as a weapon and
to limit or even prohibit its use and export.[6] In some jurisdictions where the use of
cryptography is legal, laws permit investigators to compel the disclosure of
encryption keys for documents relevant to an investigation.[7][8] Cryptography also
plays a major role in digital rights management and copyright infringement of digital
media.[9]

Reference https://en.wikipedia.org/wiki/Cryptography

Terminology

The first use of the term cryptograph (as opposed to cryptogram) dates back to the
19th century—originating from The Gold-Bug, a novel by Edgar Allan Poe.[10][11]

Until modern times, cryptography referred almost exclusively to encryption, which is
the process of converting ordinary information (called plaintext) into unintelligible
form (called ciphertext).[12] Decryption is the reverse, in other words, moving from
the unintelligible ciphertext back to plaintext.

A cipher (or cypher) is a pair of algorithms that create the encryption and the
reversing decryption. The detailed operation of a cipher is controlled both by the
algorithm and in each instance by a "key". The key is a secret (ideally known only to
the communicants), usually a short string of characters, which is needed to decrypt
the ciphertext. Formally, a "cryptosystem" is the ordered list of elements of finite
possible plaintexts, finite possible cyphertexts, finite possible keys, and the

encryption and decryption algorithms which correspond to each key. Keys are
important both formally and in actual practice, as ciphers without variable keys can
be trivially broken with only the knowledge of the cipher used and are therefore
useless (or even counter-productive) for most purposes.
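
To make the algorithm/key split concrete, here is a toy Caesar-style cipher in Python: the algorithm (shift each letter) is public, and the key (the shift amount) is the secret. This is purely illustrative; a fixed-shift cipher is trivially broken and is not something to use in practice.

def caesar(text: str, key: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + key) % 26 + base))   # shift within the alphabet
        else:
            out.append(ch)                                        # leave non-letters alone
    return "".join(out)

ciphertext = caesar("attack at dawn", 3)
print(ciphertext)               # "dwwdfn dw gdzq"
print(caesar(ciphertext, -3))   # shifting back with the key recovers the plaintext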

Historically, ciphers were often used directly for encryption or decryption without
additional procedures such as authentication or integrity checks. There are two kinds
of cryptosystems: symmetric and asymmetric. In symmetric systems the same key
(the secret key) is used to encrypt and decrypt a message. Data manipulation in
symmetric systems is faster than asymmetric systems as they generally use shorter
key lengths. Asymmetric systems use a public key to encrypt a message and a private
key to decrypt it. Use of asymmetric systems enhances the security of
communication.[13] Examples of asymmetric systems include RSA (Rivest–Shamir–
Adleman), and ECC (Elliptic Curve Cryptography). Symmetric models include the
commonly used AES (Advanced Encryption Standard) which replaced the older DES
(Data Encryption Standard).[14]

In colloquial use, the term "code" is often used to mean any method of encryption or
concealment of meaning. However, in cryptography, code has a more specific
meaning. It means the replacement of a unit of plaintext (i.e., a meaningful word or
phrase) with a code word (for example, "wallaby" replaces "attack at dawn").

Cryptanalysis is the term used for the study of methods for obtaining the meaning of
encrypted information without access to the key normally required to do so; i.e., it is
the study of how to crack encryption algorithms or their implementations.

Some use the terms cryptography and cryptology interchangeably in English, while
others (including US military practice generally) use cryptography to refer specifically
to the use and practice of cryptographic techniques and cryptology to refer to the
combined study of cryptography and cryptanalysis.[15][16] English is more flexible than
several other languages in which cryptology (done by cryptologists) is always used in
the second sense above. RFC 2828 advises that steganography is sometimes included
in cryptology.[17]
The study of characteristics of languages that have some application in cryptography
or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is
called cryptolinguistics.

History of cryptography and cryptanalysis


Before the modern era, cryptography focused on message confidentiality (i.e.,
encryption)—conversion of messages from a comprehensible form into an

incomprehensible one and back again at the other end, rendering it unreadable by
interceptors or eavesdroppers without secret knowledge (namely the key needed for
decryption of that message). Encryption attempted to
ensure secrecy in communications, such as those of spies, military leaders,
and diplomats. In recent decades, the field has expanded beyond confidentiality
concerns to include techniques for message integrity checking, sender/receiver
identity authentication, digital signatures, interactive proofs and secure computation,
among others.

Cryptography today can be said to provide some important
security services. The five key services that cryptography can provide are the
following:

Confidentiality: Cryptography provides confidentiality through altering or hiding a
message so that ideally it cannot be understood by anyone except the intended
recipient. Confidentiality is a service that ensures keeping information secret from
those who are not authorized to have it. Secrecy is a term sometimes used to mean
confidentiality.

Integrity: Cryptographic tools can provide integrity services that allow a recipient to
verify that a message has not been altered. Cryptography tools cannot prevent a
message from being altered, but they can be effective to detect either intentional
or accidental modification of the message.

Cryptographic functions use several methods to ensure that a message has not
been changed or altered. These may include hash functions, digital signatures, and
simpler message integrity controls such as message authentication codes (MACs),
cyclic redundancy checks (CRCs), or even checksums. The concept
behind this is that the recipient is able to detect any change that has been made to a
message, whether accidentally or intentionally.
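
A minimal sketch of one of the MAC mechanisms mentioned above, using an HMAC from Python's standard library. The shared key and message are made-up examples; both parties must already hold the same secret key.

import hmac
import hashlib

key = b"shared-secret-key"                      # assumed pre-shared secret
message = b"transfer 100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()   # sent along with the message

# Receiver recomputes the tag over what it received and compares in constant time.
ok = hmac.compare_digest(hmac.new(key, message, hashlib.sha256).hexdigest(), tag)
print(ok)   # True; any change to the message or the tag makes this False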

Authenticity: Sometimes referred to as “proof of origin,” this is a service that allows
entities wanting to communicate with each other to positively identify each other.
Information delivered over a channel should be authenticated as to the origin of that
transmission. Authenticity can allow a recipient to know positively that a
transmission of information actually came from the entity that we expect it from.

Non-repudiation: This is a service that prevents an entity from denying having
participated in a previous action. Typically, non-repudiation can only be achieved
properly through the use of digital signatures. The word repudiation means the ability
to deny, so non-repudiation means the inability to deny. There are
two flavors of non-repudiation:
o Non-repudiation of origin means that the sender cannot deny they sent a
particular message.
o Non-repudiation of delivery means that the receiver cannot say that they
received a different message than the one they actually did receive.
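
A minimal digital-signature sketch illustrating non-repudiation of origin, using the third-party cryptography package (assumed to be installed). Ed25519 is chosen here only as a convenient example algorithm; the message is hypothetical.

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

private_key = ed25519.Ed25519PrivateKey.generate()   # held only by the signer
public_key = private_key.public_key()                # distributed to verifiers

message = b"I authorize this payment"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)            # raises if the signature is not genuine
    print("valid: only the private-key holder could have produced this signature")
except InvalidSignature:
    print("invalid signature")

Because only the signer holds the private key, a verified signature is evidence the signer cannot easily deny, which is what gives digital signatures their non-repudiation value.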

Access Control: This is an added benefit of cryptography. Through the use of
cryptographic tools, many forms of access control are supported—from log-ins via passwords and
passphrases to the prevention of access to confidential files or messages. In all cases,
access would only be possible for those individuals who had access to the correct
cryptographic keys.

We will cover them in detail later in the chapter inshAllah.

As we have seen, cryptography is about writing secrets. The first secret messages
were exchanged thousands of years ago. Cryptography involves
scrambling some kind of useful information in its original form, called plaintext, into
a garbled or secret form, called ciphertext. The usual intent is to allow two or more
parties to communicate the information while preventing other parties from being
privy to it.

Data at Rest
The protection of stored data is often a key requirement for an organization’s
sensitive information. Backups, off-site storage, password files, sensitive databases,
valuable files, and other types of sensitive information need to be protected from
disclosure or undetected alteration. This can usually be done through the use of
cryptographic algorithms that limit access to the data to those who hold the proper
encryption (and decryption) keys. Protecting these valuable assets of
the organization is usually done through cryptography and is referred to as
protecting data at rest. Data at rest means the data is resting, stored on some
storage media, and not moving at any point.
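
A minimal data-at-rest sketch using Fernet (authenticated symmetric encryption) from the third-party cryptography package, assumed to be installed. Key storage and management are deliberately out of scope here; protecting the key is the hard part in practice.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the key itself must be stored and protected
f = Fernet(key)

plaintext = b"backup of sensitive customer records"
ciphertext = f.encrypt(plaintext)    # safe to write to disk, tape, or off-site storage

# Only a holder of the key can recover the data; tampering is also detected.
assert f.decrypt(ciphertext) == plaintext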

Data in Transit or in Motion (data on the move)

Data in transit, sometimes referred to as data in motion, is data that is moving,
usually across networks. Whether the message is sent manually, over a voice
network, or via the internet, modern cryptography can provide secure and
confidential methods to transmit data and allows the verification of the integrity of
the message so that any changes to the message itself can be detected.

End-to-end encryption is generally performed by the end user within an
organization. The data is encrypted at the start of the communications channel or
before and remains encrypted until it is
decrypted at the remote end. Although data remain encrypted when passed
through a network, routing information remains visible.

Data that is moving across a network can be protected using cryptography. There
are two methods for protecting data in transit across a network, link or end-to-end
encryption. In general, link encryption is performed by service providers, such as a
data communications provider on networks. Link encryption encrypts all of the
data along a communications path (e.g., a satellite link, telephone circuit, or T-1
line). Because link encryption also encrypts routing data, communications nodes
need to decrypt the data to
continue routing. The data packet is decrypted and re-encrypted at each point in
the communications channel. It is theoretically possible that an attacker
compromising a node in the network may see the message in the clear. Because link
encryption also encrypts the routing information, it provides traffic confidentiality
(not data confidentiality) better than end-to-end encryption. In other words, it can
be used to hide the routing information. Traffic confidentiality hides the addressing
information from an observer, preventing an inference attack based on the
existence of traffic between two parties.

Reference https://en.wikipedia.org/wiki/History_of_cryptography

Please read this in detail

Cryptography, the use of codes and ciphers to protect secrets, began thousands of
years ago. Until recent decades, it has been the story of what might be called classic
cryptography — that is, of methods of encryption that use pen and paper, or
perhaps simple mechanical aids. In the early 20th century, the invention of complex
mechanical and electromechanical machines, such as the Enigma rotor machine,
provided more sophisticated and efficient means of encryption; and the subsequent
introduction of electronics and computing has allowed elaborate schemes of still
greater complexity, most of which are entirely unsuited to pen and paper.

The development of cryptography has been paralleled by the development


of cryptanalysis — the "breaking" of codes and ciphers. The discovery and
application, early on, of frequency analysis to the reading of encrypted
communications has, on occasion, altered the course of history. Thus
the Zimmermann Telegram triggered the United States' entry into World War I;

and Allied reading of Nazi Germany's ciphers shortened World War II, in some
evaluations by as much as two years.
Until the 1970s, secure cryptography was largely the preserve of governments. Two
events have since brought it squarely into the public domain: the creation of a public
encryption standard (DES), and the invention of public-key cryptography.

Antiquity

A Scytale, an early device for encryption.


The earliest known use of cryptography is found in non-standard hieroglyphs carved
into the wall of a tomb from the Old Kingdom of Egypt circa 1900 BC.[1] These are not
thought to be serious attempts at secret communications, however, but rather to
have been attempts at mystery, intrigue, or even amusement for literate
onlookers.[1]
Some clay tablets from Mesopotamia somewhat later are clearly meant to protect
information—one dated near 1500 BC was found to encrypt a craftsman's recipe for
pottery glaze, presumably commercially valuable.[2][3] Furthermore, Hebrew scholars
made use of simple monoalphabetic substitution ciphers (such as the Atbash cipher)
beginning perhaps around 500 to 600 BC.[4][5]
In India around 400 BC to 200 AD, Mlecchita vikalpa or the art of understanding
writing in cypher, and the writing of words in a peculiar way was documented in
the Kama Sutra for the purpose of communication between lovers. This was also
likely a simple substitution cipher.[6][7] Parts of the Egyptian demotic Greek Magical
Papyri were written in a cypher script.[8]
The ancient Greeks are said to have known of ciphers.[9] The scytale transposition
cipher was used by the Spartan military,[5] but it is not definitively known whether the
scytale was for encryption, authentication, or avoiding bad omens in
speech.[10][11] Herodotus tells us of secret messages physically concealed beneath wax
on wooden tablets or as a tattoo on a slave's head concealed by regrown hair,
although these are not properly examples of cryptography per se as the message,
once known, is directly readable; this is known as steganography. Another Greek
method was developed by Polybius (now called the "Polybius
Square").[5] The Romans knew something of cryptography (e.g., the Caesar cipher and
its variations).[12]

Medieval cryptography
The first page of al-Kindi's manuscript On Deciphering Cryptographic Messages,
containing the first descriptions of cryptanalysis and frequency analysis.
David Kahn notes in The Codebreakers that modern cryptology originated among
the Arabs, the first people to systematically document cryptanalytic methods.[13] Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first
use of permutations and combinations to list all possible Arabic words with and
without vowels.[14]

The invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers, by Al-Kindi, an Arab
mathematician,[15][16] sometime around AD 800, proved to be the single most
significant cryptanalytic advance until World War II. Al-Kindi wrote a book on
cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering
Cryptographic Messages), in which he described the first cryptanalytic techniques,
including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and
syntax, and most importantly, gave the first descriptions on frequency analysis.[17] He
also covered methods of encipherments, cryptanalysis of certain encipherments, and
statistical analysis of letters and letter combinations in Arabic.[18][19] An important
contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency
analysis.[14]

In early medieval England between the years 800-1100, substitution ciphers were
frequently used by scribes as a playful and clever way to encipher notes, solutions to
riddles, and colophons. The ciphers tend to be fairly straightforward, but sometimes
they deviate from an ordinary pattern, adding to their complexity and, possibly, to
their sophistication as well.[20] This period saw vital and significant cryptographic
experimentation in the West.

Ahmad al-Qalqashandi (AD 1355–1418) wrote the Subh al-a 'sha, a 14-volume
encyclopedia which included a section on cryptology. This information was attributed
to Ibn al-Durayhim who lived from AD 1312 to 1361, but whose writings on
cryptography have been lost. The list of ciphers in this work included
both substitution and transposition, and for the first time, a cipher with multiple
substitutions for each plaintext letter. Also traced to Ibn al-Durayhim is an exposition
on and worked example of cryptanalysis, including the use of tables of letter
frequencies and sets of letters which cannot occur together in one word.

The earliest example of the homophonic substitution cipher is the one used by Duke
of Mantua in the early 1400s.[21] A homophonic cipher replaces each letter with multiple symbols depending on the letter frequency. The cipher was ahead of its time because it combined monoalphabetic and polyalphabetic features.

Essentially all ciphers remained vulnerable to the cryptanalytic technique of frequency analysis until the development of the polyalphabetic cipher, and many
remained so thereafter. The polyalphabetic cipher was most clearly explained by Leon Battista Alberti around the year AD 1467, for which he was called the "father
of Western cryptology".[1] Johannes Trithemius, in his work Polygraphia, invented
the tabula recta, a critical component of the Vigenère cipher. Trithemius also wrote
the Steganographia. The French cryptographer Blaise de Vigenère devised a practical
polyalphabetic system which bears his name, the Vigenère cipher.[1]

In Europe, cryptography became (secretly) more important as a consequence of political competition and religious revolution. For instance, in Europe during and
after the Renaissance, citizens of the various Italian states—the Papal States and the
Roman Catholic Church included—were responsible for rapid proliferation of
cryptographic techniques, few of which reflect understanding (or even knowledge) of
Alberti's polyalphabetic advance. 'Advanced ciphers', even after Alberti, weren't as
advanced as their inventors / developers / users claimed (and probably even
themselves believed). They were regularly broken. This over-optimism may be
inherent in cryptography, for it was then - and remains today - fundamentally difficult
to accurately know how vulnerable one's system actually is. In the absence of
knowledge, guesses and hopes, predictably, are common.

Cryptography, cryptanalysis, and secret-agent/courier betrayal featured in the Babington plot during the reign of Queen Elizabeth I which led to the execution
of Mary, Queen of Scots. Robert Hooke suggested in the chapter Of Dr. Dee's Book of
Spirits, that John Dee made use of Trithemian steganography, to conceal his
communication with Queen Elizabeth I.[22]

The chief cryptographer of King Louis XIV of France was Antoine Rossignol and he and
his family created what is known as the Great Cipher because it remained unsolved
from its initial use until 1890, when French military cryptanalyst, Étienne
Bazeries solved it.[23] An encrypted message from the time of the Man in the Iron
Mask (decrypted just prior to 1900 by Étienne Bazeries) has shed some, regrettably
non-definitive, light on the identity of that real, if legendary and unfortunate,
prisoner.
Outside of Europe, after the Mongols brought about the end of the Islamic Golden
Age, cryptography remained comparatively undeveloped. Cryptography in
Japan seems not to have been used until about 1510, and advanced techniques were
not known until after the opening of the country to the West beginning in the 1860s.

Cryptography from 1800 to World War II
Although cryptography has a long and complex history, it wasn't until the 19th century that it developed anything more than ad hoc approaches to either
encryption or cryptanalysis (the science of finding weaknesses in crypto systems).
Examples of the latter include Charles Babbage's Crimean War era work on
mathematical cryptanalysis of polyalphabetic ciphers, redeveloped and published
somewhat later by the Prussian Friedrich Kasiski. Understanding of cryptography at
this time typically consisted of hard-won rules of thumb; see, for example, Auguste
Kerckhoffs' cryptographic writings in the latter 19th century. Edgar Allan Poe used
systematic methods to solve ciphers in the 1840s. In particular he placed a notice of
his abilities in the Philadelphia paper Alexander's Weekly (Express) Messenger,
inviting submissions of ciphers, of which he proceeded to solve almost all. His success
created a public stir for some months.[24] He later wrote an essay on methods of
cryptography which proved useful as an introduction for novice British cryptanalysts
attempting to break German codes and ciphers during World War I, and a famous
story, The Gold-Bug, in which cryptanalysis was a prominent element.
Cryptography, and its misuse, were involved in the execution of Mata Hari and
in Dreyfus' conviction and imprisonment, both in the early 20th century.
Cryptographers were also involved in exposing the machinations which had led to the
Dreyfus affair; Mata Hari, in contrast, was shot.
In World War I the Admiralty's Room 40 broke German naval codes and played an
important role in several naval engagements during the war, notably in detecting
major German sorties into the North Sea that led to the battles of Dogger
Bank and Jutland as the British fleet was sent out to intercept them. However its
most important contribution was probably in decrypting the Zimmermann Telegram,
a cable from the German Foreign Office sent via Washington to
its ambassador Heinrich von Eckardt in Mexico which played a major part in bringing
the United States into the war.
In 1917, Gilbert Vernam proposed a teleprinter cipher in which a previously prepared
key, kept on paper tape, is combined character by character with the plaintext
message to produce the cyphertext. This led to the development of
electromechanical devices as cipher machines, and to the only unbreakable cipher,
the one time pad.
During the 1920s, Polish naval-officers assisted the Japanese military with code and
cipher development.
Mathematical methods proliferated in the period prior to World War II (notably
in William F. Friedman's application of statistical techniques to cryptanalysis and
cipher development and in Marian Rejewski's initial break into the German Army's
version of the Enigma system in 1932).
World War II cryptography
The Enigma machine was widely used by Nazi Germany; its cryptanalysis by the Allies provided vital Ultra intelligence.
By World War II, mechanical and electromechanical cipher machines were in wide
use, although—where such machines were impractical—code books and manual
systems continued in use. Great advances were made in both cipher design
and cryptanalysis, all in secrecy. Information about this period has begun to be
declassified as the official British 50-year secrecy period has come to an end, as US
archives have slowly opened, and as assorted memoirs and articles have appeared.
Germany
The Germans made heavy use, in several variants, of an electromechanical rotor
machine known as Enigma.[25] Mathematician Marian Rejewski, at Poland's Cipher
Bureau, in December 1932 deduced the detailed structure of the German Army
Enigma, using mathematics and limited documentation supplied by Captain Gustave
Bertrand of French military intelligence. This was the greatest breakthrough in
cryptanalysis in a thousand years and more, according to historian David Kahn. Rejewski and his mathematical Cipher Bureau colleagues, Jerzy Różycki and Henryk Zygalski, continued reading Enigma and keeping pace with the
evolution of the German Army machine's components and encipherment procedures.
As the Poles' resources became strained by the changes being introduced by the
Germans, and as war loomed, the Cipher Bureau, on the Polish General Staff's
instructions, on 25 July 1939, at Warsaw, initiated French and British intelligence
representatives into the secrets of Enigma decryption.
Soon after the Invasion of Poland by Germany on 1 September 1939, key Cipher
Bureau personnel were evacuated southeastward; on 17 September, as the Soviet
Union attacked Poland from the East, they crossed into Romania. From there they
reached Paris, France; at PC Bruno, near Paris, they continued breaking Enigma,
collaborating with British cryptologists at Bletchley Park as the British got up to speed
on breaking Enigma. In due course, the British cryptographers – whose ranks included
many chess masters and mathematics dons such as Gordon Welchman, Max
Newman, and Alan Turing (the conceptual founder of modern computing) –
substantially advanced the scale and technology of Enigma decryption.
German code breaking in World War II also had some success, most importantly
by breaking the Naval Cipher No. 3. This enabled them to track and sink Atlantic
convoys. It was only Ultra intelligence that finally persuaded the admiralty to change
their codes in June 1943. This is surprising given the success of the British Room
40 code breakers in the previous world war.
At the end of the War, on 19 April 1945, Britain's top military officers were told that
they could never reveal that the German Enigma cipher had been broken because it
would give the defeated enemy the chance to say they "were not well and fairly
beaten".[26]
The German military also deployed several teleprinter stream ciphers. Bletchley Park
called them the Fish ciphers, and Max Newman and colleagues designed and deployed the Heath Robinson, and then the world's first programmable digital
electronic computer, the Colossus, to help with their cryptanalysis. The German
Foreign Office began to use the one-time pad in 1919; some of this traffic was read in
World War II partly as the result of recovery of some key material in South America
that was discarded without sufficient care by a German courier.
The Schlüsselgerät 41 was developed late in the war as a more secure replacement
for Enigma, but only saw limited use.
Japan
A US Army group, the SIS, managed to break the highest security Japanese diplomatic
cipher system (an electromechanical stepping switch machine called Purple by the
Americans) in 1940, before World War II began. The locally developed Purple
machine replaced the earlier "Red" machine used by the Japanese Foreign Ministry,
and a related machine, the M-1, used by Naval attachés which was broken by the U.S.
Navy's Agnes Driscoll. All the Japanese machine ciphers were broken, to one degree
or another, by the Allies.
The Japanese Navy and Army largely used code book systems, later with a separate
numerical additive. US Navy cryptographers (with cooperation from British and Dutch
cryptographers after 1940) broke into several Japanese Navy crypto systems. The
break into one of them, JN-25, famously led to the US victory in the Battle of Midway;
and to the publication of that fact in the Chicago Tribune shortly after the battle,
though the Japanese seem not to have noticed for they kept using the JN-25 system.
Allies
The Americans referred to the intelligence resulting from cryptanalysis, perhaps
especially that from the Purple machine, as 'Magic'. The British eventually settled on
'Ultra' for intelligence resulting from cryptanalysis, particularly that from message
traffic protected by the various Enigmas. An earlier British term for Ultra had been
'Boniface' in an attempt to suggest, if betrayed, that it might have an individual agent
as a source.
SIGABA is described in U.S. Patent 6,175,625, filed in 1944 but not issued until 2001.
Allied cipher machines used in World War II included the British TypeX and the
American SIGABA; both were electromechanical rotor designs similar in spirit to the
Enigma, albeit with major improvements. Neither is known to have been broken by
anyone during the War. The Poles used the Lacida machine, but its security was
found to be less than intended (by Polish Army cryptographers in the UK), and its use
was discontinued. US troops in the field used the M-209 and the still less secure M-
94 family machines. British SOE agents initially used 'poem ciphers' (memorized
poems were the encryption/decryption keys), but later in the War, they began
to switch to one-time pads.
The VIC cipher (used at least until 1957 in connection with Rudolf Abel's NY spy ring)
was a very complex hand cipher, and is claimed to be the most complicated known to
have been used by the Soviets, according to David Kahn in Kahn on Codes. For the decrypting of Soviet ciphers (particularly when one-time pads were reused),
see Venona project.
Role of women
The UK and US employed large numbers of women in their code-breaking operation,
with close to 7,000 reporting to Bletchley Park[27] and 11,000 to the separate US Army
and Navy operations, around Washington, DC.[28] By tradition in Japan and Nazi
doctrine in Germany, women were excluded from war work, at least until late in the
war. Even after encryption systems were broken, large amounts of work were needed to respond to changes made, recover daily key settings for multiple networks, and
intercept, process, translate, prioritize and analyze the huge volume of enemy
messages generated in a global conflict. A few women, including Elizabeth
Friedman and Agnes Meyer Driscoll, had been major contributors to US code
breaking in the 1930s and the Navy and Army began actively recruiting top graduates
of women's colleges shortly before the attack on Pearl Harbor. Liza Mundy argues
that this disparity in utilizing the talents of women between the Allies and Axis made
a strategic difference in the war.[28]:p.29
Modern cryptography
Encryption in modern times is achieved by using algorithms that have a key to
encrypt and decrypt information. These keys convert the messages and data into
"digital gibberish" through encryption and then return them to the original form
through decryption. In general, the longer the key is, the more difficult it is to crack
the code. This holds true because deciphering an encrypted message by brute force
would require the attacker to try every possible key. To put this in context, each
binary unit of information, or bit, has a value of 0 or 1. An 8-bit key would then have
256 or 2^8 possible keys. A 56-bit key would have 2^56, or 72 quadrillion, possible
keys to try and decipher the message. With modern technology, cyphers using keys
with these lengths are becoming easier to decipher. DES, an early US Government
approved cypher, has an effective key length of 56 bits, and test messages using that
cypher have been broken by brute force key search. However, as technology
advances, so does the quality of encryption. Since World War II, one of the most
notable advances in the study of cryptography is the introduction of the asymmetric
key cyphers (sometimes termed public-key cyphers). These are algorithms which use
two mathematically related keys for encryption of the same message. Some of these
algorithms permit publication of one of the keys, due to it being extremely difficult to
determine one key simply from knowledge of the other.[29]
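As a rough illustration of the keyspace arithmetic above, the following Python snippet shows how key length translates into the number of keys an attacker would have to try; the trial rate of one billion keys per second is a purely assumed figure for the estimate.

```python
# Keyspace arithmetic for the key lengths quoted above.
for bits in (8, 56, 128, 256):
    print(f"{bits:3}-bit key -> {2 ** bits:.3e} possible keys")

# Hypothetical brute-force estimate (illustrative rate only; real attack
# hardware varies enormously in speed).
trials_per_second = 10 ** 9
seconds = 2 ** 56 / trials_per_second
print(f"Exhausting a 56-bit keyspace at that rate: about {seconds / 86400:.0f} days")
```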
Beginning around 1990, the use of the Internet for commercial purposes and the
introduction of commercial transactions over the Internet called for a widespread
standard for encryption. Before the introduction of the Advanced Encryption
Standard (AES), information sent over the Internet, such as financial data, was
encrypted if at all, most commonly using the Data Encryption Standard (DES). This had been approved by NBS (a US Government agency) for its security, after a public call for, and a competition among, candidates for such a cypher algorithm. DES was
approved for a short period, but saw extended use due to complex wrangles over the
use by the public of high quality encryption. DES was finally replaced by the AES after
another public competition organized by the NBS successor agency, NIST. Around the
late 1990s to early 2000s, the use of public-key algorithms became a more common
approach for encryption, and soon a hybrid of the two schemes became the most
accepted way for e-commerce operations to proceed. Additionally, the creation of a
new protocol known as the Secure Socket Layer, or SSL, led the way for online
transactions to take place. Transactions ranging from purchasing goods to online bill
pay and banking used SSL. Furthermore, as wireless Internet connections became
more common among households, the need for encryption grew, as a level of
security was needed in these everyday situations.[30]
Claude Shannon
Claude E. Shannon is considered by many to be the father of mathematical
cryptography. Shannon worked for several years at Bell Labs, and during his time
there, he produced an article entitled "A mathematical theory of cryptography". This
article was written in 1945 and eventually was published in the Bell System Technical
Journal in 1949.[31] It is commonly accepted that this paper was the starting point for
development of modern cryptography. Shannon was inspired during the war to
address "[t]he problems of cryptography [because] secrecy systems furnish an
interesting application of communication theory". Shannon identified the two main
goals of cryptography: secrecy and authenticity. His focus was on exploring secrecy
and thirty-five years later, G.J. Simmons would address the issue of authenticity.
Shannon wrote a further article entitled "A mathematical theory of communication"
which highlights one of the most significant aspects of his work: cryptography's
transition from art to science.[32]
In his works, Shannon described the two basic types of systems for secrecy. The first
are those designed with the intent to protect against hackers and attackers who have
infinite resources with which to decode a message (theoretical secrecy, now
unconditional security), and the second are those designed to protect against hackers
and attacks with finite resources with which to decode a message (practical secrecy,
now computational security). Most of Shannon's work focused around theoretical
secrecy; here, Shannon introduced a definition for the "unbreakability" of a cipher. If
a cipher was determined "unbreakable", it was considered to have "perfect secrecy".
In proving "perfect secrecy", Shannon determined that this could only be obtained
with a secret key whose length given in binary digits was greater than or equal to the
number of bits contained in the information being encrypted. Furthermore, Shannon
developed the "unicity distance", defined as the "amount of plaintext that…
determines the secret key."[32]
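Stated in symbols (a standard restatement of Shannon's results, not taken verbatim from these notes): perfect secrecy means the ciphertext reveals nothing about the plaintext, which in turn requires at least as much key entropy as message entropy, and the unicity distance follows from the key entropy and the redundancy of the plaintext language.

\[ \Pr(M = m \mid C = c) = \Pr(M = m) \quad\Longrightarrow\quad H(K) \ge H(M), \qquad U = \frac{H(K)}{D} \]

where H(K) is the entropy of the key, H(M) the entropy of the message, and D the per-character redundancy of the plaintext.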
Shannon's work influenced further cryptography research in the 1970s, as the public-key cryptography developers, M. E. Hellman and W. Diffie cited Shannon's research as
a major influence. His work also impacted modern designs of secret-key ciphers. At
the end of Shannon's work with cryptography, progress slowed until Hellman and
Diffie introduced their paper involving "public-key cryptography".[32]
An encryption standard
The mid-1970s saw two major public (i.e., non-secret) advances. First was the
publication of the draft Data Encryption Standard in the U.S. Federal Register on 17
March 1975. The proposed DES cipher was submitted by a research group at IBM, at
the invitation of the National Bureau of Standards (now NIST), in an effort to develop
secure electronic communication facilities for businesses such as banks and other
large financial organizations. After advice and modification by the NSA, acting behind
the scenes, it was adopted and published as a Federal Information Processing
Standard Publication in 1977 (currently at FIPS 46-3). DES was the first publicly
accessible cipher to be 'blessed' by a national agency such as the NSA. The release of
its specification by NBS stimulated an explosion of public and academic interest in
cryptography.
The aging DES was officially replaced by the Advanced Encryption Standard (AES) in
2001 when NIST announced FIPS 197. After an open competition, NIST
selected Rijndael, submitted by two Belgian cryptographers, to be the AES. DES, and
more secure variants of it (such as Triple DES), are still used today, having been
incorporated into many national and organizational standards. However, its 56-bit
key-size has been shown to be insufficient to guard against brute force attacks (one
such attack, undertaken by the cyber civil-rights group Electronic Frontier
Foundation in 1997, succeeded in 56 hours.[33]) As a result, use of straight DES
encryption is now without doubt insecure for use in new cryptosystem designs, and
messages protected by older cryptosystems using DES, and indeed all messages sent
since 1976 using DES, are also at risk. Regardless of DES' inherent quality, the DES key
size (56-bits) was thought to be too small by some even in 1976, perhaps most
publicly by Whitfield Diffie. There was suspicion that government organizations even
then had sufficient computing power to break DES messages; clearly others have
achieved this capability.
Public key
The second development, in 1976, was perhaps even more important, for it
fundamentally changed the way cryptosystems might work. This was the publication
of the paper New Directions in Cryptography by Whitfield Diffie and Martin Hellman.
It introduced a radically new method of distributing cryptographic keys, which went
far toward solving one of the fundamental problems of cryptography, key
distribution, and has become known as Diffie–Hellman key exchange. The article also
stimulated the almost immediate public development of a new class of enciphering
algorithms, the asymmetric key algorithms.
Prior to that time, all useful modern encryption algorithms had been symmetric key algorithms, in which the same cryptographic key is used with the underlying
algorithm by both the sender and the recipient, who must both keep it secret. All of
the electromechanical machines used in World War II were of this logical class, as
were the Caesar and Atbash ciphers and essentially all cipher systems throughout
history. The 'key' for a code is, of course, the codebook, which must likewise be
distributed and kept secret, and so shares most of the same problems in practice.
Of necessity, the key in every such system had to be exchanged between the
communicating parties in some secure way prior to any use of the system (the term
usually used is 'via a secure channel') such as a trustworthy courier with a briefcase
handcuffed to a wrist, or face-to-face contact, or a loyal carrier pigeon. This
requirement is never trivial and very rapidly becomes unmanageable as the number
of participants increases, or when secure channels aren't available for key exchange,
or when, as is sensible cryptographic practice, keys are frequently changed. In
particular, if messages are meant to be secure from other users, a separate key is
required for each possible pair of users. A system of this kind is known as a secret key,
or symmetric key cryptosystem. D-H key exchange (and succeeding improvements
and variants) made operation of these systems much easier, and more secure, than
had ever been possible before in all of history.
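A toy sketch of the Diffie–Hellman idea in Python, with deliberately tiny, illustrative numbers (real deployments use primes thousands of bits long): both parties arrive at the same shared secret even though only the public values ever cross the insecure channel.

```python
# Toy Diffie-Hellman exchange; the prime and generator are illustrative only.
import random

p = 23                            # small public prime
g = 5                             # public generator

a = random.randint(2, p - 2)      # Alice's private value, never transmitted
b = random.randint(2, p - 2)      # Bob's private value, never transmitted

A = pow(g, a, p)                  # Alice sends A over the insecure channel
B = pow(g, b, p)                  # Bob sends B over the insecure channel

# Each side combines its own secret with the other's public value and
# arrives at the same shared secret, g^(a*b) mod p.
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)
assert alice_shared == bob_shared
print("shared secret:", alice_shared)
```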
In contrast, asymmetric key encryption uses a pair of mathematically related keys,
each of which decrypts the encryption performed using the other. Some, but not all,
of these algorithms have the additional property that one of the paired keys cannot
be deduced from the other by any known method other than trial and error. An
algorithm of this kind is known as a public key or asymmetric key system. Using such
an algorithm, only one key pair is needed per user. By designating one key of the pair
as private (always secret), and the other as public (often widely available), no secure
channel is needed for key exchange. So long as the private key stays secret, the public
key can be widely known for a very long time without compromising security, making
it safe to reuse the same key pair indefinitely.
For two users of an asymmetric key algorithm to communicate securely over an
insecure channel, each user will need to know their own public and private keys as
well as the other user's public key. Take this basic scenario: Alice and Bob each have a
pair of keys they've been using for years with many other users. At the start of their
message, they exchange public keys, unencrypted over an insecure line. Alice then
encrypts a message using her private key, and then re-encrypts that result using
Bob's public key. The double-encrypted message is then sent as digital data over a
wire from Alice to Bob. Bob receives the bit stream and decrypts it using his own
private key, and then decrypts that bit stream using Alice's public key. If the final
result is recognizable as a message, Bob can be confident that the message actually
came from someone who knows Alice's private key (presumably actually her if she's
been careful with her private key), and that anyone eavesdropping on the channel
will need Bob's private key in order to understand the message.
Asymmetric algorithms rely for their effectiveness on a class of problems in
mathematics called one-way functions, which require relatively little computational
power to execute, but vast amounts of power to reverse, if reversal is possible at all.
A classic example of a one-way function is multiplication of very large prime
numbers. It's fairly quick to multiply two large primes, but very difficult to find the
factors of the product of two large primes. Because of the mathematics of one-way
functions, most possible keys are bad choices as cryptographic keys; only a small
fraction of the possible keys of a given length are suitable, and so asymmetric
algorithms require very long keys to reach the same level of security provided by
relatively shorter symmetric keys. The need to both generate the key pairs, and
perform the encryption/decryption operations make asymmetric algorithms
computationally expensive, compared to most symmetric algorithms. Since
symmetric algorithms can often use any sequence of (random, or at least
unpredictable) bits as a key, a disposable session key can be quickly generated for
short-term use. Consequently, it is common practice to use a long asymmetric key to
exchange a disposable, much shorter (but just as strong) symmetric key. The slower
asymmetric algorithm securely sends a symmetric session key, and the faster
symmetric algorithm takes over for the remainder of the message.
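A minimal hybrid-encryption sketch along those lines, assuming the third-party Python "cryptography" package; the message text and key sizes are illustrative only. The slow asymmetric key (RSA) wraps a disposable symmetric session key, and the fast symmetric cipher carries the bulk of the message.

```python
# Hybrid encryption sketch (assumes: pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's long-lived asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: create a one-off session key, encrypt the bulk message with it,
# then wrap the session key with the recipient's public key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"the bulk of the message goes here")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```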
Asymmetric key cryptography, Diffie–Hellman key exchange, and the best known of
the public key / private key algorithms (i.e., what is usually called the RSA algorithm),
all seem to have been independently developed at a UK intelligence agency before
the public announcement by Diffie and Hellman in 1976. GCHQ has released
documents claiming they had developed public key cryptography before the
publication of Diffie and Hellman's paper. Various classified papers were
written at GCHQ during the 1960s and 1970s which eventually led to schemes
essentially identical to RSA encryption and to Diffie–Hellman key exchange in 1973
and 1974. Some of these have now been published, and the inventors (James H. Ellis,
Clifford Cocks, and Malcolm Williamson) have made public (some of) their work.
Hashing
Hashing is a common technique used in cryptography to encode information quickly
using typical algorithms. Generally, an algorithm is applied to a string of text, and the
resulting string becomes the "hash value". This creates a "digital fingerprint" of the
message, as the specific hash value is used to identify a specific message. The output
from the algorithm is also referred to as a "message digest" or a "check sum".
Hashing is good for determining if information has been changed in transmission. If
the hash value is different upon reception than upon sending, there is evidence the
message has been altered. Once the algorithm has been applied to the data to be
hashed, the hash function produces a fixed-length output. Essentially, anything
passed through the hash function should resolve to the same length output as
anything else passed through the same hash function. It is important to note that
hashing is not the same as encrypting. Hashing is a one-way operation that is used to transform data into the compressed message digest. Additionally, the integrity of the
message can be measured with hashing. Conversely, encryption is a two-way
operation that is used to transform plaintext into cipher-text and then vice versa. In
encryption, the confidentiality of a message is guaranteed.[34]
Hash functions can be used to verify digital signatures, so that when signing
documents via the Internet, the signature is applied to one particular individual.
Much like a hand-written signature, these signatures are verified by assigning their
exact hash code to a person. Furthermore, hashing is applied to passwords for
computer systems. Hashing for passwords began with the UNIX operating system. A
user on the system would first create a password. That password would be hashed,
using an algorithm or key, and then stored in a password file. This is still prominent
today, as web applications that require passwords will often hash user's passwords
and store them in a database.[35]
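A short illustration using Python's standard hashlib module: the digest has a fixed length, any change to the input changes it completely, and a salted, iterated hash is what password storage builds on. The message, password, and iteration count below are placeholders, and real systems prefer dedicated password-hashing schemes such as bcrypt or scrypt.

```python
import hashlib, os

msg = b"wire $1,000 to account 42"
print(hashlib.sha256(msg).hexdigest())           # fixed 64-hex-character digest
print(hashlib.sha256(msg + b".").hexdigest())    # any change -> a new digest

# Salted, iterated password hashing (simplified illustration; the password
# and iteration count are placeholders).
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"example-password", salt, 100_000)
print(salt.hex(), stored.hex())
```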
Cryptography politics
The public developments of the 1970s broke the near monopoly on high quality
cryptography held by government organizations (see S Levy's Crypto for a journalistic
account of some of the policy controversy of the time in the US). For the first time
ever, those outside government organizations had access to cryptography not readily
breakable by anyone (including governments). Considerable controversy, and conflict,
both public and private, began more or less immediately, sometimes called
the crypto wars. They have not yet subsided. In many countries, for example, export
of cryptography is subject to restrictions. Until 1996 export from the U.S. of
cryptography using keys longer than 40 bits (too small to be very secure against a
knowledgeable attacker) was sharply limited. As recently as 2004,
former FBI Director Louis Freeh, testifying before the 9/11 Commission, called for
new laws against public use of encryption.
One of the most significant people favoring strong encryption for public use was Phil
Zimmermann. He wrote and then in 1991 released PGP (Pretty Good Privacy), a very
high quality crypto system. He distributed a freeware version of PGP when he felt
threatened by legislation then under consideration by the US Government that would
require backdoors to be included in all cryptographic products developed within the
US. His system was released worldwide shortly after he released it in the US, and that
began a long criminal investigation of him by the US Government Justice Department
for the alleged violation of export restrictions. The Justice Department eventually
dropped its case against Zimmermann, and the freeware distribution of PGP has
continued around the world. PGP even eventually became an open Internet standard
(RFC 2440 or OpenPGP).
Modern cryptanalysis
While modern ciphers like AES and the higher quality asymmetric ciphers are widely
considered unbreakable, poor designs and implementations are still sometimes
adopted and there have been important cryptanalytic breaks of deployed crypto systems in recent years. Notable examples of broken crypto designs include the
first Wi-Fi encryption scheme WEP, the Content Scrambling System used for
encrypting and controlling DVD use, the A5/1 and A5/2 ciphers used in GSM cell
phones, and the CRYPTO1 cipher used in the widely deployed MIFARE Classic smart
cards from NXP Semiconductors, a spun off division of Philips Electronics. All of these
are symmetric ciphers. Thus far, not one of the mathematical ideas underlying public
key cryptography has been proven to be 'unbreakable', and so some future
mathematical analysis advance might render systems relying on them insecure. While
few informed observers foresee such a breakthrough, the key size recommended for
security as best practice keeps increasing as increased computing power required for
breaking codes becomes cheaper and more available. Quantum computers, if ever
constructed with enough capacity, could break existing public key algorithms and
efforts are underway to develop and standardize post-quantum cryptography.
Even without breaking encryption in the traditional sense, side-channel attacks can
be mounted that exploit information gained from the way a computer system is
implemented, such as cache memory usage, timing information, power consumption,
electromagnetic leaks or even sounds emitted. Newer cryptographic algorithms are
being developed that make such attacks more difficult.
1
Cryptographers have found evidence of cryptographic-type operations going back
thousands of years. A perfect example of this is in early Egypt, where sets of
nonstandard hieroglyphics were used in inscriptions to keep certain people from being able to understand what was written on those inscriptions. In another example from later in history, the Spartans were known for something very appropriately called
the Spartan scytale, a method
of transmitting a message by wrapping a leather belt around a tapered dowel.
Written across the dowel, the message would be unreadable once it was
unwrapped from the dowel. The belt could then be carried to the recipient, who
would be able to read the message as long as he had a dowel of the same diameter
and taper.

2
Reference http://www.pawlan.com/monica/articles/crypto/

Please just read this lightly and do not memorize anything on Egyptian topic.

Cryptography: The Ancient Art of Secret Messages


The word cryptography comes from the Greek words kryptos meaning hidden
and graphein meaning writing. Cryptography is the study of hidden writing, or the
science of encrypting and decrypting text.
Nineteenth century scholars decrypted ancient Egyptian hieroglyphics when
Napoleon's soldiers found the Rosetta Stone in 1799 near Rosetta, Egypt. Its
inscription praising King Ptolemy V was in three ancient languages: Demotic,
hieroglyphics, and Greek. The scholars who could read ancient Greek, decrypted the
other languages by translating the Greek and comparing the three inscriptions.
In the twentieth century world of computer networks, messages are digitally
encrypted on the sending side and decrypted on the receiving side using
cryptographic services and algorithms. Algorithms are mathematical techniques or
rules that apply a cryptographic service to a message. Cryptographic services
include hashing a message, encrypting and decrypting a message, and signing or verifying the signature on a message. A message digest object uses a hashing
algorithm to make a hash digest of the original message, key pairs use a key
algorithm compatible with the hashing algorithm to encrypt and decrypt the
message, and a signature object uses the key pairs to sign and verify the signature on
a message.
The Java Cryptography Architecture (JCA) framework provides a full range of
cryptographic services and algorithms to keep messages sent over the network
secure. The framework is extensible and interoperable. Not only can you add
cryptographic service implementations by different vendors to the framework, but,
for example, the signature service implementation by one vendor will work
seamlessly with the signature service implementation by another vendor as long as
both vendors' implementations use the same signature algorithm. Given how
implementations can vary from vendor to vendor, the flexibility built into the JCA
framework lets you choose an implementation that best meets your application
requirements.

You will find additional JCA features in Java Cryptography Extension (JCE) in a
separate download. JCE provides ciphers (symmetric, asymmetric, block, and stream),
secure Java streams, and key generation. JCE features are not covered in this article.

3
Reference: http://www.historyofinformation.com/detail.php?id=4168

The Skytale: An Early Greek Cryptographic Device Used in Warfare, Circa 650 BCE
The skytale (scytale, σκυτάλη "baton"), a cylinder with a strip of parchment
wrapped around it on which was written a message, was used by the ancient
Greeks and Spartans to communicate secretly during military campaigns. It was first
mentioned by the Greek poet Archilochus (fl. 7th century BCE), but the first clear
indication of its use as a cryptographic device appeared in the writings of the poet
and Homeric scholar, Apollonius of Rhodes, who also served as librarian at the Royal
Library of Alexandria.
Plutarch, writing in the first century CE, provided the first detailed description of the
operation of the skytale:
The dispatch-scroll is of the following character. When the ephors send out an
admiral or a general, they make two round pieces of wood exactly alike in length and
thickness, so that each corresponds to the other in its dimensions, and keep one
themselves, while they give the other to their envoy. These pieces of wood they call
scytalae. Whenever, then, they wish to send some secret and important message,
they make a scroll of parchment long and narrow, like a leathern strap, and wind it round their scytale, leaving no vacant space thereon, but covering its surface all round
with the parchment. After doing this, they write what they wish on the parchment,
just as it lies wrapped about the scytale; and when they have written their message,
they take the parchment off and send it, without the piece of wood, to the
commander. He, when he has received it, cannot otherwise get any meaning out of it,-
-since the letters have no connection, but are disarranged,--unless he takes his own
scytale and winds the strip of parchment about it, so that, when its spiral course is
restored perfectly, and that which follows is joined to that which precedes, he reads
around the staff, and so discovers the continuity of the message. And the parchment,
like the staff, is called scytale, as the thing measured bears the name of the measure.
—Plutarch, Lives (Lysander 19), ed. Bernadotte Perrin (quoted in Wikipedia article on
Scytale, accessed 04-05-2014). From Plutarch's description we might draw the
conclusion that the skytale was used to transmit a transposition cipher. However,
because earlier accounts do not confirm Plutarch's account, and because of the
cryptographic weakness of the device, it was suggested that the skytale was used for
conveying messages in plaintext, and that Plutarch's description is mythological.
Another hypothesis is that the skytale was used for "message authentication rather
than encryption. Only if the sender wrote the message around a scytale of the same
diameter as the receiver's would the receiver be able to read it. It would therefore be
difficult for enemy spies to inject false messages into the communication between
two commanders" (Wikipedia article on Scytale, accessed 08-05-2014).

4
The major advancement developed in this era was the performance of the
algorithm on the numerical value of a letter, rather than the letter itself. Up until
this point, most cryptography was based on substitution ciphers, such as the Caesar
cipher. This was a natural transition into the electronic era, where cryptographic
operations are normally performed on binary values of letters, rather than on the
written letter itself. For example, the alphabet could be written as follows: A = 0, B =
1, C = 2 . . . Z = 25. This was especially integral to the one-time
pad and other cipher methods that were developed during this era. This
represented a major evolution of cryptography that really set the stage for further
developments in later time periods.
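A small Python sketch of working on letter values rather than letters, using a one-time-pad-style addition modulo 26; the plaintext and key shown are illustrative only (a real one-time pad key must be random, kept secret, and never reused).

```python
# Letters as numbers (A=0 ... Z=25); encryption is addition of plaintext and
# key values mod 26, decryption is the matching subtraction.
def to_nums(text):
    return [ord(c) - ord('A') for c in text]

def to_text(nums):
    return ''.join(chr(n + ord('A')) for n in nums)

plaintext = "ATTACKATDAWN"
key       = "XMCKLXMCKLXM"   # illustrative key, same length as the plaintext

cipher_nums = [(p + k) % 26 for p, k in zip(to_nums(plaintext), to_nums(key))]
print(to_text(cipher_nums))

recovered = [(c - k) % 26 for c, k in zip(cipher_nums, to_nums(key))]
print(to_text(recovered))    # ATTACKATDAWN
```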

5
Reference http://practicalcryptography.com/ciphers/caesar-cipher/

You can play with the above link and see how the Caesar cipher really works.

Caesar Cipher
Introduction
The Caesar cipher is one of the earliest known and simplest ciphers. It is a type of
substitution cipher in which each letter in the plaintext is 'shifted' a certain number
of places down the alphabet. For example, with a shift of 1, A would be replaced by
B, B would become C, and so on. The method is named after Julius Caesar, who
apparently used it to communicate with his generals.

More complex encryption schemes such as the Vigenère cipher employ the Caesar
cipher as one element of the encryption process. The widely known ROT13
'encryption' is simply a Caesar cipher with an offset of 13. The Caesar cipher offers
essentially no communication security, and it will be shown that it can be easily
broken even by hand.
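A short Python sketch of the Caesar cipher and the exhaustive-search attack mentioned above; the example plaintext and shift are arbitrary.

```python
# Caesar cipher: shift each letter a fixed number of places down the alphabet.
def caesar(text, shift):
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)          # leave spaces and punctuation alone
    return ''.join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)
print(ciphertext)                   # DWWDFN DW GDZQ

print(caesar("HELLO", 13))          # ROT13 is just a Caesar shift of 13

# Breaking it by brute force: only 25 meaningful shifts to inspect.
for shift in range(1, 26):
    print(shift, caesar(ciphertext, -shift))
```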

6
Reference. https://www.csoonline.com/article/3235970/what-is-quantum-cryptography-it-s-no-silver-bullet-but-could-improve-security.html

Quantum cryptography definition


Quantum cryptography, also called quantum encryption, applies principles of quantum mechanics to encrypt messages in a way that they can never be read by anyone other than the intended recipient. It takes advantage of quantum’s multiple states, coupled with its "no change theory," which means it cannot be unknowingly interrupted.
Performing these tasks requires a quantum computer, which has the immense computing power to encrypt and decrypt data. A quantum computer could quickly crack current public-key cryptography.

Why quantum cryptography is important


Companies and governments around the world are in a quantum arms race, the
race to build the first usable quantum computer. The technology promises to make
some kinds of computing problems much, much easier to solve than with today’s classical computers.
One of those problems is breaking certain types of encryption, particularly the
methods used in today’s public key infrastructure (PKI), which underlies practically all
of today’s online communications. “I’m certainly scared of what can be the result of
quantum computing,” says Michael Morris, CEO at Topcoder, a global network of 1.4
million developers. Topcoder is part of Wipro, a global consulting organization. It’s
also working on finding solutions to quantum computing programming challenges.

“Instead of solving one problem at a time, with quantum computing we can solve
thousands of problems at the same processing speed, with the same processing
power,” Morris says. “Things that would take hundreds of days today could take just
hours on a quantum computer.”

The commercial quantum computers available today are still far from being able to
do that. “The theories have advanced farther than the hardware,” says William
Hurley, IEEE senior member, founder and CEO of Austin-based quantum computing
company Strangeworks. “However, we shouldn’t wait for the hardware to motivate
the switch to post-quantum cryptography.”

Who knows what kind of technology isn’t available on the public market, or is
operated in secret by foreign governments? “My fear is that we won’t know that the
quantum computer capable of doing this even exists until it’s done,” says Topcoder’s
Morris. “My fear is that it happens before we know it’s there.”

Asymmetric versus symmetric encryption


Here’s how encryption works on “traditional” computers: Binary digits (0s and 1s) are
systematically sent from one place to another and then deciphered with a symmetric
(private) or asymmetric (public) key. Symmetric key ciphers like Advanced Encryption
Standard (AES) use the same key for encrypting a message or file, while asymmetric
ciphers like RSA use two linked keys — private and public. The public key is shared,
but the private key is kept secret to decrypt the information.

The first target of encryption-breaking quantum computers will be the weakest link in
the encryption ecosystem: asymmetric encryption. This is PKI, the RSA encryption
standard. Emails, websites, financial transactions and pretty much everything is
protected with asymmetric encryption.

The reason it’s popular is that anyone can encrypt a message by using the intended
recipient’s public key, but only the recipient can decrypt it using the matching private key. The two-key approach relies on the principle that some kinds of mathematical
processes are much easier to do than to undo. You can crack an egg, but putting it
back together is a lot harder.

With symmetric encryption, messages are encrypted and decrypted using the same
key. That makes symmetric encryption less suitable for public communication but
significantly harder to break. “Quantum computers are unlikely to crack symmetric
methods (AES, 3DES, etc.) but are likely to crack public methods, such as ECC and
RSA,” says Bill Buchanan, professor in the School of Computing at Edinburgh Napier
University in Scotland. “The internet has often overcome problems in cracking within
an increase in key sizes, so I do expect a ramp up in key sizes to extend the shelf life
for RSA and ECC.”

How to defend against quantum cryptography


Longer keys are the first line of defense against quantum attacks on encryption, and pretty
much everybody is on board with that. In fact, the 1024-bit version of the RSA
encryption standard is no longer regarded as safe by NIST, which recommends 2048
bits as a minimum. Longer keys make encryption slower and more costly, however,
and the key length will have to increase substantially to stay ahead of quantum
computers.

Another option is to use symmetric encryption for the messages themselves, then
use asymmetric encryption just for the keys. This is the idea behind the Transport
Layer Security (TLS) online standard, says Alan Woodward, a professor at the
department of computing at the University of Surrey.

Many researchers are also looking at ways to create new kinds of encryption
algorithms that would still allow public and private keys but be proof against
quantum computers. For example, it’s easy to multiply two prime numbers together
but very difficult to break a large number back up into its prime factors. Quantum
computers can do it, and there are already known quantum techniques that could
solve the factoring problem and many similar approaches, says Woodward.

However, there’s no known quantum method to crack lattice-based encryption, which uses cryptographic algorithms built around lattices. “Lattice cryptography is
the one that looks to be the favorite at the moment, simply because it’s the most
practical to implement,” he says.
The best solution could be a combination of post-quantum algorithms like lattice-
based encryption for the initial communication to securely exchange keys, then using
symmetric encryption for the main messages.
Can we really rely on lattice-based encryption or similar algorithms to be safe? “You
can’t guarantee that your post-quantum algorithm will be secure against a future
quantum computer that uses some unknown quantum algorithm,” says Brian La Cour,
professor and research scientist at the University of Texas.

Quantum key distribution is unhackable, in theory


This is where the laws of quantum physics can come to the rescue. Quantum key
distribution (QKD) is a method of sending encryption keys using some very peculiar
behaviors of subatomic particles that is, in theory at least, completely unhackable.
The land-based version of QKD is a system where photons are sent one at a time
through a fiberoptic line. If anyone is eavesdropping, then, according to the principles
of quantum physics, the polarization of the photons is affected, and the recipient can
tell that the message isn’t secure.
China is furthest ahead with QKD, with dedicated pipes connecting Beijing, Shanghai,
and other cities. There are also networks in Europe. In the United States, the first
commercial QKD network went live this past fall. The Quantum Xchange, connecting
New York City’s financial firms with its data centers in New Jersey, rents space on
existing fiberoptic networks, then uses its own QKD senders and receivers to send the
secure messages on behalf of clients. The company plans to expand to Boston and
Washington, D.C. later in 2019.

However, the technology is extremely slow and requires expensive equipment to send and receive the individual photons. According to John Prisco, CEO and president
of Quantum Xchange, a customer would need to buy a transmitter and a receiver,
each of which costs in the neighborhood of $100,000. “It’s not too terribly different
from other high-speed fiber optics communication equipment,” he says. “And the
price will come down over time as more companies provide the hardware.”

The big breakthrough last year was that QKD systems no longer require special pipes,
says Woodward. “Now it looks like they’ll be able to use existing fiber networks, so
they don’t have to lay new fiber.”

Then there’s the satellite-based approach. This one uses the principle of
entanglement, which Einstein called “spooky action at a distance” and refused to
believe was real. Turns out, it is real, and China has had a quantum communication
satellite up and working for a couple of years now.

Entanglement isn’t about instantaneous communications that break the light-speed limit, says Woodward. The way that it works is that two particles become
entangled so that they have the same state, and then one of these particles is sent to
someone else. When the recipient looks at the particle, it’s guaranteed to be the same state as its twin.
If one of those particles changes, it doesn’t mean that the other particle instantly
changes to match — it’s not a communication system. Plus, the state of the two
entangled particles, while identical, is also random. “So, you can’t send a message,”
says Woodward, “but you can send an encryption key, because what you really want
in a key is a sequence of random digits.”

Now that the sender and the receiver both have the same random key, they can then
use it to send messages using symmetric encryption over traditional channels. “China
has leapfrogged everyone with this satellite,” says Woodward. “Everyone said it
couldn’t be done, that passing through the atmosphere would drop it out of
superposition, but the Chinese have been able to do it.” To receive the signals,
companies would need to put something that looks like a telescope on their rooftops,
he says, and then install some processing equipment.

Neither ground-based nor satellite-based quantum key distribution is practical for general use since both require very specialized and expensive equipment. It could,
however, be useful for securing the most critical and sensitive communications.

The limits of quantum key distribution


If the integrity of the keys can be perfectly guaranteed by QKD, does that mean that
unhackable communications are within our reach?
Not so fast.
“Most hackers, when they break into things, they hardly go head-on,” says
Woodward. “They go around the side, and I suspect that's where you'll find problems
with these implementations.” Today’s attackers, while they could, in theory, listen in
to traffic over fiberoptic lines, typically don’t do that.
There are far easier ways to read the messages, such as getting to the messages
before they are encrypted or after they are decrypted or using man-in-the-middle
attacks.
Plus, QKD requires the use of relays. Unless the sender and the recipient build a pipe
that goes directly between their two offices, and the distance is short enough that
the messages don’t degrade — about 60 miles or less with current technology —
there will be plenty of opportunities for hackers. QKD networks will need repeaters
when messages travel long distances. “You can imagine that those repeaters are
going to become weak points,” says Woodward. “Someone could hack in and get the
key.”
In addition, QKD networks will need to be able to route messages, and that means
routers and hubs, each of which is also a potential point of vulnerability. “Physicists
can say, this is absolutely secure,” says Woodward, “but there’s a danger in that, in
thinking that just because you're using QKD that you're secure. Sure, the laws of physics apply, but there might be ways around them.”
Besides the security problems, it’s not realistic to expect that every internet user will
have access to a QKD endpoint anywhere in the near future. That means, except for
the most sensitive, high-value communications, better encryption algorithms are the
way to go.
When will quantum cryptography become available?
So how much time do we have to get those algorithms in place? When are the
quantum computers getting here? Nobody knows, says Woodward, since very
significant engineering challenges still need to be overcome, and that could take
years — or decades — to solve. The technology is still in its infancy, he says. “The
quantum computer I play with over the internet via IBM now has 20 qubits,” he says.
“Google is talking about 50 qubits.”
Cracking today’s standard RSA encryption would take thousands of qubits. Adding
those qubits isn’t easy because they’re so fragile. Plus, quantum computers today
have extremely high error rates, requiring even more qubits for error correction. “I
teach a class on quantum computing,” says University of Texas’s La Cour. “Last
semester, we had access to one of IBM’s 16-qubit machines. I was intending to do
some projects with it to show some cool things you could do with a quantum
computer.”

That didn’t work out, he says. “The device was so noisy that if you did anything
complicated enough to require 16 qubits, the result was pure garbage.”
Once that scalability problem is solved, we’ll be well on our way to having usable
quantum computers, he says, but it’s impossible to put a timeframe on it. “It’s like
saying back in the '70s, if you can solve the magnetic confinement problem, how far
away is fusion?”
La Cour guesses that we’re probably decades away from the point at which quantum
computers can be used to break today’s RSA encryption. There’s plenty of time to
upgrade to newer encryption algorithms — except for one thing.

“People are worried about things that are encrypted today staying secure several
decades in the future,” La Cour says. Even if companies upgrade their encryption
technology as new algorithms come along and go back and re-encrypt all the old files
that they’ve stored, it’s impossible to know where all your old messages have gone.
“If emails go out and are intercepted, there’s now this warehouse of messages
somewhere where someone is waiting for a quantum computer to come along and
break them all,” he says. “People are really concerned about that.”

7
Covered in detail in the previous note as they relate to CISSP Domain 3.

1
Reference : https://en.wikipedia.org/wiki/History_of_cryptography

2
Plaintext or cleartext: This is the message or data in its natural format and in
readable form. Plaintext is human readable and is extremely vulnerable from a
confidentiality perspective. Plaintext is the message or data that has not been
turned into a secret.

Ciphertext or cryptogram: This is the altered form of a plaintext message so as to be
unreadable for anyone except the intended recipients. In other words, it has been
turned into a secret. An attacker seeing ciphertext would be unable to easily read
the message or to determine its content. Also referred to as the message that has
been turned into a secret.

3
Cryptosystem: This represents the entire cryptographic operation and system. This
typically includes the algorithm, key, and key management functions, together with
the services that can be provided through cryptography. The cryptosystem is the
complete set of applications that allows sender and receiver to communicate using
cryptography systems.

4
Algorithm: An algorithm is a mathematical function that is used in the encryption
and decryption processes. It may be quite simple or extremely complex. Also
defined as the set of instructions by which encryption and decryption is done.

5
Encryption: This is the process and act of converting the message from its plaintext
to ciphertext. Sometimes this is also referred to as enciphering. The two terms are
sometimes used interchangeably in the literature and have similar meanings.

Decryption: This is the reverse process from encryption. It is the process of
converting a ciphertext message back into plaintext through the use of the
cryptographic algorithm and key (cryptovariable) that was used to do the original
encryption. This term is also used interchangeably with the term deciphering.

6
Key or cryptovariable: The input that controls the operation of the cryptographic
algorithm. It determines the behavior of the algorithm and permits the reliable
encryption and decryption of the message. There are both secret and public keys
used in cryptographic algorithms.

7
Non-repudiation: The inability to deny. In cryptography, it is a security service by
which evidence is maintained so that the sender and the recipient of data cannot
deny having participated in the communication. There are two flavors of
non-repudiation: “non-repudiation of origin” means the sender cannot deny having
sent a particular message, and “non-repudiation of delivery” means the receiver
cannot claim to have received a different message than the one that they actually
did receive.

1
Cryptanalysis: The study of techniques for attempting to defeat cryptographic
techniques and, more generally, information
security services.

2
Cryptology: The science that deals with hidden, disguised, or encrypted
communications. It embraces communications security and communications
intelligence.

3
Hash function: A hash function is a one-way mathematical operation that reduces a
message or data file into a smaller
fixed length output, or hash value. By comparing the hash value computed by the
sender with the hash value computed by the receiver over the original file,
unauthorized changes to the file can be detected, assuming they both used the
same hash function. Ideally, a given input should always produce the same single
hash value, and no two different inputs should ever produce the same hash value.


1
Reference https://www.securityinnovationeurope.com/blog/page/whats-the-difference-
between-hashing-and-encrypting

Hashing and encrypting are two words that are often used interchangeably, but
incorrectly so.
Do you understand the difference between the two, and the situations in which you
should use one over the other? In today's post I investigate the key differences
between hashing and encrypting, and when each one is appropriate.

HASHING - WHAT IS IT?


A hash is a string or number generated from a string of text. The resulting string or
number is a fixed length, and will vary widely with small variations in input. The
best hashing algorithms are designed so that it's impossible to turn a hash back into
its original string.

POPULAR ALGORITHMS
MD5 - MD5 is the most widely known hashing function. It produces a 16-byte hash
value, usually expressed as a 32-digit hexadecimal number. Recently a few
vulnerabilities have been discovered in MD5, and rainbow tables have been
published which allow people to reverse MD5 hashes made without good salts.

SHA - There are three different SHA algorithms -- SHA-0, SHA-1, and SHA-2. SHA-0 is
very rarely used, as it contained an error which was fixed with SHA-1. SHA-1 produces
a 20-byte hash value and was for years the most commonly used SHA algorithm, although
practical collision attacks mean it is no longer considered secure.

SHA-2 consists of a set of 6 hashing algorithms, and is considered the strongest. SHA-
256 or above is recommended for situations where security is vital. SHA-256
produces 32-byte hash values.

WHEN SHOULD HASHING BE USED?

Hashing is an ideal way to store passwords, as hashes are inherently one-way in their
nature. By storing passwords in hash format, it's very difficult for someone with
access to the raw data to reverse it (assuming a strong hashing algorithm and
appropriate salt has been used to generate it).

When storing a password, hash it with a salt, and then with any future login
attempts, hash the password the user enters and compare it with the stored hash. If
the two match up, then it's virtually certain that the user entering the password
entered the right one.
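To make that store-then-compare flow concrete, here is a minimal Python sketch using only the standard library (hashlib, hmac and os); the function names hash_password and verify_password are illustrative, not from any particular framework.

    import hashlib, hmac, os

    def hash_password(password, salt=None):
        # generate a fresh random salt when none is supplied
        salt = salt if salt is not None else os.urandom(16)
        # PBKDF2 applies the hash many times to slow down brute forcing
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
        return salt, digest          # store both values with the user record

    def verify_password(password, salt, stored_digest):
        _, candidate = hash_password(password, salt)
        # constant-time comparison avoids leaking timing information
        return hmac.compare_digest(candidate, stored_digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))   # True
    print(verify_password("wrong guess", salt, digest))                    # False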

Hashing is great for usage in any instance where you want to compare a value with a
stored value, but can't store its plain representation for security reasons. Other use
cases could be checking the last few digits of a credit card match up with user input
or comparing the hash of a file you have with the hash of it stored in a database to
make sure that they're both the same.

ENCRYPTION - WHAT IS IT?


Encryption turns data into a series of unreadable characters that aren't of a fixed
length. The key difference between encryption and hashing is that encrypted strings
can be reversed back into their original decrypted form if you have the right key.

There are two primary types of encryption, symmetric key encryption and public key
encryption. In symmetric key encryption, the key to both encrypt and decrypt is
exactly the same. This is what most people think of when they think of encryption.

Public key encryption by comparison has two different keys, one used to encrypt the
string (the public key) and one used to decrypt it (the private key). The public key is
made available for anyone to use to encrypt messages; however, only the intended
recipient has access to the private key, and therefore the ability to decrypt messages.

POPULAR ALGORITHMS
AES - AES is the "gold standard" when it comes to symmetric key encryption, and is
recommended for most use cases, with a key size of 256 bits.

PGP - PGP is the most popular public key encryption program; it combines public key
algorithms such as RSA with symmetric encryption.

WHEN SHOULD ENCRYPTION BE USED?


Encryption should only ever be used over hashing when it is a necessity to decrypt
the resulting message. For example, if you were trying to send secure messages to
someone on the other side of the world, you would need to use encryption rather
than hashing, as the message is no use to the receiver if they cannot decrypt it. If the
raw value doesn't need to be known for the application to work correctly, then
hashing should always be used instead, as it is more secure.

If you have a use case where you have determined that encryption is necessary, you
then need to choose between symmetric and public key encryption. Symmetric
encryption provides improved performance, and is simpler to use, however the key
needs to be known by both the person/software/system encrypting and decrypting
data.

If you were communicating with someone on the other side of the world, you'd need
to find a secure way to send them the key before sharing your secure messages. If
you already had a secure way to send someone an encryption key, then it stands to
reason you would send your secure messages via that channel too, rather than using
symmetric encryption in the first place.

Many people work around this shortcoming of symmetric encryption by initially
sharing an encryption key with someone using public key encryption, then using
symmetric encryption from that point onwards -- eliminating the challenge of
sharing the key securely.
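As an illustration of that hybrid pattern, the sketch below uses the third-party Python "cryptography" package (an assumed tool choice, not something the article prescribes): an RSA public key wraps a randomly generated symmetric key, and the bulk message is encrypted symmetrically with Fernet.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # receiver's long-term key pair
    receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    receiver_public = receiver_private.public_key()

    # sender: create a one-off symmetric key and wrap it with the receiver's public key
    session_key = Fernet.generate_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = receiver_public.encrypt(session_key, oaep)
    ciphertext = Fernet(session_key).encrypt(b"the bulk of the message, any length")

    # receiver: unwrap the symmetric key with the private key, then decrypt the bulk data
    recovered_key = receiver_private.decrypt(wrapped_key, oaep)
    plaintext = Fernet(recovered_key).decrypt(ciphertext)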

2
Key space: This represents the total number of possible values of keys in a
cryptographic algorithm or other security measure, such as a password. For
example, a 20-bit key would have a key space of 1,048,576. A 2-bit key would have a
key space of 4.
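A quick Python calculation shows how the key space grows with key length; the 2-bit and 20-bit figures match the definition above.

    for bits in (2, 20, 56, 128, 256):
        # an n-bit key has 2**n possible values
        print(f"{bits}-bit key space: {2 ** bits:,}")
    # 2-bit key space: 4
    # 20-bit key space: 1,048,576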

3
Initialization vector (IV): A non-secret binary vector used as the initializing input
to an algorithm for the encryption of a plaintext block sequence, to increase security
by introducing additional cryptographic variance and to synchronize cryptographic
equipment. Typically referred to as a “random starting point,” or a random number
that starts the process.

4
Encoding: The action of changing a message into another format through the use of
a code. This is often done by taking
a plaintext message and converting it into a format that can be transmitted via radio
or some other medium; it is usually done for transmission or storage compatibility
rather than for secrecy. An example would be to convert a message to Morse code.

5
Decoding: The reverse process from encoding, converting the encoded message
back into its plaintext format.

6
Substitution: The process of exchanging one letter or byte for another. An example
is the Caesar cipher, where each letter was shifted by 3 characters. An “A” was
represented by a “D,” a “B” was represented by an “E,” a “C” was represented by an
“F,” and so on.
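A minimal Python sketch of that Caesar-style substitution (illustrative only; real ciphers are far more complex):

    def caesar_encrypt(plaintext, shift=3):
        result = []
        for ch in plaintext.upper():
            if ch.isalpha():
                # shift the letter within the alphabet, wrapping around after Z
                result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
            else:
                result.append(ch)      # leave spaces and punctuation unchanged
        return "".join(result)

    print(caesar_encrypt("ABC"))             # DEF
    print(caesar_encrypt("ATTACK AT DAWN"))  # DWWDFN DW GDZQ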

7
Transposition or permutation: The process of reordering the plaintext to hide the
message, but keeping the same letters.

1
Confusion: Provided by mixing or changing the key values used during the repeated
rounds of encryption. When the key is modified for each round, it provides added
complexity that the attacker would encounter.

2
Diffusion: Provided by mixing up the location of the plaintext throughout the
ciphertext. Through transposition, the location of the first character of the plaintext
may change several times during the encryption process, and this makes the
cryptanalysis process much more difficult.

3
Avalanche effect: An important consideration in the design of all cryptographic
algorithms: a minor change in either the key or the plaintext should produce a
significant change in the resulting ciphertext. This is also a feature of a strong
hashing algorithm.
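The avalanche effect is easy to observe with a standard hash function. This small Python sketch (standard library only) changes one character of the input and counts how many of SHA-256's 256 output bits flip:

    import hashlib

    h1 = hashlib.sha256(b"avalanche").hexdigest()
    h2 = hashlib.sha256(b"avalanchf").hexdigest()   # last character changed
    # XOR the two digests and count the differing bits
    flipped = bin(int(h1, 16) ^ int(h2, 16)).count("1")
    print(h1)
    print(h2)
    print(flipped, "of 256 bits differ")            # typically close to 128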

5
Key clustering: When different encryption keys generate the same ciphertext from
the same plaintext message.

6
Synchronous: Each encryption or decryption request is performed immediately.

7
Asynchronous: Encrypt/Decrypt requests are processed in queues. A key benefit of
asynchronous cryptography is utilization of hardware devices and multiprocessor
systems for cryptographic acceleration.

10
Digital signatures: These provide authentication of a sender and integrity of a
sender’s message. A message is input into a hash function. Then, the hash value is
encrypted using the private key of the sender. The result of these two steps yields a
digital signature. The receiver can verify the digital signature by decrypting the hash
value using the signer’s public key, then perform the same hash computation over
the message and then compare the hash values for an exact match. If the hash
values are the same, then the signature is valid.
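The hash-then-sign-with-the-private-key flow described above can be sketched with the Python "cryptography" package (an assumed library choice; its sign and verify calls handle the hashing internally):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    message = b"wire 1000 USD to account 12345"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    # hash the message, then sign the hash with the sender's private key
    signature = private_key.sign(message, pss, hashes.SHA256())

    try:
        # the receiver verifies with the sender's public key
        public_key.verify(signature, message, pss, hashes.SHA256())
        print("signature valid: sender authenticated, message intact")
    except InvalidSignature:
        print("signature invalid: message or signature was altered")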

11
Symmetric: This is a term used in cryptography to indicate that the same key is
required to encrypt and decrypt. The word “symmetric” means “the same,” and we
are obviously referring to the key that is required at both ends to encrypt and
decrypt. Symmetric key cryptography has the fundamental problem of secure key
distribution.


2
Asymmetric: This word means “not the same.” This is a term used in cryptography
in which two different but mathematically related keys are used where one key is
used to encrypt and another is used to decrypt.

3
A good read, so just read it through to reinforce your concepts inshAllah.
http://books.gigatux.nl/mirror/securitytools/ddu/ch09lev1sec1.html

Don’t go down the rabbit hole as we will cover all of these concepts later in the notes
inshAllah.

4
Digital certificate: A digital certificate is an electronic document that contains the
name of an organization or individual, the
business address, the digital signature of the certificate authority issuing the
certificate, the certificate holder’s public key, a serial number, and the expiration
date. The certificate is used to identify the certificate holder and the associated
public key when conducting electronic transactions.

5
Certificate authority (CA): This is an entity trusted by one or more users as an
authority in a network that issues, revokes, and manages digital certificates that
prove the authenticity of public keys belonging to certain individuals or entities.

1
Registration authority (RA): This performs certificate registration services on behalf
of a CA. The RA, a single-purpose server, is responsible for the accuracy of the
information contained in a certificate request. The RA is also expected to perform
user validation before issuing a certificate request.

2
Work factor: This represents the time and effort required to break a protective
measure, or in cryptography, the time and
effort required to break a cryptography algorithm.

3
What is a block cipher?
A block cipher is an encryption algorithm that encrypts a fixed size of n-bits of data -
known as a block - at one time. The usual sizes of each block are 64 bits, 128 bits,
and 256 bits. So for example, a 64-bit block cipher will take in 64 bits of plaintext
and encrypt it into 64 bits of ciphertext. In cases where the plaintext is shorter
than the block size, padding schemes are called into play. The majority of the symmetric
ciphers used today are actually block ciphers. DES, Triple DES, AES, IDEA, and
Blowfish are some of the commonly used encryption algorithms that fall under this
group.

Popular block ciphers


DES - DES, which stands for Data Encryption Standard, used to be the most popular
block cipher in the world and was used in several industries. It's still popular today,
but only because it's usually included in historical discussions of encryption
algorithms. The DES algorithm became a standard in the US in 1977. However, it's
already been proven to be vulnerable to brute force attacks and other cryptanalytic
methods. DES is a 64-bit cipher that works with a 64-bit key. Actually, 8 of the 64
bits in the key are parity bits, so the key size is technically 56 bits long.

3DES - As its name implies, 3DES is a cipher based on DES. It's practically DES that's
run three times. Each DES operation can use a different key, with each key being 56
bits long. Like DES, 3DES has a block size of 64 bits. Although 3DES is many times
stronger than DES, it is also much slower (about 3x slower). Because many
organizations found 3DES to be too slow for many applications, it never became the
ultimate successor of DES. That distinction is reserved for the next cipher in our list -
AES.

AES - A US Federal Government standard since 2002, AES or Advanced Encryption


Standard is arguably the most widely used block cipher in the world. It has a block
size of 128 bits and supports three possible key sizes - 128, 192, and 256 bits. The
longer the key size, the stronger the encryption. However, longer keys also result in
longer processes of encryption. For a discussion on encryption key lengths,
read Choosing Key Lengths for Encrypted File Transfers.

Blowfish - This is another popular block cipher (although not as widely used as AES).
It has a block size of 64 bits and supports a variable-length key that can range from 32
to 448 bits. One thing that makes Blowfish so appealing is that it is
unpatented and royalty-free.

Twofish - Yes, this cipher is related to Blowfish but it's not as popular (yet). It's a 128-
bit block cipher that supports key sizes up to 256 bits long.

What is a stream cipher?


A stream cipher is an encryption algorithm that encrypts 1 bit or byte of plaintext at a
time. It uses an infinite stream of pseudorandom bits as the key. For a stream cipher
implementation to remain secure, its pseudorandom generator should be
unpredictable and the key should never be reused. Stream ciphers are designed to
approximate an idealized cipher, known as the One-Time Pad.

The One-Time Pad, which is supposed to employ a purely random key, can potentially
achieve "perfect secrecy". That is, it's supposed to be fully immune to brute force
attacks. The problem with the one-time pad is that, in order to create such a cipher,
its key should be as long or even longer than the plaintext. In other words, if you have
a 500-megabyte video file that you would like to encrypt, you would need a key that's
at least 4 Gigabits long.

Clearly, while Top Secret information or matters of national security may warrant the
use of a one-time pad, such a cipher would just be too impractical for day-to-day
public use. The key of a stream cipher is no longer as long as the original message.

Hence, it can no longer guarantee "perfect secrecy". However, it can still achieve a
strong level of security.

Popular stream ciphers


RC4 - RC4, which stands for Rivest Cipher 4, is the most widely used of all stream
ciphers, particularly in software. It's also known as ARCFOUR or ARC4. RC4 stream
ciphers have been used in various protocols like WEP and WPA (both security
protocols for wireless networks) as well as in TLS. Unfortunately, recent studies have
revealed vulnerabilities in RC4, prompting Mozilla and Microsoft to recommend that
it be disabled where possible. In fact, RFC 7465 prohibits the use of RC4 in all versions
of TLS.

These recent findings will surely allow other stream ciphers (e.g. SALSA,
SOSEMANUK, PANAMA, and many others, which already exist but never gained the
same popularity as RC4) to emerge and possibly take its place.


3
Statistical Analysis
•Knowing the percentage of occurrence of different letters (e.g., “e” occurs roughly 13% of the time in English text)
•Knowing commonly occurring two- and three-letter combinations (e.g., in, it, the, ion, ing)
•If some knowledge about the content is available, it is even easier to crack (see the sketch below)
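A frequency count of the kind attackers use against simple substitution ciphers takes only a few lines of Python (standard library only; the function name is illustrative):

    from collections import Counter

    def letter_frequencies(text):
        letters = [c for c in text.upper() if c.isalpha()]
        counts = Counter(letters)
        total = len(letters)
        # relative frequency of each letter, most common first
        return [(letter, count / total) for letter, count in counts.most_common()]

    sample = "In a substitution cipher the letter frequencies of the language leak through"
    for letter, freq in letter_frequencies(sample)[:5]:
        print(letter, round(freq, 3))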

4
Reference https://en.wikipedia.org/wiki/Initialization_vector

In cryptography, an initialization vector (IV) or starting variable (SV)[1] is a fixed-size
input to a cryptographic primitive that is typically required to be random or
pseudorandom. Randomization is crucial for encryption schemes to
achieve semantic security, a property whereby repeated usage of the scheme under
the same key does not allow an attacker to infer relationships between segments of
the encrypted message. For block ciphers, the use of an IV is described by
the modes of operation. Randomization is also required for other primitives, such
as universal hash functions and message authentication codes based thereon.

Some cryptographic primitives require the IV only to be non-repeating, and the


required randomness is derived internally. In this case, the IV is commonly called
a nonce (number used once), and the primitives are described as stateful as
opposed to randomized. This is because the IV need not be explicitly forwarded to a
recipient but may be derived from a common state updated at both sender and
receiver side. (In practice, a short nonce is still transmitted along with the message
to consider message loss.) An example of stateful encryption schemes is

the counter mode of operation, which uses a sequence number as a nonce.

The size of the IV is dependent on the cryptographic primitive used; for block ciphers,
it is generally the cipher's block size. Ideally, for encryption schemes, the
unpredictable part of the IV has the same size as the key to
compensate time/memory/data tradeoff attacks.[2][3][4][5] When the IV is chosen at
random, the probability of collisions due to the birthday problem must be taken into
account. Traditional stream ciphers such as RC4 do not support an explicit IV as input,
and a custom solution for incorporating an IV into the cipher's key or internal state is
needed. Some designs realized in practice are known to be insecure;
the WEP protocol is a notable example, and is prone to related-IV attacks.

5
Reference https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle

Read this article to build an understanding and only try to understand the concept and its
definitions. That is the total scope you need to know for the exam.

6
The average amount of effort or work required to break an encryption system is
referred to as the work factor. That is to say, decrypting a message without having
the entire encryption key or to find a secret key given all or part of a ciphertext
would also be referred to as the work factor of the cryptographic system. Typically,
the work factor is measured in some units such as hours of computing time on one
or more given computer systems or a cost in dollars of breaking the encryption. If
the work factor is sufficiently high, the encryption system is considered to be
practically secure.
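A rough back-of-the-envelope Python estimate shows why the work factor of a 56-bit key is trivial for a modern attacker while a 128-bit key is not; the guessing rate of one trillion keys per second is purely an assumption for illustration.

    keys_per_second = 1e12                     # assumed attacker capability
    seconds_per_year = 3600 * 24 * 365

    for bits in (56, 128):
        # on average the key is found after searching half the key space
        expected_tries = 2 ** bits / 2
        years = expected_tries / keys_per_second / seconds_per_year
        print(f"{bits}-bit key: about {years:.2e} years on average")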

7
Reference https://en.wikipedia.org/wiki/Substitution_cipher

In cryptography, a substitution cipher is a method of encrypting by which units
of plaintext are replaced with ciphertext, according to a fixed system; the "units"
may be single letters (the most common), pairs of letters, triplets of letters,
mixtures of the above, and so forth. The receiver deciphers the text by performing
the inverse substitution.

Substitution ciphers can be compared with transposition ciphers. In a transposition
cipher, the units of the plaintext are rearranged in a different and usually quite
complex order, but the units themselves are left unchanged. By contrast, in a
substitution cipher, the units of the plaintext are retained in the same sequence in
the ciphertext, but the units themselves are altered.
There are a number of different types of substitution cipher. If the cipher operates
on single letters, it is termed a simple substitution cipher; a cipher that operates on
larger groups of letters is termed polygraphic. A monoalphabetic cipher uses fixed
substitution over the entire message, whereas a polyalphabetic cipher uses a
number of substitutions at different positions in the message, where a unit from

the plaintext is mapped to one of several possibilities in the ciphertext and vice versa.

Read the remaining article to build an understanding of substitution cipher.

8
Reference https://en.wikipedia.org/wiki/Transposition_cipher

Read the article to build a good understanding of transposition cipher

In cryptography, a transposition cipher is a method of encryption by which the
positions held by units of plaintext (which are commonly characters or groups of
characters) are shifted according to a regular system, so that
the ciphertext constitutes a permutation of the plaintext. That is, the order of the
units is changed (the plaintext is reordered). Mathematically a bijective function is
used on the characters' positions to encrypt and an inverse function to decrypt.

9
Reference https://vivadifferences.com/monoalphabetic-cipher-vs-polyalphabetic-cipher-5-
basic-difference-plus-example/

Read this paper carefully as it will build your understanding of the topic in good detail.

Monoalphabetic Cipher. A monoalphabetic cipher is a substitution cipher in which the
cipher alphabet is fixed through the encryption process. All of the substitution ciphers we
have seen prior to this handout are monoalphabetic; these ciphers are highly susceptible to
frequency analysis.

Polyalphabetic Cipher. A polyalphabetic cipher is a substitution cipher in which the cipher
alphabet changes during the encryption process.


11
Reference https://en.wikipedia.org/wiki/One-time_pad

In cryptography, the one-time pad (OTP) is an encryption technique that cannot
be cracked, but requires the use of a one-time pre-shared key the same size as, or
longer than, the message being sent. In this technique, a plaintext is paired with a
random secret key (also referred to as a one-time pad). Then, each bit or character
of the plaintext is encrypted by combining it with the corresponding bit or character
from the pad using modular addition. If the key is (1) truly random, (2) at least as
long as the plaintext, (3) never reused in whole or in part, and (4) kept
completely secret, then the resulting ciphertext will be impossible to decrypt or
break.[1][2] It has also been proven that any cipher with the property of perfect
secrecy must use keys with effectively the same requirements as OTP keys.[3] Digital
versions of one-time pad ciphers have been used by nations for
critical diplomatic and military communication, but the problems of secure key
distribution have made them impractical for most applications.
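The bit-by-bit combination described above is simply an XOR, as this small Python sketch shows; os.urandom stands in for a truly random pad here, which is itself a simplifying assumption.

    import os

    def otp_xor(data, pad):
        # XOR each byte of the message with the corresponding byte of the pad
        assert len(pad) >= len(data), "pad must be at least as long as the message"
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"ATTACK AT DAWN"
    pad = os.urandom(len(message))        # used once, then destroyed
    ciphertext = otp_xor(message, pad)
    recovered = otp_xor(ciphertext, pad)  # XORing again with the same pad restores the plaintext
    print(recovered == message)           # True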

First described by Frank Miller in 1882,[4][5] the one-time pad was re-invented in
1917. On July 22, 1919, U.S. Patent 1,310,719 was issued to Gilbert Vernam for

the XOR operation used for the encryption of a one-time pad.[6] Derived from
his Vernam cipher, the system was a cipher that combined a message with a key read
from a punched tape. In its original form, Vernam's system was vulnerable because
the key tape was a loop, which was reused whenever the loop made a full cycle. One-
time use came later, when Joseph Mauborgne recognized that if the key tape were
totally random, then cryptanalysis would be impossible.[7]

The "pad" part of the name comes from early implementations where the key
material was distributed as a pad of paper, allowing the current top sheet to be torn
off and destroyed after use. For concealment the pad was sometimes so small that a
powerful magnifying glass was required to use it. The KGB used pads of such size that
they could fit in the palm of a hand,[8] or in a walnut shell.[9] To increase security, one-
time pads were sometimes printed onto sheets of highly flammable nitrocellulose, so
that they could easily be burned after use.
There is some ambiguity to the term "Vernam cipher" because some sources use
"Vernam cipher" and "one-time pad" synonymously, while others refer to any
additive stream cipher as a "Vernam cipher", including those based on
a cryptographically secure pseudorandom number generator (CSPRNG).[10]

12
Reference https://en.wikipedia.org/wiki/Steganography

Please read this article and build an understanding well on this topic.

Steganography (/ˌstɛɡəˈnɒɡrəfi/ STEG-ə-NOG-rə-fee) is the practice of
concealing a file, message, image, or video within another file, message, image, or
video. The word steganography combines the Greek words steganos (στεγᾰνός),
meaning "covered or concealed", and graphe (γραφή) meaning "writing".
The first recorded use of the term was in 1499 by Johannes Trithemius in
his Steganographia, a treatise on cryptography and steganography, disguised as a
book on magic. Generally, the hidden messages appear to be (or to be part of)
something else: images, articles, shopping lists, or some other cover text. For
example, the hidden message may be in invisible ink between the visible lines of a
private letter. Some implementations of steganography that lack a shared secret are
forms of security through obscurity, and key-dependent steganographic schemes
adhere to Kerckhoffs's principle.[1]
The advantage of steganography over cryptography alone is that the intended

secret message does not attract attention to itself as an object of scrutiny. Plainly
visible encrypted messages, no matter how unbreakable they are, arouse interest and
may in themselves be incriminating in countries in which encryption is illegal.[2]
Whereas cryptography is the practice of protecting the contents of a message alone,
steganography is concerned both with concealing the fact that a secret message is
being sent and its contents.
Steganography includes the concealment of information within computer files. In
digital steganography, electronic communications may include steganographic coding
inside of a transport layer, such as a document file, image file, program or protocol.
Media files are ideal for steganographic transmission because of their large size. For
example, a sender might start with an innocuous image file and adjust the color of
every hundredth pixel to correspond to a letter in the alphabet. The change is so
subtle that someone who is not specifically looking for it is unlikely to notice the
change.

1
Reference https://www.ssl2buy.com/wiki/symmetric-vs-asymmetric-encryption-what-are-
differences

Symmetric vs. Asymmetric Encryption – What are differences?


Information security has grown to be a colossal factor, especially with modern
communication networks, leaving loopholes that could be leveraged to devastating effects.
This article presents a discussion on two popular encryption schemes that can be used to
tighten communication security in Symmetric and Asymmetric Encryption. In principle, the
best way to commence this discussion is to start from the basics first. Thus, we look at the
definitions of algorithms and key cryptographic concepts and then dive into the core part of
the discussion where we present a comparison of the two techniques.

Algorithms
An algorithm is basically a procedure or a formula for solving a data snooping problem. An
encryption algorithm is a set of mathematical procedures for performing encryption on
data. Through the use of such an algorithm, information is converted into ciphertext and
requires the use of a key to transform the data back into its original form. This brings us to
the concept of cryptography that has long been used in information security in

communication systems.

Cryptography
Cryptography is a method of using advanced mathematical principles in storing and
transmitting data in a particular form so that only those whom it is intended can read and
process it. Encryption is a key concept in cryptography – It is a process whereby a message
is encoded in a format that cannot be read or understood by an eavesdropper. The technique
is old and was first used by Caesar to encrypt his messages using Caesar cipher. A plain text
from a user can be encrypted to a ciphertext, then sent through a communication channel,
and no eavesdropper can interfere with the plain text. When it reaches the receiver end, the
ciphertext is decrypted to the original plain text.

Cryptography Terms

Encryption: It is the process of locking up information using cryptography. Information that
has been locked this way is encrypted.
Decryption: The process of unlocking the encrypted information using cryptographic
techniques.
Key: A secret like a password used to encrypt and decrypt information. There are a few
different types of keys used in cryptography.
Steganography: It is actually the science of hiding information from people who would snoop
on you. The difference between steganography and encryption is that the would-be snoopers
may not be able to tell there’s any hidden information in the first place.
Symmetrical Encryption
This is the simplest kind of encryption that involves only one secret key to cipher and
decipher information. Symmetrical encryption is an old and best-known technique. It uses a
secret key that can either be a number, a word or a string of random letters. It is blended
with the plain text of a message to change the content in a particular way. The sender and
the recipient should know the secret key that is used to encrypt and decrypt all the
messages. Blowfish, AES, RC4, DES, RC5, and RC6 are examples of symmetric encryption. The
most widely used symmetric algorithms are AES-128, AES-192, and AES-256.
The main disadvantage of the symmetric key encryption is that all parties involved have to
exchange the key used to encrypt the data before they can decrypt it.
Asymmetrical Encryption
Asymmetrical encryption is also known as public key cryptography, which is a relatively new
method, compared to symmetric encryption. Asymmetric encryption uses two keys to
encrypt a plain text. Secret keys are exchanged over the Internet or a large network. It
ensures that malicious persons do not misuse the keys. It is important to note that anyone
with the secret key can decrypt the message, and this is why asymmetrical encryption uses two
related keys to boost security. A public key is made freely available to anyone who might
want to send you a message. The second, private key is kept secret so that only you
know it.
A message that is encrypted using a public key can only be decrypted using a private key,

while, conversely, a message encrypted using a private key can be decrypted using a public key.
Security of the public key is not required because it is publicly available and can be passed
over the internet. Asymmetric keys are far better at ensuring the security of
information transmitted during communication.
Asymmetric encryption is mostly used in day-to-day communication channels, especially
over the Internet. Popular asymmetric key encryption algorithms include ElGamal, RSA, DSA,
elliptic curve techniques, and PKCS.
Asymmetric Encryption in Digital Certificates
To use asymmetric encryption, there must be a way of discovering public keys. One typical
technique is using digital certificates in a client-server model of communication. A certificate
is a package of information that identifies a user and a server. It contains information such as
an organization’s name, the organization that issued the certificate, the users’ email address
and country, and users public key.
When a server and a client require a secure encrypted communication, they send a query
over the network to the other party, which sends back a copy of the certificate. The other
party’s public key can be extracted from the certificate. A certificate can also be used to
uniquely identify the holder.
SSL/TLS uses both asymmetric and symmetric encryption; digitally signed SSL certificates
are issued by trusted certificate authorities (CAs).
Difference Between Symmetric and Asymmetric Encryption
Symmetric encryption uses a single key that needs to be shared among the people who need
to receive the message, while asymmetrical encryption uses a pair of public and private keys
to encrypt and decrypt messages when communicating.
Symmetric encryption is an old technique while asymmetric encryption is relatively new.
Asymmetric encryption was introduced to complement the inherent problem of the need to
share the key in symmetrical encryption model, eliminating the need to share the key by
using a pair of public-private keys.
Asymmetric encryption takes considerably more time than symmetric encryption.

Conclusion
When it comes to encryption, the latest schemes may not necessarily be the best fit. You should
always use the encryption algorithm that is right for the task at hand. In fact, as cryptography
takes a new shift, new algorithms are being developed in a bid to catch up with the
eavesdroppers and secure information to enhance confidentiality. Hackers are bound to
make it tough for experts in the coming years, thus expect more from the cryptographic
community!

5
Reference https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation Read about
block cipher to clear up your concept further inshAllah.

In cryptography, a block cipher mode of operation is an algorithm that uses a block
cipher to provide information security such as confidentiality or authenticity.[1] A
block cipher by itself is only suitable for the secure cryptographic transformation
(encryption or decryption) of one fixed-length group of bits called a block.[2] A mode
of operation describes how to repeatedly apply a cipher's single-block operation to
securely transform amounts of data larger than a block.[3][4][5]
Most modes require a unique binary sequence, often called an initialization
vector (IV), for each encryption operation. The IV has to be non-repeating and, for
some modes, random as well. The initialization vector is used to ensure
distinct ciphertexts are produced even when the same plaintext is encrypted
multiple times independently with the same key.[6] Block ciphers may be capable of
operating on more than one block size, but during transformation the block size is
always fixed. Block cipher modes operate on whole blocks and require that the last
part of the data be padded to a full block if it is smaller than the current block
size.[2] There are, however, modes that do not require padding because they

effectively use a block cipher as a stream cipher.
Historically, encryption modes have been studied extensively in regard to their error
propagation properties under various scenarios of data modification. Later
development regarded integrity protection as an entirely separate cryptographic goal.
Some modern modes of operation combine confidentiality and authenticity in an
efficient way, and are known as authenticated encryption modes.[7]

Electronic Codebook (ECB)


The simplest of the encryption modes is the Electronic Codebook (ECB) mode
(named after conventional physical codebooks[10]). The message is divided into
blocks, and each block is encrypted separately.

The disadvantage of this method is a lack of diffusion. Because ECB encrypts
identical plaintext blocks into identical ciphertext blocks, it does not hide data
patterns well. In some senses, it doesn't provide serious message confidentiality, and
it is not recommended for use in cryptographic protocols at all.

A striking example of the degree to which ECB can leave plaintext data patterns in the
ciphertext can be seen when ECB mode is used to encrypt a bitmap image which uses
large areas of uniform color. While the color of each individual pixel is encrypted, the
overall image may still be discerned, as the pattern of identically colored pixels in the
original remains in the encrypted version.
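The pattern leakage is easy to demonstrate: in the sketch below (using the Python "cryptography" package, an assumed tool choice), two identical 16-byte plaintext blocks encrypt to two identical ciphertext blocks under AES-ECB.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)                        # AES-256 key
    block = b"ATTACK AT DAWN!!"                 # exactly one 16-byte block
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(block * 2) + encryptor.finalize()
    # identical plaintext blocks produce identical ciphertext blocks, leaking structure
    print(ciphertext[:16] == ciphertext[16:])   # True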

Cipher Block Chaining (CBC)

In CBC mode, each block of plaintext is XORed with the previous ciphertext block
before being encrypted. This way, each ciphertext block depends on all plaintext
blocks processed up to that point. To make each message unique, an initialization
vector must be used in the first block.

CBC has been the most commonly used mode of operation. Its main drawbacks are
that encryption is sequential (i.e., it cannot be parallelized), and that the message
must be padded to a multiple of the cipher block size. One way to handle this last
issue is through the method known as ciphertext stealing. Note that a one-bit change
in a plaintext or initialization vector (IV) affects all following ciphertext blocks.

Decrypting with the incorrect IV causes the first block of plaintext to be corrupt but
subsequent plaintext blocks will be correct. This is because each block is XORed with
the ciphertext of the previous block, not the plaintext, so one does not need to
decrypt the previous block before using it as the IV for the decryption of the current

one. This means that a plaintext block can be recovered from two adjacent blocks of
ciphertext. As a consequence, decryption can be parallelized. Note that a one-bit
change to the ciphertext causes complete corruption of the corresponding block of
plaintext, and inverts the corresponding bit in the following block of plaintext, but
the rest of the blocks remain intact. This peculiarity is exploited in different padding
oracle attacks, such as POODLE.

Explicit Initialization Vectors[12] take advantage of this property by prepending a
single random block to the plaintext. Encryption is done as normal, except the IV
does not need to be communicated to the decryption routine. Whatever IV
decryption uses, only the random block is "corrupted". It can be safely discarded and
the rest of the decryption is the original plaintext.
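For contrast with ECB, here is a minimal CBC sketch with the same assumed "cryptography" package: a random IV chains the blocks, so repeated plaintext blocks no longer produce repeated ciphertext blocks (the plaintext here is already a multiple of the block size, so no padding code is shown).

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    iv = os.urandom(16)                          # random IV, sent alongside the ciphertext
    block = b"ATTACK AT DAWN!!"
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(block * 2) + encryptor.finalize()
    print(ciphertext[:16] == ciphertext[16:])    # False: chaining hides the repetition

    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    print(decryptor.update(ciphertext) + decryptor.finalize() == block * 2)   # True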


7
Read this article further to clear up your understanding on ECB mode inshAllah

https://medium.com/asecuritysite-when-bob-met-alice/electronic-code-book-ecb-and-
cipher-block-chaining-cbc-e3309d704917

8

Another way of looking at CBC: the CBC encryption mode was invented at IBM in 1976.
In this mode, each plaintext block is XORed with the ciphertext block that was
previously produced. The result is then encrypted using the cipher algorithm in
the usual way. As a result, every subsequent ciphertext block depends on
the previous one. The first plaintext block is XORed with a random initialization
vector (commonly referred to as the IV). The vector has the same size as a plaintext
block.
Encryption in CBC mode can only be performed by using one thread. Despite this
disadvantage, this is a very popular way of using block ciphers. CBC mode is used
in many applications.

When decrypting a ciphertext block, one should XOR the output data
received from the decryption algorithm with the previous ciphertext block. Because
the receiver knows all the ciphertext blocks just after obtaining the encrypted
message, he can decrypt the message using many threads simultaneously.

If one bit of a plaintext message is damaged (for example because of some earlier
transmission error), all subsequent ciphertext blocks will be damaged and it will never
be possible to decrypt the ciphertext received from this plaintext. As opposed
to that, if one ciphertext bit is damaged, only two received plaintext blocks will
be damaged. It might be possible to recover the data.

A message that is to be encrypted using the CBC mode should be padded to a size that
is an integer multiple of a single block length (similarly to the case of using the
ECB mode).

10
Read the above article one more time to ensure that your concepts are all clear as they
relate to block ciphers and their modes.

11
Reference https://en.wikipedia.org/wiki/Stream_cipher

A stream cipher is a symmetric key cipher where plaintext digits are combined with
a pseudorandom cipher digit stream (keystream). In a stream cipher,
each plaintext digit is encrypted one at a time with the corresponding digit of the
keystream, to give a digit of the ciphertext stream. Since encryption of each digit is
dependent on the current state of the cipher, it is also known as state cipher. In
practice, a digit is typically a bit and the combining operation is an exclusive-
or (XOR).
The pseudorandom keystream is typically generated serially from a random seed
value using digital shift registers. The seed value serves as the cryptographic key for
decrypting the ciphertext stream. Stream ciphers represent a different approach to
symmetric encryption from block ciphers. Block ciphers operate on large blocks of
digits with a fixed, unvarying transformation. This distinction is not always clear-cut:
in some modes of operation, a block cipher primitive is used in such a way that it
acts effectively as a stream cipher. Stream ciphers typically execute at a higher
speed than block ciphers and have lower hardware complexity. However, stream
ciphers can be susceptible to serious security problems if used incorrectly

(see stream cipher attacks); in particular, the same starting state (seed) must never
be used twice.

12
The CFB mode is similar to the CBC mode described above. The main difference is
that one should encrypt ciphertext data from the previous round (so not the
plaintext block) and then XOR the output with the plaintext bits. It does not affect
the cipher security but it results in the fact that the same encryption algorithm (as
was used for encrypting plaintext data) should be used during the decryption
process.

If one bit of a plaintext message is damaged, the corresponding ciphertext block
and all subsequent ciphertext blocks will be damaged. Encryption in CFB mode can
be performed only by using one thread.
On the other hand, as in CBC mode, one can decrypt ciphertext blocks using many
threads simultaneously. Similarly, if one ciphertext bit is damaged, only two
received plaintext blocks will be damaged.
As opposed to the previous block cipher modes, the encrypted message doesn't
need to be extended till the size that is equal to an integer multiple of a single block
length.

13
Reference https://searchsecurity.techtarget.com/definition/ciphertext-feedback

Ciphertext feedback (CFB) is a mode of operation for a block cipher. In contrast to


the cipher block chaining (CBC) mode, which encrypts a set number of bits
of plaintext at a time, it is at times desirable to encrypt and transfer some plaintext
values instantly one at a time, for which ciphertext feedback is a method. Like
cipher block chaining, ciphertext feedback also makes use of an initialization vector
(IV). CFB uses a block cipher as a component of a random number generator. In CFB
mode, the previous ciphertext block is encrypted and the output is XORed (see XOR)
with the current plaintext block to create the current ciphertext block. The XOR
operation conceals plaintext patterns. Plaintext cannot be directly worked on unless
there is retrieval of blocks from either the beginning or end of the ciphertext.
The entropy that results can be implemented as a stream cipher. In fact, CFB is
primarily a mode to derive some characteristics of a stream cipher from a block
cipher. In common with CBC mode, changing the IV to the same plaintext block
results in different output. Though the IV need not be secret, some applications
would see this desirable. Chaining dependencies are similar to CBC, in that
reordering ciphertext block sequences alters decryption output, as decryption of

one block depends on the decryption of the preceding blocks.

14
Algorithms that work in the OFB mode create keystream bits that are used
for encryption subsequent data blocks. In this regard, the way of working of
the block cipher becomes similar to the way of working of a typical stream cipher.

Because of the continuous creation of keystream bits, both encryption
and decryption can be performed using only one thread at a time. Similarly, as in
the CFB mode, both data encryption and decryption uses the same cipher
encryption algorithm.

If one bit of a plaintext or ciphertext message is damaged (for example because of
a transmission error), only one corresponding ciphertext or respectively plaintext
bit is damaged as well. It is possible to use various correction algorithms to restore
the previous value of damaged parts of the received message.
The biggest drawback of OFB is that the repetition of encrypting the initialization
vector may produce the same state that has occurred before. It is an unlikely
situation but in such a case the plaintext will start to be encrypted by the same data
as previously.

Reference https://searchsecurity.techtarget.com/definition/output-feedback

In cryptography, output feedback (OFB) is a mode of operation for a block cipher. It
has some similarities to the ciphertext feedback mode in that it permits encryption
of differing block sizes, but has the key difference that the output of the encryption
block function is the feedback (instead of the ciphertext). The XOR (exclusive OR)
value of each plaintext block is created independently of both the plaintext and
ciphertext. It is this mode that is used when there can be no tolerance for error
propagation, as there are no chaining dependencies. Like the ciphertext feedback
mode, it uses an initialization vector (IV). Changing the IV in the same plaintext
block results in different ciphertext.
In terms of error correction, output feedback can tolerate ciphertext bit errors, but
is incapable of self-synchronization after losing ciphertext bits, as it disturbs the
synchronization of the aligning keystream. A problem with output feedback is that
the plaintext can be easily altered, but using a digital signature scheme can
overcome this problem.

Using the CTR mode makes a block cipher work in a way similar to a stream cipher.
As in the OFB mode, keystream bits are created regardless of content of encrypting
data blocks. In this mode, subsequent values of an increasing counter are added to
a nonce value (the nonce means a number that is unique: number used once) and
the results are encrypted as usual. The nonce plays the same role as initialization
vectors in the previous modes.

It is one of the most popular block ciphers modes of operation. Both encryption
and decryption can be performed using many threads at the same time.
If one bit of a plaintext or ciphertext message is damaged, only one corresponding
output bit is damaged as well. Thus, it is possible to use various correction
algorithms to restore the previous value of damaged parts of received messages.
The CTR mode is also known as the SIC mode (Segmented Integer Counter).
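A minimal Python sketch of CTR mode, reusing the same toy SHA-256-based block function for illustration only, is shown below. Because each keystream block depends only on the nonce and counter, blocks can be processed independently, and the same call both encrypts and decrypts.

import hashlib

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher such as AES (illustration only).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Each block's keystream depends only on nonce + counter, so blocks are independent.
    out = bytearray()
    for counter, i in enumerate(range(0, len(data), BLOCK)):
        counter_block = nonce + counter.to_bytes(8, "big")
        ks = toy_block_encrypt(key, counter_block)
        out += bytes(b ^ k for b, k in zip(data[i:i + BLOCK], ks))
    return bytes(out)

key, nonce = b"k" * 16, b"number-U"     # the nonce must be unique per message
ct = ctr_xcrypt(key, nonce, b"counter mode example")
print(ctr_xcrypt(key, nonce, ct))       # the same call decrypts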

Reference https://searchsecurity.techtarget.com/definition/Data-Encryption-Standard

The Data Encryption Standard (DES) is an outdated symmetric-key method of data
encryption.
DES works by using the same key to encrypt and decrypt a message, so both the
sender and the receiver must know and use the same private key. Once the go-to,
symmetric-key algorithm for the encryption of electronic data, DES has been
superseded by the more secure Advanced Encryption Standard (AES) algorithm.
Originally designed by researchers at IBM in the early 1970s, DES was adopted by
the U.S. government as an official Federal Information Processing Standard (FIPS) in
1977 for the encryption of commercial and sensitive yet unclassified government
computer data. It was the first encryption algorithm approved by the U.S.
government for public disclosure. This ensured that DES was quickly adopted by
industries such as financial services, where the need for strong encryption is high.
The simplicity of DES also saw it used in a wide variety of embedded systems, smart
cards, SIM cards and network devices requiring encryption like modems, set-top
boxes and routers.
DES key length and brute-force attacks

The Data Encryption Standard is a block cipher, meaning a cryptographic key and
algorithm are applied to a block of data simultaneously rather than one bit at a time.
To encrypt a plaintext message, DES groups it into 64-bit blocks. Each block is
enciphered using the secret key into a 64-bit ciphertext by means of permutation and
substitution. The process involves 16 rounds and can run in four different modes,
encrypting blocks individually or making each cipher block dependent on all the
previous blocks. Decryption is simply the inverse of encryption, following the same
steps but reversing the order in which the keys are applied. For any cipher, the most
basic method of attack is brute force, which involves trying each key until you find
the right one. The length of the key determines the number of possible keys -- and
hence the feasibility -- of this type of attack. DES uses a 64-bit key, but eight of those
bits are used for parity checks, effectively limiting the key to 56-bits. Hence, it would
take a maximum of 2^56, or 72,057,594,037,927,936, attempts to find the correct
key.
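A quick arithmetic check of the key-space figure quoted above:

# A 56-bit effective DES key gives 2^56 possible keys.
print(2 ** 56)                                  # 72057594037927936
print(2 ** 56 == 72_057_594_037_927_936)        # True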
Even though few messages encrypted using DES encryption are likely to be subjected
to this kind of code-breaking effort, many security experts felt the 56-bit key length
was inadequate even before DES was adopted as a standard. (There have always
been suspicions that interference from the NSA weakened IBM's original algorithm).
Even so, DES remained a trusted and widely used encryption algorithm through the
mid-1990s. However, in 1998, a computer built by the Electronic Frontier
Foundation (EFF) decrypted a DES-encoded message in 56 hours. By harnessing the
power of thousands of networked computers, the following year EFF cut the
decryption time to 22 hours.
Apart from providing backwards compatibility in some instances, reliance today upon
DES for data confidentiality is a serious security design error in any computer system
and should be avoided. There are much more secure algorithms available, such as
AES. Much like a cheap suitcase lock, DES will keep the contents safe from honest
people, but it won't stop a determined thief.
Successors to DES
Encryption strength is directly tied to key size, and 56-bit key lengths have become
too small relative to the processing power of modern computers. So in 1997,
the National Institute of Standards and Technology (NIST) announced an initiative to
choose a successor to DES; in 2001, it selected the Advanced Encryption Standard as
a replacement. The Data Encryption Standard (FIPS 46-3) was officially withdrawn in
May 2005, though Triple DES (3DES) is approved through 2030 for sensitive
government information. 3DES performs three iterations of the DES algorithm; if
keying option number one is chosen, a different key is used each time to increase the
key length to 168 bits. However, due to the likelihood of a meet-in-the-middle attack,
the effective security it provides is only 112 bits. 3DES encryption is obviously slower
than plain DES.
Legacy of DES

Despite having reached the end of its useful life, the arrival of the Data Encryption
Standard served to promote the study of cryptography and the development of new
encryption algorithms. Until DES, cryptography was a dark art confined to the realms
of military and government intelligence organizations. The open nature of DES meant
academics, mathematicians and anyone interested in security could study how the
algorithm worked and try to crack it. As with any popular and challenging puzzle, a
craze -- or in this case, a whole industry -- was born.

As we’ve seen, the main problem with DES is that the key is too short to provide
adequate protection against brute force attacks. Increasing the key length is an
effective defense against a brute force attack. Ways to improve the DES algorithm’s
resistance to a brute force attack have been developed by the industry. These
efforts are referred to as Double DES and Triple DES.

Double-DES refers to the use of two DES encryptions with two separate keys,
effectively doubling the size of the DES key from 56 bits to 112 bits. This dramatic
increase in key size much more than doubles the strength of the cipher. Each
increase of a single bit effectively doubles the number of keys in the keyspace. This
means that a 57-bit key space is twice as large as a 56-bit key space. A 58-bit key is
four times as big, etc. This would seem like a vast improvement in strength against
brute force; however, there is an attack on Double-DES that reduces its effective
number of keys to about the same number in DES. This attack is known as the
meet-in-the-middle attack, and it reduces the strength of Double-DES to almost the
same as DES.

A very effective attack against double DES is based on doing a brute force attack
against known plaintext. This attack is known as the meet-in-the-middle attack.
The attacker would encrypt the plaintext using all possible keys and create a table
containing all possible results. This intermediate cipher is referred to as “m” for this
discussion. This would mean encrypting using all 2 to the power of 56 possible keys.
The table would then be sorted according to the values of “m.” The attacker would
then decrypt the ciphertext using all possible keys until he found a match with the
value of “m.” This would result in a true strength of double DES of approximately
2^57 (twice the strength of DES, but not strong enough to be considered effective)
instead of the 2^112 that was originally hoped for.
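The following Python sketch illustrates the meet-in-the-middle idea on a toy invertible cipher with 8-bit keys, so the whole key space fits in memory; building the table of intermediate values "m" and matching against it costs roughly 2 x 2^8 cipher operations instead of the 2^16 a naive brute force of both keys would need. The toy cipher and tiny key sizes are assumptions chosen purely so the example runs instantly; a second known pair is used to weed out false matches, as a real attack would.

def toy_encrypt(block, key):
    mask = (key * 2654435761) % 65536        # key-dependent mask (toy construction only)
    return ((block ^ mask) + key) % 65536

def toy_decrypt(block, key):
    mask = (key * 2654435761) % 65536
    return ((block - key) % 65536) ^ mask

# Two known plaintext/ciphertext pairs produced with two secret 8-bit keys.
k1, k2 = 0x3A, 0xC5
pairs = [(p, toy_encrypt(toy_encrypt(p, k1), k2)) for p in (0x1234, 0xBEEF)]

# Step 1: encrypt the first plaintext under every possible first key, indexed by "m".
p0, c0 = pairs[0]
middle = {}
for k in range(256):
    middle.setdefault(toy_encrypt(p0, k), []).append(k)

# Step 2: decrypt the first ciphertext under every possible second key, look for a
# match in the table, and confirm candidates against the second known pair.
for kb in range(256):
    for ka in middle.get(toy_decrypt(c0, kb), []):
        p1, c1 = pairs[1]
        if toy_encrypt(toy_encrypt(p1, ka), kb) == c1:
            print("recovered keys:", hex(ka), hex(kb))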

The defeat of double DES resulted in the adoption of another
improvement in how the DES algorithm could be modified to stand up better
against brute force attacks. This improvement is known as Triple DES. Triple DES is
much more secure, so much so that although attacks on it have been proposed, the
data requirements of these have made them impractical. With Triple DES, there are
three DES encryptions with either three or two different and separate keys that are
used.

Managing three keys is more difficult; thus, many implementations use the
two-key method, which reduces the key management requirement. The various
ways of using Triple DES include the following (see the sketch after this list):

• DES-EEE3: three DES encryptions with three different keys
• DES-EDE3: three DES operations in the sequence encrypt-decrypt-encrypt with
three different keys
• DES-EEE2 and DES-EDE2: same as the previous formats except that the first and
third operations use the same key
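The sketch below shows how these keying options compose, using a trivial reversible stand-in for single DES (an assumption purely so the example runs; real code would call an actual DES implementation). Note that EDE with all keys equal collapses to single DES, which is one reason the EDE sequence was chosen: it gives backwards compatibility.

def des_encrypt(block: int, key: int) -> int:
    # Toy stand-in for single-DES encryption of a 64-bit block (NOT real DES).
    return (block + key) % (1 << 64)

def des_decrypt(block: int, key: int) -> int:
    return (block - key) % (1 << 64)

def tdes_ede3_encrypt(block, k1, k2, k3):
    # DES-EDE3: encrypt with K1, decrypt with K2, encrypt with K3 (three keys).
    return des_encrypt(des_decrypt(des_encrypt(block, k1), k2), k3)

def tdes_ede2_encrypt(block, k1, k2):
    # DES-EDE2: same sequence, but the first and third operations reuse K1.
    return tdes_ede3_encrypt(block, k1, k2, k1)

# With all keys equal, EDE reduces to a single DES encryption (backwards compatibility).
print(tdes_ede3_encrypt(1234, 7, 7, 7) == des_encrypt(1234, 7))   # True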

Reference https://searchsecurity.techtarget.com/definition/Advanced-Encryption-Standard

Please read this thoroughly and make sure you are clear on the concept of AES.

The Advanced Encryption Standard, or AES, is a symmetric block cipher chosen by
the U.S. government to protect classified information and is implemented in
software and hardware throughout the world to encrypt sensitive data.

The National Institute of Standards and Technology (NIST) started development of
AES in 1997 when it announced the need for a successor algorithm for the Data
Encryption Standard (DES), which was starting to become vulnerable to brute-force
attacks.
This new, advanced encryption algorithm would be unclassified and had to be
"capable of protecting sensitive government information well into the next
century," according to the NIST announcement of the process for development of
an advanced encryption standard algorithm. It was intended to be easy to
implement in hardware and software, as well as in restricted environments (for
example, in a smart card) and offer good defenses against various attack
techniques.
AES features
The selection process for this new symmetric key algorithm was fully open to public
scrutiny and comment; this ensured a thorough, transparent analysis of the designs
submitted.
NIST specified the new advanced encryption standard algorithm must be a block
cipher capable of handling 128 bit blocks, using keys sized at 128, 192, and 256 bits;
other criteria for being chosen as the next advanced encryption standard algorithm
included:
Security: Competing algorithms were to be judged on their ability to resist attack, as
compared to other submitted ciphers, though security strength was to be considered
the most important factor in the competition.
Cost: Intended to be released under a global, nonexclusive and royalty-free basis, the
candidate algorithms were to be evaluated on computational and memory efficiency.
Implementation: Algorithm and implementation characteristics to be evaluated
included the flexibility of the algorithm; suitability of the algorithm to be
implemented in hardware or software; and overall, relative simplicity of
implementation.
Choosing AES algorithms
Fifteen competing symmetric key algorithm designs were subjected to preliminary
analysis by the world cryptographic community, including the National Security
Agency (NSA). In August 1999, NIST selected five algorithms for more extensive
analysis. These were:
MARS, submitted by a large team from IBM Research
RC6, submitted by RSA Security
Rijndael, submitted by two Belgian cryptographers, Joan Daemen and Vincent Rijmen
Serpent, submitted by Ross Anderson, Eli Biham and Lars Knudsen
Twofish, submitted by a large team of researchers from Counterpane Internet
Security, including noted cryptographer Bruce Schneier
Implementations of all of the above were tested extensively
in ANSI C and Java languages for speed and reliability in encryption and
decryption; key and algorithm setup time; and resistance to various attacks, both in
hardware- and software-centric systems. Members of the global cryptographic
community conducted detailed analyses (including some teams that tried to break
their own submissions).
After much feedback, debate and analysis, the Rijndael cipher -- a mash of the
Belgian creators' last names Daemen and Rijmen -- was selected as the proposed
algorithm for AES in October 2000 and published by NIST as U.S. FIPS PUB 197. The
Advanced Encryption Standard became effective as a federal government standard in
2002. It is also included in the International Organization for Standardization
(ISO)/International Electrotechnical Commission (IEC) 18033-3 standard, which
specifies block ciphers for the purpose of data confidentiality.
In June 2003, the U.S. government announced that AES could be used to protect
classified information, and it soon became the default encryption algorithm for
protecting classified information as well as the first publicly accessible and open
cipher approved by the NSA for top-secret information. The NSA chose AES as one of
the cryptographic algorithms to be used by its Information Assurance Directorate to
protect national security systems.
Its successful use by the U.S. government led to widespread use in the private sector,
leading AES to become the most popular algorithm used in symmetric
key cryptography. The transparent selection process helped create a high level of
confidence in AES among security and cryptography experts. AES is more secure than
its predecessors -- DES and 3DES -- as the algorithm is stronger and uses longer key
lengths. It also enables faster encryption than DES and 3DES, making it ideal for
software applications, firmware and hardware that require either low latency or high
throughput, such as firewalls and routers. It is used in many protocols such as Secure
Sockets Layer (SSL)/Transport Layer Security (TLS) and can be found in most modern
applications and devices that need encryption functionality.
How AES encryption works
AES comprises three block ciphers: AES-128, AES-192 and AES-256. Each cipher
encrypts and decrypts data in blocks of 128 bits using cryptographic keys of 128-,
192- and 256-bits, respectively. The Rijndael cipher was designed to accept additional
block sizes and key lengths, but for AES, those functions were not adopted.
Symmetric (also known as secret-key) ciphers use the same key for encrypting and
decrypting, so the sender and the receiver must both know -- and use -- the
same secret key. All key lengths are deemed sufficient to protect classified
information up to the "Secret" level with "Top Secret" information requiring either
192- or 256-bit key lengths. There are 10 rounds for 128-bit keys, 12 rounds for 192-
bit keys and 14 rounds for 256-bit keys -- a round consists of several processing steps
that include substitution, transposition and mixing of the input plaintext and
transform it into the final output of ciphertext.
The AES encryption algorithm defines a number of transformations that are to be
performed on data stored in an array. The first step of the cipher is to put the data
into an array; after which the cipher transformations are repeated over a number of
encryption rounds. The number of rounds is determined by the key length, with 10
rounds for 128-bit keys, 12 rounds for 192-bit keys and 14 rounds for 256-bit keys.
The first transformation in the AES encryption cipher is substitution of data using a
substitution table; the second transformation shifts data rows, the third mixes
columns. The last transformation is a simple exclusive or (XOR) operation performed
on each column using a different part of the encryption key -- longer keys need more
rounds to complete.
AES encryption transforms array data by shuffling rows and columns and applying
substitutions based on the encryption key.
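In practice AES is used through a vetted library rather than implemented by hand. A minimal sketch, assuming a recent version of the third-party pyca/cryptography package is installed, encrypting and decrypting with AES-256 in CTR mode:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                      # 256-bit AES key
nonce = os.urandom(16)                    # must be unique per message
cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))

encryptor = cipher.encryptor()
ciphertext = encryptor.update(b"sensitive data") + encryptor.finalize()

decryptor = cipher.decryptor()
print(decryptor.update(ciphertext) + decryptor.finalize())   # b'sensitive data'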
Attacks on AES encryption
Research into attacks on AES encryption has continued since the standard was
finalized in 2000. Various researchers have published attacks against reduced-round
versions of the Advanced Encryption Standard.
In 2005, cryptographer Daniel J. Bernstein published a paper, "Cache-timing attacks
on AES," in which he demonstrated a timing attack on AES capable of achieving a
"complete AES key recovery from known-plaintext timings of a network server on
another computer."
A research paper published in 2011, titled "Biclique Cryptanalysis of the Full AES," by
researchers Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger,
demonstrated that by using a technique called a biclique attack, they could recover
AES keys faster than a brute-force attack by a factor of between three and five,
depending on the cipher version. However, even this attack does not threaten the
practical use of AES due to its high-computational complexity.
AES has proven to be a reliable cipher, and the only practical successful attacks
against AES have leveraged side-channel attacks on weaknesses found in the
implementation or key management of specific AES-based encryption products.
Side-channel attacks exploit flaws in the way a cipher has been implemented rather
than brute force or theoretical weaknesses in a cipher. The Browser Exploit Against
SSL/TLS (BEAST) attack against the TLS v1.0 protocol is a good example; TLS
can use AES to encrypt data, but due to the information that TLS exposes, attackers
managed to predict the initialization vector block used at the start of the encryption
process.

Reference https://searchsecurity.techtarget.com/definition/Rijndael

Rijndael (pronounced rain-dahl) is the algorithm that has been selected by the U.S.
National Institute of Standards and Technology (NIST) as the candidate for the
Advanced Encryption Standard (AES). It was selected from a list of five finalists, that
were themselves selected from an original list of more than 15 submissions.
Rijndael will begin to supplant the Data Encryption Standard (DES) - and later Triple
DES - over the next few years in many cryptography applications. The algorithm was
designed by two Belgian cryptologists, Vincent Rijmen and Joan Daemen, whose
surnames are reflected in the cipher's name. Rijndael has its origins in Square, an
earlier collaboration between the two cryptologists.
The Rijndael algorithm is a new generation symmetric block cipher that
supports key sizes of 128, 192 and 256 bits, with data handled in 128-bit blocks -
however, in excess of AES design criteria, the block sizes can mirror those of the
keys. Rijndael uses a variable number of rounds, depending on key/block sizes, as
follows:
9 rounds if the key/block size is 128 bits
11 rounds if the key/block size is 192 bits
13 rounds if the key/block size is 256 bits
(These counts exclude the final round; including it gives the 10, 12 and 14 rounds
quoted for AES earlier.)
Rijndael is a substitution linear transformation cipher, not requiring a Feistel network.
It uses three distinct invertible uniform transformations (layers). Specifically, these
are: Linear Mix Transform; Non-linear Transform and Key Addition Transform. Even
before the first round, a simple key addition layer is performed, which adds to
security. Thereafter, there are Nr-1 rounds and then the final round. The
transformations form a State when started but before completion of the entire
process.
The State can be thought of as an array, structured with 4 rows and the column
number being the block length divided by 32. The
cipher key similarly is an array with 4 rows, but the key length divided by 32 to give
the number of columns. The blocks can be interpreted as unidimensional arrays of 4-
byte vectors.
The exact transformations occur as follows: the byte subtransformation is nonlinear
and operates on each of the State bytes independently - the invertible S-box
(substitution table) is made up of 2 transformations. The shiftrow transformation
sees the State shifted over variable offsets. The shift offset values are dependent on
the block length of the State. The mixcolumn transformation treats the State columns
as polynomials over the Galois field GF(2^8) and multiplies them modulo x^4 + 1 by
a fixed polynomial. Finally, the roundkey transform is XORed to the
State. The key schedule helps the cipher key determine the round keys through key
expansion and round selection.
Overall, the structure of Rijndael displays a high degree of modular design, which
should make modification to counter any attack developed in the future much
simpler than with past algorithm designs.
Was the best choice really Rijndael?
Our Cryptography expert, Borys Pawliw comments: "The AES selection was always
going to be a compromise, balancing various factors such as overall security,
performance, and efficiency. As such, it was unlikely that the selection of any one
algorithm would receive unanimous praise from all quarters. Rijndael's selection has
been criticized by some because the algorithm does not appear to be as secure as
some of the other choices. This criticism is valid theoretically, but does not mean that
data secured using this algorithm is going to be unacceptably vulnerable to attack.
Although Rijndael may not have been the most secure algorithm from an academic
viewpoint, defenders claim that it is more than likely secure enough for all
applications in the real world and can be enhanced by simply adding more rounds.
Attacks on the algorithm have succeeded only in an extremely limited environment
and, while interesting from a mathematical viewpoint, appear to have little
consequence in the real world."

Reference https://searchsecurity.techtarget.com/definition/International-Data-Encryption-Algorithm

IDEA (International Data Encryption Algorithm) is an encryption algorithm developed
at ETH in Zurich, Switzerland. It uses a block cipher with a 128-bit key, and is
generally considered to be very secure. It is considered among the best publicly
known algorithms. In the several years that it has been in use, no practical attacks
on it have been published despite a number of attempts to find some. IDEA is
patented in the United States and in most of the European countries. The patent is
held by Ascom-Tech. Non-commercial use of IDEA is free. Commercial licenses can
be obtained by contacting Ascom-Tech.

CAST was developed in 1996 by Carlisle Adams and Stafford Tavares. CAST-128 can
use keys between 40 and 128 bits in length and will do between 12 and 16 rounds
of operations related to substitutions and transpositions, depending on key length.
CAST-128 is a Feistel-type block cipher with 64-bit blocks. CAST-256 was submitted
as an unsuccessful candidate for the AES competition. CAST-256 operates on 128-
bit blocks and with keys of 128, 160, 192, 224, and 256 bits. It performs 48 rounds
and is described in RFC 2612.

All of the algorithms in SAFER are patent-free. The algorithms were developed by
James Massey and work on either 64-bit input blocks (SAFER-SK64) or 128-bit
blocks (SAFER-SK128). A variation of SAFER is used as a block cipher in Bluetooth.

Blowfish is another example of a symmetric algorithm developed by Bruce Schneier.
It is considered to be an extremely fast cipher, and one of its extremely useful
advantages is that it requires very little system memory. It is also a Feistel-type
cipher in that it divides the input blocks into two halves and then uses them in XORs
against each other. However, it varies from the traditional Feistel ciphers in that
Blowfish does work against both halves, not just one. The Blowfish algorithm
operates with variable key sizes, from 32 up to 448 bits on 64-bit input and output
blocks.

Twofish was one of the finalists for the AES competition mentioned earlier. It is an
adapted version of Blowfish developed by a team of cryptographers led by Bruce
Schneier. It can operate with keys of 128, 192, or 256 bits on blocks of 128 bits. Just
like DES, it performs 16 rounds during the encryption and decryption process.

RC5 is a fast block cipher designed by Ron Rivest. The algorithm was designed to be
used in existing security products and in a number of internet protocols. It was
explicitly designed to be simple to implement in software, therefore, the algorithm
does not support any type of bit permutations. Rivest designed a lengthy sub-key
generation phase into the algorithm to make brute force key searching substantially
more difficult without slowing down conventional one-key uses of RC5. Today’s RC5
is a parameterized algorithm with a variable block size, a variable key size, and a
variable number of rounds. Allowable choices for the block size are 32, 64, and 128
bits. The number of rounds can range anywhere from 0 to 255, while the key can
range from 0 bits to 2040 bits in size. There are three routines in RC5: key
expansion, encryption, and decryption. In the key expansion routine, the user-
provided secret key is expanded to fill a key table whose size depends on the
number of rounds. The key table is then used in both encryption and decryption.
The encryption routine consists of three primitive operations: integer addition,
bitwise XOR, and variable rotation.
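A sketch of the RC5 encryption rounds for a 32-bit word size is shown below, highlighting the three primitive operations. The expanded key table S would normally come from RC5's key expansion routine; here it is filled with dummy values (an assumption) purely so the sketch runs.

W = 32
MASK = (1 << W) - 1

def rotl(x: int, n: int) -> int:
    # Data-dependent left rotation of a 32-bit word.
    n %= W
    return ((x << n) | (x >> (W - n))) & MASK

def rc5_encrypt_block(a, b, S, rounds):
    a = (a + S[0]) & MASK
    b = (b + S[1]) & MASK
    for i in range(1, rounds + 1):
        a = (rotl(a ^ b, b) + S[2 * i]) & MASK        # XOR, variable rotation, addition
        b = (rotl(b ^ a, a) + S[2 * i + 1]) & MASK
    return a, b

rounds = 12
S = [(0x9E3779B9 * i) & MASK for i in range(2 * rounds + 2)]   # dummy expanded key table
print(rc5_encrypt_block(0x01234567, 0x89ABCDEF, S, rounds))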

RC6 is a block cipher based on RC5 and, just like its predecessor, it is a variable
parameterized algorithm where the block size, the key size, and the number of
rounds are variable. The upper limit on the key size is 2040 bits, which experts
agree should certainly make it strong for quite a few years. When RC6 was
designed, two new features were built into it compared to RC5: the inclusion of
integer multiplication and the use of four 32-bit working registers instead of
RC5's two 32-bit working registers. Integer multiplication is used to increase the
diffusion achieved per round so that fewer rounds are needed and the speed of the
cipher can be increased.

RC4, a stream-based cipher, was developed in 1987 by Ron Rivest for RSA Data
Security and has become the most widely used stream cipher, being deployed, for
example, in WEP and Secure Sockets Layer/Transport Layer Security (SSL/TLS). RC4
can use a variable length key ranging from 8 to 2,048 bits (1 to 256 bytes) and has a
period greater than 10^100. This means that in implementations, it is possible to
ensure that the keystream should not repeat for at least that length. If RC4 is used
with a key length of at least 128 bits, there are currently no practical ways to attack
it. Confusion exists in the industry as to whether the weakness in WEP stems from
its use of RC4. The published successful attacks against the use of RC4
in WEP applications are actually related to problems with the implementation of
the algorithm, not the algorithm itself.

Reference https://searchsecurity.techtarget.com/definition/asymmetric-cryptography

Asymmetric cryptography, also known as public key cryptography, uses public and
private keys to encrypt and decrypt data. The keys are simply large numbers that
have been paired together but are not identical (asymmetric). One key in the pair
can be shared with everyone; it is called the public key. The other key in the pair is
kept secret; it is called the private key. Either of the keys can be used to encrypt a
message; the opposite key from the one used to encrypt the message is used for
decryption.
Many protocols like SSH, OpenPGP, S/MIME, and SSL/TLS rely on asymmetric
cryptography for encryption and digital signature functions. It is also used in
software programs, such as browsers, which need to establish a secure connection
over an insecure network like the Internet or need to validate a digital signature.
Encryption strength is directly tied to key size and doubling the key length delivers
an exponential increase in strength, although it does impair performance. As
computing power increases and more efficient factoring algorithms are discovered,
the ability to factor larger and larger numbers also increases.

For asymmetric encryption to deliver confidentiality, integrity, authenticity and non-
repudiability, users and systems need to be certain that a public key is authentic, that
it belongs to the person or entity claimed and that it has not been tampered with nor
replaced by a malicious third party. There is no perfect solution to this public key
authentication problem. A public key infrastructure (PKI) -- where trusted certificate
authorities certify ownership of key pairs and certificates -- is the most common
approach, but encryption products based on the Pretty Good Privacy (PGP) model --
including OpenPGP -- rely on a decentralized authentication model called a web of
trust, which relies on individual endorsements of the link between user and public
key.
How asymmetric encryption works
Asymmetric encryption algorithms use a mathematically-related key pair for
encryption and decryption; one is the public key and the other is the private key. If
the public key is used for encryption, the related private key is used for decryption
and if the private key is used for encryption, the related public key is used for
decryption.
Only the user or computer that generates the key pair has the private key. The public
key can be distributed to anyone who wants to send encrypted data to the holder of
the private key. It's impossible to determine the private key with the public one.
The two participants in the asymmetric encryption workflow are the sender and the
receiver. First, the sender obtains the receiver's public key. Then the plaintext is
encrypted with the asymmetric encryption algorithm using the recipient's public key,
creating the ciphertext. The ciphertext is then sent to the receiver, who decrypts the
ciphertext with his private key so he can access the sender's plaintext.
Because of the one-way nature of the encryption function, one sender is unable to
read the messages of another sender, even though each has the public key of the
receiver.
Examples of asymmetric cryptography
RSA (Rivest-Shamir-Adleman) -- the most widely used asymmetric algorithm -- is
embedded in the SSL/TLS protocols, which are used to provide communications
security over a computer network. RSA derives its security from the computational
difficulty of factoring large integers that are the product of two large prime numbers.
Multiplying two large primes is easy, but the difficulty of determining the original
numbers from the product -- factoring -- forms the basis of public key cryptography
security. The time it takes to factor the product of two sufficiently large primes is
considered to be beyond the capabilities of most attackers, excluding nation-state
actors who may have access to sufficient computing power. RSA keys are typically
1024- or 2048-bits long, but experts believe that 1024-bit keys could be broken in the
near future, which is why government and industry are moving to a minimum key
length of 2048-bits.
Elliptic Curve Cryptography (ECC) is gaining favor with many security experts as an
alternative to RSA for implementing public key cryptography. ECC is a public key
encryption technique based on elliptic curve theory that can create faster, smaller,
and more efficient cryptographic keys. ECC generates keys through the properties of
the elliptic curve equation.
To break ECC, one must compute an elliptic curve discrete logarithm, and it turns out
that this is a significantly more difficult problem than factoring. As a result, ECC key
sizes can be significantly smaller than those required by RSA yet deliver equivalent
security with lower computing power and battery resource usage making it more
suitable for mobile applications than RSA.
Uses of asymmetric cryptography
The typical application for asymmetric cryptography is authenticating data through
the use of digital signatures. Based on asymmetric cryptography, digital signatures
can provide assurances of evidence to the origin, identity and status of an electronic
document, transaction or message, as well as acknowledging informed consent by
the signer.
To create a digital signature, signing software -- such as an email program -- creates a
one-way hash of the electronic data to be signed. The user's private key is then used
to encrypt the hash, returning a value that is unique to the hashed data. The
encrypted hash, along with other information such as the hashing algorithm, forms
the digital signature. Any change in the data, even to a single bit, results in a different
hash value.
This attribute enables others to validate the integrity of the data by using the signer's
public key to decrypt the hash. If the decrypted hash matches a second computed
hash of the same data, it proves that the data hasn't changed since it was signed. If
the two hashes don't match, the data has either been tampered with in some way --
indicating a failure of integrity -- or the signature was created with a private key that
doesn't correspond to the public key presented by the signer -- indicating a failure of
authentication.
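A minimal signing/verification sketch, assuming the third-party pyca/cryptography package is available; it hashes and signs a message with an RSA private key and verifies it with the matching public key (verification raises an exception if either the message or the signature has been altered):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"wire 100 to account 42"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())   # hash, then sign with the private key

try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: integrity and origin confirmed")
except InvalidSignature:
    print("signature invalid: message altered or wrong key")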
A digital signature also makes it difficult for the signing party to deny having signed
something -- the property of non-repudiation. If a signing party denies a valid digital
signature, their private key has either been compromised or they are being
untruthful. In many countries, including the United States, digital signatures have the
same legal weight as more traditional forms of signatures.
Asymmetric cryptography can be applied to systems in which many users may need
to encrypt and decrypt messages, such as encrypted email, in which a public key can
be used to encrypt a message, and a private key can be used to decrypt it.
The SSL/TLS cryptographic protocols for establishing encrypted links between
websites and browsers also make use of asymmetric encryption.
Additionally, Bitcoin and other cryptocurrencies rely on asymmetric cryptography as
users have public keys that everyone can see and private keys that are kept
secret. Bitcoin uses a cryptographic algorithm to ensure that only the legitimate
owners can spend the funds.
In the case of the Bitcoin ledger, each unspent transaction output (UTXO) is typically
associated with a public key. So if user X, who has an UTXO associated with his public
key, wants to send the money to user Y, user X uses his private key to sign a
transaction that spends the UTXO and creates a new UTXO that's associated with
user Y's public key.
Asymmetric vs. symmetric cryptography
The main difference between these two methods of encryption is that asymmetric
encryption algorithms makes use of two different but related keys -- one key to
encrypt the data and another key to decrypt it -- while symmetric encryption uses the
same key to perform both the encryption and decryption functions.
Another difference between asymmetric and symmetric encryption is the length of
the keys. In symmetric cryptography, the length of the keys -- which is randomly
selected -- are typically set at 128-bits or 256-bits, depending on the level of security
that's needed.
However, in asymmetric encryption, there has to be a mathematical relationship
between the public and private keys. Because hackers can potentially exploit this
pattern to crack the encryption, asymmetric keys need to be much longer to offer the
same level of security. The difference in the length of the keys is so pronounced that
a 2048-bit asymmetric key and a 128-bit symmetric key provide just about an
equivalent level of security.
Additionally, asymmetric encryption is slower than symmetric encryption, which has
a faster execution speed.
History of asymmetric cryptography
Whitfield Diffie and Martin Hellman, researchers at Stanford University, first publicly
proposed asymmetric encryption in their 1976 paper, "New Directions in
Cryptography." The concept had been independently and covertly proposed by James
Ellis several years earlier, while he was working for the Government Communications
Headquarters (GCHQ), the British intelligence and security organization. The
asymmetric algorithm as outlined in the Diffie-Hellman paper uses numbers raised to
specific powers to produce decryption keys. Diffie and Hellman had initially teamed
up in 1974 to work on solving the key distribution problem.
The RSA algorithm, which was based on the work of Diffie, was named after its three
inventors -- Ronald Rivest, Adi Shamir and Leonard Adleman. They invented the RSA
algorithm in 1977, and published it in Communications of the ACM in 1978.
Today, RSA is the standard asymmetric encryption algorithm and it's used in many
areas, including TLS/SSL, SSH, digital signatures and PGP.
Benefits and disadvantages of asymmetric cryptography
The benefits of asymmetric cryptography include:
the key distribution problem is eliminated because there's no need for exchanging
keys.
security is increased as the private keys don't ever have to be transmitted or revealed
to anyone.
the use of digital signatures is enabled so that a recipient can verify that a message
comes from a particular sender.
it allows for non-repudiation so the sender can't deny sending a message.
Disadvantages include:
it's a slow process compared to symmetric cryptography, so it's not appropriate for
decrypting bulk messages.
if an individual loses his private key, he can't decrypt the messages he receives.
since the public keys aren't authenticated, no one really knows if a public key belongs
to the person specified. Consequently, users have to verify that their public keys
belong to them.
if a hacker identifies a person's private key, the attacker can read all of that
individual's messages.

Reference

The RSA algorithm is the basis of a cryptosystem -- a suite of cryptographic
algorithms that are used for specific security services or purposes -- which enables
public key encryption and is widely used to secure sensitive data, particularly when
it is being sent over an insecure network such as the internet.
RSA was first publicly described in 1977 by Ron Rivest, Adi Shamir and Leonard
Adleman of the Massachusetts Institute of Technology, though the 1973 creation of
a public key algorithm by British mathematician Clifford Cocks was kept classified by
the U.K.'s GCHQ until 1997.
Public key cryptography, also known as asymmetric cryptography, uses two different
but mathematically linked keys -- one public and one private. The public key can be
shared with everyone, whereas the private key must be kept secret.
In RSA cryptography, both the public and the private keys can encrypt a message;
the opposite key from the one used to encrypt a message is used to decrypt it. This
attribute is one reason why RSA has become the most widely used
asymmetric algorithm: It provides a method to assure the confidentiality, integrity,
authenticity, and non-repudiation of electronic communications and data storage.

Many protocols like Secure Shell, OpenPGP, S/MIME, and SSL/TLS rely on RSA for
encryption and digital signature functions. It is also used in software programs --
browsers are an obvious example, as they need to establish a secure connection over
an insecure network, like the internet, or validate a digital signature. RSA signature
verification is one of the most commonly performed operations in network-
connected systems.
Why the RSA algorithm is used
RSA derives its security from the difficulty of factoring large integers that are the
product of two large prime numbers. Multiplying these two numbers is easy, but
determining the original prime numbers from the total -- or factoring -- is considered
infeasible due to the time it would take using even today's supercomputers.
The public and private key generation algorithm is the most complex part of RSA
cryptography. Two large prime numbers, p and q, are generated using the Rabin-
Miller primality test algorithm. A modulus, n, is calculated by multiplying p and q.
This number is used by both the public and private keys and provides the link
between them. Its length, usually expressed in bits, is called the key length.

The public key consists of the modulus n and a public exponent, e, which is normally
set at 65537, as it's a prime number that is not too large. The e figure doesn't have to
be a secretly selected prime number, as the public key is shared with everyone.
The private key consists of the modulus n and the private exponent d, which is
calculated using the Extended Euclidean algorithm to find the multiplicative inverse
with respect to the totient of n.
Read on for a more detailed explanation of how the RSA algorithm works.
How does the RSA algorithm work?
Alice generates her RSA keys by selecting two primes: p=11 and q=13. The modulus is
n=p×q=143. The totient is ϕ(n)=(p−1)×(q−1)=120. She chooses 7 for her RSA public
key e and calculates her RSA private key using the Extended Euclidean algorithm,
which gives her 103.
Bob wants to send Alice an encrypted message, M, so he obtains her RSA public key
(n, e) which, in this example, is (143, 7). His plaintext message is just the number 9
and is encrypted into ciphertext, C, as follows:
M^e mod n = 9^7 mod 143 = 48 = C
When Alice receives Bob's message, she decrypts it by using her RSA private key (d,
n) as follows:
C^d mod n = 48^103 mod 143 = 9 = M
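The worked example can be checked directly with Python's built-in modular exponentiation:

p, q, e, d = 11, 13, 7, 103
n = p * q                                     # 143
assert (e * d) % ((p - 1) * (q - 1)) == 1     # 7*103 = 721 = 6*120 + 1
print(pow(9, e, n))                           # encryption: 9^7 mod 143 = 48
print(pow(pow(9, e, n), d, n))                # decryption: 48^103 mod 143 = 9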
To use RSA keys to digitally sign a message, Alice would need to create a hash -- a
message digest of her message to Bob -- encrypt the hash value with her RSA private
key, and add the encrypted hash (the signature) to the message. Bob can then verify that the message has been
sent by Alice and has not been altered by decrypting the hash value with her public
key. If this value matches the hash of the original message, then only Alice could have
sent it -- authentication and non-repudiation -- and the message is exactly as she
wrote it -- integrity.
Alice could, of course, encrypt her message with Bob's RSA public key --
confidentiality -- before sending it to Bob. A digital certificate contains information
that identifies the certificate's owner and also contains the owner's public key.
Certificates are signed by the certificate authority that issues them, and they can
simplify the process of obtaining public keys and verifying the owner.

RSA security
RSA security relies on the computational difficulty of factoring large integers. As
computing power increases and more efficient factoring algorithms are discovered,
the ability to factor larger and larger numbers also increases.

Encryption strength is directly tied to key size, and doubling key length can deliver an
exponential increase in strength, although it does impair performance. RSA keys are
typically 1024- or 2048-bits long, but experts believe that 1024-bit keys are no longer
fully secure against all attacks. This is why the government and some industries are
moving to a minimum key length of 2048-bits.
Barring an unforeseen breakthrough in quantum computing, it will be many years
before longer keys are required, but elliptic curve cryptography (ECC) is gaining favor
with many security experts as an alternative to RSA to implement public key
cryptography. It can create faster, smaller and more efficient cryptographic keys.
Modern hardware and software are ECC-ready, and its popularity is likely to grow, as
it can deliver equivalent security with lower computing power and battery resource
usage, making it more suitable for mobile apps than RSA. Finally, a team of
researchers, which included Adi Shamir, a co-inventor of RSA, has successfully
extracted a 4096-bit RSA key using acoustic cryptanalysis; however, any encryption
algorithm is vulnerable to attack.

My own definition:

Diffie–Hellman is a key negotiation algorithm and does not provide for message
confidentiality. It is used to enable two entities to exchange or negotiate a secret
symmetric key that will be used subsequently for message encryption using
symmetric key cryptography. The Diffie–Hellman algorithm can be extremely useful
for applications such as PKI and others where the generation of symmetric session
keys are required. It is often referred to as a session key negotiation
algorithm. Diffie–Hellman is based on the hardness of the discrete logarithm math problem.

Diffie–Hellman can be summarized as follows: It is a key agreement protocol
whereby two parties, without any prior arrangements, can agree upon a secret
symmetric key that is known only to them. This secret key can then be used, for
example, to encrypt further communications between the parties from that point
on using symmetric key cryptography.
The Diffie–Hellman key agreement requires that both the sender and recipient of a
message have their own private and public key pairs.

By combining one’s private key and the other party’s public key, both parties can
compute the same shared secret number that ends up being the symmetric session
key. A “session key” is a symmetric key that is used only for that particular session.
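A toy Diffie–Hellman exchange in Python with a deliberately tiny prime so the arithmetic is visible (real deployments use 2048-bit or larger prime groups, or elliptic curves):

import secrets

p, g = 23, 5                          # small public prime and generator (toy values)

a = secrets.randbelow(p - 2) + 1      # Alice's private value
b = secrets.randbelow(p - 2) + 1      # Bob's private value

A = pow(g, a, p)                      # Alice sends A to Bob
B = pow(g, b, p)                      # Bob sends B to Alice

shared_alice = pow(B, a, p)           # Alice combines her private value with Bob's public value
shared_bob = pow(A, b, p)             # Bob does the reverse
assert shared_alice == shared_bob     # both now hold the same session key material
print(shared_alice)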

The ElGamal cryptographic algorithm is based on the work of Diffie–Hellman, but it
also includes the ability to provide message confidentiality and digital signature
services, not just session key negotiation. Although not technically correct, some
people refer to ElGamal as a combination of the Diffie–Hellman and RSA algorithms.
The ElGamal algorithm is based on the same mathematical functions of discrete
logs.

The elliptic curve algorithm has the highest strength per bit of key length of any of
the asymmetric algorithms. The ability to use much shorter keys for elliptic curve
cryptography (ECC) implementations provides savings on computational power,
bandwidth, and storage. This makes ECC especially beneficial for implementations
in smart cards, wireless, and other similar application areas where those elements
may be lacking. Elliptic curve algorithms provide confidentiality, digital signatures,
and message authentication services. The attraction of ECC is that techniques for
solving the elliptic curve group discrete logarithm problem have not seen significant
improvement over the past number of years. This is good news for elliptic methods because it allows
us to use reduced key sizes to provide the same level of security as
traditional public key cryptography methods.

Hybrid cryptography is where we use the advantages of both symmetric and
asymmetric key cryptography. As you remember, symmetric is very fast but
problematic in the way of key distribution. Asymmetric, on the other hand, is very
slow but solves the problem of key distribution. Why not use both for what they are
each good at? This is referred to as a hybrid cryptography system. A hybrid system
operates as follows: the message itself is encrypted with a symmetric key, SK, and is
sent to the recipient. To allow the recipient to have the symmetric key required for
decryption, the symmetric key is encrypted with the public key of the recipient and
sent to the recipient. The recipient then decrypts the symmetric key with their
private key that no one else has. This provides the symmetric key to the recipient
only. The symmetric key can then be used to decrypt the message.
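A minimal hybrid-encryption sketch, assuming the third-party pyca/cryptography package is available: the message is encrypted with a fresh symmetric key (Fernet, which uses AES internally), and that symmetric key is wrapped with the recipient's RSA public key so only the holder of the private key can recover it.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the message with a fresh symmetric key, then wrap that key.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"the actual message")
wrapped_key = recipient_public.encrypt(sym_key, oaep)

# Recipient: unwrap the symmetric key with the private key, then decrypt the message.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))   # b'the actual message'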

An important part of electronic commerce and computerized transactions today is
the assurance that a transmitted message or data has not been modified, is indeed
from the person that the sender claims to be, and that the message was received by
the correct party. This is accomplished through cryptographic functions that
perform in several manners, depending on the business needs and level of trust
between the parties and systems.

The point is this, when receiving messages over untrusted networks such as the
internet, it is very important to ensure the integrity of the message. Integrity means
receiving exactly what was sent, without modification. The principle of integrity
assures that nothing changed without detection. In cryptography, this principle can
be referred to as message authentication.

Message authentication can be achieved using Message Digest security features.


Message digests come in two flavors: keyed and non-keyed.

Non-keyed message digests are made without a secret key and are called Message
Integrity Codes (MICs). Most asymmetric key digital signature schemes use non-keyed
message digests. Keyed message digests, known as Message Authentication
Codes (MACs), combine a message digest and a secret key. MACs require the sender
and the receiver to share a secret key ahead of time to be able to address integrity
properly. It is important to realize that the word “keyed” does not mean that the
message digest is signed (private key encrypted), instead, it means that the digest is
encrypted with a secret symmetric key.

A message digest is a small representation of a larger message produced by a
hashing algorithm. A message digest is used to ensure the integrity of information
and does not address confidentiality of the message.

A MAC (also known as a cryptographic checksum) is a small block of data that is
generated using a secret key and then appended to the message. When the
message is received, the recipient can generate their own MAC using the secret key,
and thereby know that the message has not changed either accidentally or
intentionally in transit. It is important to remember that this assurance is only as
strong as the trust the two parties have that no one else has access to the secret
symmetric key. A MAC is a small representation of a message and needs to have the
following characteristics:

• A MAC is much smaller than the message generating it.


• Given a MAC, it is impractical to compute the message that generated it.
• Given a MAC and the message that generated it, it is impractical to find another
message generating the same MAC.

Hashed MAC’ing implements a freely available hash algorithm (such as SHA-1 or
MD5) as a component within the MAC implementation. This allows ease of the
replacement of the hashing module if a new hash function ever becomes necessary.

The use of proven cryptographic hash algorithms also provides assurance of the
security of HMAC implementations. HMACs work by adding a secret key value to
the hash input function along with the source message. The HMAC operation
provides cryptographic strength similar to a hashing algorithm, except that it now
has the additional protection of a secret key and still operates nearly as rapidly as a
standard hash operation.
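Computing and verifying an HMAC with Python's standard library:

import hashlib
import hmac

secret_key = b"shared secret known only to sender and receiver"
message = b"transfer 100 to account 42"

tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()   # sent along with the message

# The receiver recomputes the tag with the shared key and compares in constant time.
expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))   # True -> message unchanged and sent by a key holder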

Hashing is defined as using a hashing algorithm to produce a message digest that
can be used to address integrity. The hash function accepts an input message of any
length and generates, through a one-way operation, a fixed-length output called a
message digest. The difference between what we discussed above is that a hashing
algorithm generates the message digest but does not use a secret key. There are
several ways to use message digests in communications, depending on the need for
the confidentiality of the message, the authentication of the source, the speed of
processing, and the choice of encryption algorithms. The requirements for a hash
function are that they must provide some assurance that the message has not
changed without detection and that it would be impractical to find any two
messages with the same message digest value.

Examples of very popular hashing algorithms are SHA-1 and MD5.
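For example, with Python's standard library, the digest length is fixed regardless of input size, and a one-character change produces a completely different digest:

import hashlib

print(hashlib.sha1(b"The quick brown fox").hexdigest())
print(hashlib.sha1(b"The quick brown fax").hexdigest())   # tiny change, completely different digest
print(hashlib.md5(b"The quick brown fox").hexdigest())    # MD5 shown for comparison only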

Five Key Properties of a Hash Function

1. Uniformly distributed: The hash output value should not be predictable.


2. Collision resistant: Difficult to find a second input value that would hash to the
same value as another input, and difficult to find any two inputs that hash to the
same value.
3. Difficult to invert: Should be one way, should not be able to derive the original
message by reversing the hash.
4. Computed on the entire message: The hash algorithm should use the entire
message to produce the digest.
5. Deterministic: Given an input x, it must always generate the same hash value, y.

Reference https://searchsecurity.techtarget.com/definition/MD5

The MD5 hashing algorithm is a one-way cryptographic function that accepts a
message of any length as input and returns as output a fixed-length digest value to
be used for authenticating the original message.

The MD5 hash function was originally designed for use as a secure cryptographic
hash algorithm for authenticating digital signatures. MD5 has been deprecated for
uses other than as a non-cryptographic checksum to verify data integrity and detect
unintentional data corruption.

Although originally designed as a cryptographic message authentication
code algorithm for use on the internet, MD5 hashing is no longer considered
reliable for use as a cryptographic checksum because researchers have
demonstrated techniques capable of easily generating MD5 collisions on
commercial off-the-shelf computers.

Ronald Rivest, founder of RSA Data Security and institute professor at MIT, designed
MD5 as an improvement to a prior message digest algorithm, MD4. Describing it
in Internet Engineering Task Force RFC 1321, "The MD5 Message-Digest Algorithm,"
he wrote:

The algorithm takes as input a message of arbitrary length and produces as output a
128-bit 'fingerprint' or 'message digest' of the input. It is conjectured that it is
computationally infeasible to produce two messages having the same message digest,
or to produce any message having a given pre-specified target message digest. The
MD5 algorithm is intended for digital signature applications, where a large file must
be 'compressed' in a secure manner before being encrypted with a private (secret) key
under a public-key cryptosystem such as RSA.

The IETF suggests MD5 hashing can still be used for integrity protection, noting
"Where the MD5 checksum is used inline with the protocol solely to protect against
errors, an MD5 checksum is still an acceptable use." However, it added that "any
application and protocol that employs MD5 for any purpose needs to clearly state the
expected security services from their use of MD5."

Message digest algorithm characteristics


Message digests, also known as hash functions, are one-way functions; they accept a
message of any size as input, and produce as output a fixed-length message digest.

MD5 is the third message digest algorithm created by Rivest. All three (the others
are MD2 and MD4) have similar structures, but MD2 was optimized for 8-bit
machines, in comparison with the two later formulas, which are optimized for 32-bit
machines. The MD5 algorithm is an extension of MD4, which the critical review found
to be fast, but possibly not absolutely secure. In comparison, MD5 is not quite as fast
as the MD4 algorithm, but offered much more assurance of data security.

How MD5 works


The MD5 message digest hashing algorithm processes data in 512-bit blocks, broken
down into 16 words composed of 32 bits each. The output from MD5 is a 128-bit
message digest value.

Computation of the MD5 digest value is performed in separate stages that process
each 512-bit block of data along with the value computed in the preceding stage. The
first stage begins with the message digest values initialized using
consecutive hexadecimal numerical values. Each stage includes four message digest
passes which manipulate values in the current data block and values processed from
the previous block. The final value computed from the last block becomes the MD5
digest for that block.

MD5 security
The goal of any message digest function is to produce digests that appear to
be random. To be considered cryptographically secure, the hash function should
meet two requirements: first, that it is impossible for an attacker to generate a
message matching a specific hash value; and second, that it is impossible for an
attacker to create two messages that produce the same hash value.

MD5 hashes are no longer considered cryptographically secure, and they should not
be used for cryptographic authentication.

In 2011, the IETF published RFC 6151, "Updated Security Considerations for the MD5
Message-Digest and the HMAC-MD5 Algorithms," which cited a number of recent
attacks against MD5 hashes, especially one that generated hash collisions in a minute
or less on a standard notebook and another that could generate a collision in as little
as 10 seconds on a 2.66 GHz Pentium 4 system. As a result, the IETF suggested that
new protocol designs should not use MD5 at all, and that the recent research attacks
against the algorithm "have provided sufficient reason to eliminate MD5 usage in
applications where collision resistance is required such as digital signatures."

11
The original SHA was developed by NIST in the United States in 1993 and was issued
as Federal Information Processing Standard (FIPS) 180. A revised version (FIPS 180-
1) was issued in 1995 as SHA-1 (RFC 3174) with some improvements. SHA was
based on the previous MD4 algorithm, whereas SHA-1 follows the logic of the MD5
hashing algorithm described above. SHA-1 operates on 512-bit blocks. The output
hash, or message digest, is 160 bits in length. The processing includes four rounds
of operations of 20 steps each. As with MD5, several attacks against the SHA-1 algorithm have since been described that attempt to find collisions, despite SHA-1 being considered considerably stronger than MD5. NIST has issued FIPS 180-4, which recognizes SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256 as part of the Secure Hash Standard. The output lengths of these digests vary from 160 to 512 bits, typically identified by the number written after the SHA letters.
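A short Python sketch using the standard hashlib module can confirm the digest lengths quoted above (SHA-512/224 and SHA-512/256 are omitted here because their availability through hashlib depends on the underlying OpenSSL build, which is an assumption about the environment).

```python
import hashlib

# Digest lengths of several Secure Hash Standard algorithms, in bits.
message = b"CISSP Domain notes"
for name in ("sha1", "sha224", "sha256", "sha384", "sha512"):
    h = hashlib.new(name, message)
    print(name, h.digest_size * 8)   # prints 160, 224, 256, 384, 512
```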

12
HAVAL is a hashing algorithm with a variable length output message digest. It
combines a variable length output with a variable number of rounds of operations
on 1,024-bit input blocks. The output message digest may be 128, 160, 192, 224, or
256 bits, and the number of rounds may vary from three to five. That gives 15
possible combinations of operations. HAVAL’s claim to fame is that it can operate 60 percent faster than MD5 when only three rounds are used and is just as fast as MD5 when it performs five rounds of operation.

13
The original algorithm (RIPEMD-128) has the same vulnerabilities as MD4 and MD5
and led to the improved RIPEMD-160 version. The output for RIPEMD-160 is 160
bits, and it operates similarly to MD5 on 512-bit blocks. It does twice the processing
of SHA-1, performing five paired rounds of 16 steps each for 160 operations. As
with any other hashing
algorithm, the benefit of increasing the size of the message digest output is to
provide better protection against collisions, where two different messages produce
the same message digest value.

Typically, attacks against hashing functions take the form of finding collisions.
There are two primary ways to attack hash functions:
• Brute force
• Cryptanalysis

Over the past several years, extensive research has been done on attacks against various hashing algorithms, such as MD5 and SHA-1, and both are susceptible to cryptographic attacks. The attacker's goal is to defeat the one-way property of the hash function by reconstructing the original message from its hash value, to find another message with the same hash value (a second preimage), or to find any pair of messages with the same hash value (a collision). A brute force attack pursues these goals by exhaustively trying inputs, while cryptanalysis exploits weaknesses in the algorithm itself.

The Birthday Paradox/Birthday Attack
The birthday paradox is an interesting and surprising mathematical condition that describes the ease of finding two people with the same birthday (month and day) in a group of people. If one considers that there are 365 possible birthdays (not including leap years and assuming that birthdays are spread evenly across all possible dates), then one might intuitively expect to need roughly 183 people together to have a 50 percent probability that two of those people share the same birthday.

But if you work it out mathematically, once there are 23 or more people together in a room, there is a greater than 50 percent probability that two of them share the same birthday. The reason is that a group of 23 people contains 253 distinct pairings, given by the formula n(n − 1)/2 = (23 × 22)/2 = 253, and each pairing is another chance for a match. The probability keeps rising, so that once 100 people are together, the chance of two of them sharing a birthday is greater than 99.99 percent. This is referred to as the birthday paradox.
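A quick way to check these numbers is to compute the complementary probability that no two people share a birthday; a minimal Python sketch:

```python
# Probability that at least two of n people share a birthday,
# assuming 365 equally likely birthdays (leap years ignored).
def birthday_collision_probability(n: int) -> float:
    p_no_match = 1.0
    for i in range(n):
        p_no_match *= (365 - i) / 365
    return 1 - p_no_match

print(birthday_collision_probability(23))    # about 0.507 -- already over 50 percent
print(birthday_collision_probability(100))   # greater than 0.9999
```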

So why is this discussion about birthdays and the birthday paradox important when discussing attacks against hashing algorithms?

The answer is that finding a collision between two messages and their hash values may be much easier than one might believe, for the same reason as the birthday paradox; the mathematics is very similar to the statistics of finding two people with the same birthday. As we have seen, a most important consideration when evaluating the strength of a hashing algorithm is its resistance to collisions. For a 160-bit hash, finding a message that matches a given hash value (a preimage) requires on the order of 2 raised to the power of 160 attempts, but finding any two messages that collide (a birthday attack) requires only on the order of 2 raised to the power of 80 (that is, 160/2) attempts. This matters because a hash is a representation of the message and not the message itself. As part of an attack, the attacker does not want to find an identical message; the attacker wants to find out how to:

• Change the message contents to what the attacker wants it to read and still have
the same digest value

• Cast some doubt on the authenticity of the original message by demonstrating that another message has the same value as the original

The hashing algorithm must be resistant to a birthday-type attack that would allow
the attacker to feasibly accomplish his goals.

Digital Signatures – Non-repudiation
Non-repudiation
The word “repudiation” means the ability to deny, so “non-repudiation” means the inability to deny. In cryptography, non-repudiation is a service that ensures the sender cannot deny that a message was actually sent and that its integrity is intact, and the receiver cannot claim to have received a different message than the one that was actually sent. Non-repudiation is achieved through digital signatures and PKI. The process is this: the message is signed using the sender’s private key, and when the recipient receives the message, they may use the sender’s public key to validate the signature. While this proves the integrity of the message, it does not by itself establish who owns the private key used to sign it. For non-repudiation to be valid, a CA must attest to the association between the key pair and the sender, proving that the private key used to sign the message belongs to that entity.
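To make the sign-then-verify flow concrete, here is a minimal sketch using the third-party Python cryptography package (an assumption about the environment; any RSA- or ECDSA-capable library would do). It demonstrates signing with a private key and verifying with the corresponding public key; it does not, by itself, provide the CA attestation discussed above.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Generate a key pair (in practice the private key is protected, and the
# public key is bound to the sender's identity by a CA-issued certificate).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Transfer the agreed amount to account 12345"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

try:
    # verify() raises InvalidSignature if the message or signature was altered.
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid")
except InvalidSignature:
    print("signature INVALID")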

3
We will study this in further detail during our ISO 27001 and PCI DSS audit courses
inshAllah.

Cryptographic Lifecycle
All cryptographic functions, systems and implementations have a useful life. In
cryptography, the word “broken” typically means different things, depending on the
application. A cryptographic function or implementation is considered broken or no
longer effective when one of the following conditions is met:
For a hashing function:
• Collisions or hashes can be reliably reproduced in an economically feasible
fashion without the original source.
• When an implementation of a hashing function allows a side channel attack. A
side channel attack in cryptography is defined as targeting the weakness of the
“implementation” of the algorithm and not the algorithm itself.

For an encryption system:
• A cipher is decoded without access to the key in an economically feasible fashion.

• When an implementation of an encryption system allows for the unauthorized
disclosure of information in an economically feasible fashion.
• When a private key has been compromised in asymmetric key cryptography.

4
5
6
Cryptography is considered in most countries to be on par with munitions, a military tool, and may be managed through laws written to control the distribution of military equipment. Some countries do not allow any cryptographic tools to be used by their citizens, and others have laws that control the use of cryptography, usually based on key length and the strength of algorithms. This is because the key length is one of the most understandable ways of gauging the strength of a cryptosystem.
International export controls may be employed by governments to limit the shipment of products containing strong cryptography to countries that the exporting government trusts to use them in a friendly way. Most countries frame their national security concerns around cryptography by identifying specific technologies that would be detrimental to their national defense and therefore need to be controlled through export regulations. As a result of these export controls, many vendors market two versions of their products: one version with strong encryption and another with weaker encryption that is sold in other countries.

7
We will cover this in further detail during our ISO 27001 and PCI DSS courses inshAllah.

8
One important aspect of key management is to ensure that the same key used in
encrypting a given message by a sender is the same key used to decrypt the
message by the intended receiver. The problem is how to exchange the proper keys
or other needed information so that no one else can obtain or deduce a copy. This
is referred to as the key distribution problem. One solution is to protect the
symmetric session key with a special purpose long-term use key called a key
encrypting key (KEK); therefore, KEKs can be used as part of key distribution or key
exchange processes.

In cryptography, the process of using a KEK to protect session keys is appropriately called key wrapping. Key wrapping uses symmetric ciphers to securely encrypt (thus encapsulating) a plaintext key along with any associated integrity information and data. Key wrapping can be used when protecting session keys in untrusted storage or when sending them over an untrusted transport mechanism. Key wrapping or encapsulation using a KEK can be accomplished using either symmetric or asymmetric ciphers. If the cipher is a symmetric KEK, both the sender and the receiver will need a copy of the same key. If using an asymmetric cipher with public and private key properties to encapsulate a session key, both the sender and the receiver will need each other’s public keys. In today’s applications, protocols such as SSL, PGP, and S/MIME use the services of KEKs to provide session key confidentiality and integrity, and sometimes to authenticate the binding of the session key originator and the session key itself, to make sure the session key came from the real sender and not from someone pretending to be an authorized individual.
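As a sketch of key wrapping with a symmetric KEK, recent versions of the third-party Python cryptography package (an assumption about the environment) implement the AES key wrap construction from RFC 3394:

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)          # long-term key-encrypting key (KEK)
session_key = os.urandom(16)  # short-lived symmetric session key

wrapped = aes_key_wrap(kek, session_key)    # ciphertext safe to store or transmit
unwrapped = aes_key_unwrap(kek, wrapped)    # receiver with the KEK recovers the key

assert unwrapped == session_key
```

The session key never travels or rests in the clear; only a party holding the KEK can unwrap it.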

Key Distribution
Key distribution is one of the most important aspects of key management. As we
have discussed, secure key distribution is the most important issue with symmetric
key cryptography. Key distribution is the process of getting a key from the point of its
generation to the point of its intended use. This problem is
more difficult in symmetric key algorithms, where it is necessary to protect the key
from disclosure in the process. This step must be performed using a channel separate
from the one in which the traffic moves. Keys can be distributed in a number of ways.
For example, two people who wish to perform secure key
exchange can use a medium other than that through which secure messages will be
sent. This is called out-of-band key exchange. Even though out of band is the secure
way to distribute symmetric keys, this concept is not very scalable beyond a few
people and becomes very difficult as the number of people involved grows.
Asymmetric key encryption provides a means to allow members of a group to
conduct secure transactions spontaneously. The receiver’s public key certificate,
which contains the receiver’s public key, is retrieved by the sender from the key
server and is used as part of a public key encryption scheme, such as S/MIME, PGP, or
even SSL to encrypt a message and send it. The digital certificate is the medium that
contains the public key of each member of the group and makes the key portable,
scalable, and easier to manage than an out-of-band method of key exchange.

Key Storage and Destruction


All keys need to be protected against modification, and secret and private keys need
to be protected against unauthorized disclosure. Methods for protecting stored keys
include trusted, tamperproof hardware security modules, passphrase protected
smart cards, key wrapping the session keys using long-term storage KEKs, splitting
cipher keys and storing in physically separate storage locations, and protecting keys
using strong passwords and passphrases, key expiry, and the like. Keys may be
protected by the integrity of the storage mechanism itself. For example, the
mechanism can be designed so that once the key is installed, it cannot be observed
from outside the encryption mechanism itself. Indeed, some key storage devices are
designed to self-destruct when subjected to forces that might disclose the key.
Alternatively, the key can be stored in an encrypted form so that knowledge of the
stored form does not disclose information about the behavior of the device under
the key. To guard against a long-term cryptanalytic attack, every key must have an expiration date after which it is no longer valid. The key length must be long enough
to make the chances of cryptanalysis before key expiration extremely small. The
validity period for a key pair may also depend on the circumstances in which the key needs to be used. Keys must be disposed of and destroyed in such a way as to resist disclosure: at the end of its lifecycle, a key must be properly destroyed so that it is impossible to regenerate or reconstruct it.

1
Cryptanalysis is defined as the study of techniques for attempting to defeat
cryptographic methods and techniques and, more generally, information security
services protected or achieved by cryptography. Since in cryptography, the key is the
only element that provides security, cryptanalysis is generally all about finding or
deducing what the key is.

2
Reference https://searchsecurity.techtarget.com/definition/brute-force-cracking
A brute force attack is a trial and error method used by application programs to
decode encrypted data such as passwords or Data Encryption Standard (DES) keys,
through exhaustive effort (using brute force) rather than employing intellectual
strategies. Just as a criminal might break into, or "crack" a safe by trying many
possible combinations, a brute force attacking application proceeds through all
possible combinations of legal characters in sequence.
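A toy illustration of this exhaustive approach, assuming an attacker has captured the MD5 hash of a hypothetical 4-digit PIN and simply tries every candidate:

```python
import hashlib
from itertools import product
from string import digits

# Hypothetical captured value: the MD5 hash of a 4-digit PIN.
target = hashlib.md5(b"7391").hexdigest()

# Brute force: try every 4-digit combination until one hashes to the target.
for candidate in product(digits, repeat=4):
    pin = "".join(candidate)
    if hashlib.md5(pin.encode()).hexdigest() == target:
        print("recovered PIN:", pin)
        break
```

With only 10,000 possibilities the search finishes instantly; real keys and passwords rely on a key space large enough to make this work factor impractical.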

3
Reference https://searchsecurity.techtarget.com/tip/The-ABCs-of-ciphertext-exploits-and-
other-cryptography-attacks

Ciphertext-only attack
The ciphertext-only attack is one of the most difficult cryptography attacks because
the attacker has so little information to start with. All the attacker starts with is
some unintelligible data that he suspects may be an important encrypted message.
The attack becomes simpler when the attacker is able to gather several pieces of
ciphertext and thereby look for trends or statistical data that would help in the
attack. Adequate encryption is defined as encryption that is strong enough to make
brute force attacks impractical because there is a higher work factor than the
attacker wants to invest into the attack. Moore’s law states that available computing
power doubles every 18 months. Experts suggest this advance may be slowing;
however, encryption strength considered adequate today will probably not be
sufficient a few years from now due to advances in CPU and GPU technologies and
new attack techniques. Security professionals should consider this when defining
encryption requirements.

4
Reference https://searchsecurity.techtarget.com/tip/The-ABCs-of-ciphertext-exploits-and-
other-cryptography-attacks

Known plaintext
For a known plaintext attack, the attacker has access to both the ciphertext and the
plaintext versions of the same message. The goal of this type of attack is to find the
link -- the cryptographic key that was used to encrypt the message. Once the key
has been found, the attacker would then be able to decrypt all messages that had
been encrypted using that key. In some cases, the attacker may not have an exact
copy of the message; if the message was known to be an e-commerce transaction,
the attacker knows the format of such transactions even though he does not know
the actual values in the transaction.

5
Reference https://searchsecurity.techtarget.com/tip/The-ABCs-of-ciphertext-exploits-and-
other-cryptography-attacks

Chosen plaintext
To execute a chosen plaintext attack, the attacker knows the algorithm used for the encryption, or even better, he may have access to the machine used to do the
encryption and is trying to determine the key. This may happen if a workstation
used for encrypting messages is left unattended. Now the attacker can run chosen
pieces of plaintext through the algorithm and see what the result is. This may assist
in a known plaintext attack. An adaptive chosen plaintext attack is where the
attacker can modify the chosen input files to see what effect that would have on
the resulting ciphertext.

6
Reference https://searchsecurity.techtarget.com/tip/The-ABCs-of-ciphertext-exploits-and-
other-cryptography-attacks

Chosen ciphertext
This is similar to the chosen plaintext attack in that the attacker has access to the
decryption device or software and is attempting to defeat the cryptographic
protection by decrypting chosen pieces of ciphertext to discover the key. An
adaptive chosen ciphertext would be the same, except that the attacker can modify
the ciphertext prior to putting it through the algorithm. Asymmetric cryptosystems
are vulnerable to chosen ciphertext attacks. For example, the RSA algorithm is
vulnerable to this type of attack. The attacker would select a section of plaintext,
encrypt it with the victim’s public key, then decrypt the ciphertext to get the
plaintext back. Although this does not yield any new information to the attacker, the attacker can exploit properties of RSA by selecting blocks of data that, when processed using the victim’s private key, yield information that can be used in cryptanalysis. The weakness of asymmetric encryption against chosen ciphertext attacks can be mitigated by including random padding in the plaintext before encrypting the data. Security vendor RSA Security recommends modifying the plaintext by using a process called optimal asymmetric encryption padding (OAEP). RSA encryption with OAEP is defined in PKCS #1 v2.1.

7
Linear cryptanalysis
This is a known plaintext attack that uses linear approximations to describe the behavior of the block cipher. Given sufficient pairs of plaintext and corresponding ciphertext, one can obtain bits of information about the key, and increased amounts of data will usually give a higher probability of success. There have been a variety of enhancements and improvements to the basic attack. For example, differential-linear cryptanalysis combines elements of differential cryptanalysis with those of linear cryptanalysis.

8
Differential cryptanalysis
This is a chosen plaintext attack that studies how differences in pairs of plaintext inputs propagate into differences in the resulting ciphertext. By analyzing these difference patterns across many plaintext pairs, an attacker can recover information about the key and the algorithm's behavior. (Attacks that instead measure the exact execution times and power consumed by the crypto device are side-channel attacks, covered under implementation attacks below.)

9
Implementation attacks
Implementation attacks are some of the most common and popular attacks against
cryptographic systems due to their ease and reliance on system elements outside of
the algorithm. The main types of implementation attacks include:
Side-channel attacks are passive attacks that rely on a physical attribute of the
implementation such as power consumption/emanation. These attributes are
studied to determine the secret key and the algorithm function. Some examples of
popular side channels include timing analysis and electromagnetic differential
analysis.
Fault analysis attempts to force the system into an error state to gain erroneous
results. By forcing an error, gaining the results and comparing it with known good
results, an attacker may learn about the secret key and the algorithm.
Probing attacks attempt to watch the circuitry surrounding the cryptographic
module in hopes that the complementary components will disclose information
about the key or the algorithm. Additionally, new hardware may be added to the
cryptographic module to observe and inject information.

10
11
Replay attack
In this attack, the attacker disrupts and damages processing by resending repeated files to the host. If there are no checks such as time-stamping, the use of one-time tokens, or sequence verification codes in the receiving software, the system might process duplicate files.

12
Algebraic
Algebraic attacks are a class of techniques that rely for their success on block
ciphers exhibiting a high degree of mathematical structure. For instance, it is
conceivable that a block cipher might exhibit a group structure. If this were the
case, it would then mean that encrypting a plaintext under one key and then
encrypting the result under another key would always be equivalent to single
encryption under some other single key. If so, then the block cipher would be
considerably weaker, and the use of multiple encryption cycles would offer no
additional security over single encryption.

1
Rainbow table
Hash functions map plaintext into a hash. Because the hash function is a one-way process, one should not be able to determine the plaintext from the hash itself. To determine a given plaintext from its hash, there are two ways to do that:
1. Hash each candidate plaintext until a matching hash is found; or
2. Hash each candidate plaintext, but store each generated hash in a table that can be used as a lookup table so hashes do not need to be generated again. A rainbow table is a lookup table of sorted hash outputs. The idea here is that storing precomputed hash values in a rainbow table that one can later refer to saves time and computing resources when attempting to recover the plaintext from its hash value.
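A full rainbow table uses hash chains and reduction functions to trade time for space; the simpler precomputed lookup table it is often contrasted with can be sketched in a few lines of Python (a toy example over a tiny, hypothetical wordlist):

```python
import hashlib

wordlist = ["password", "letmein", "dragon", "qwerty"]   # hypothetical candidates

# Precompute once: map each hash value back to its plaintext.
lookup = {hashlib.sha1(w.encode()).hexdigest(): w for w in wordlist}

# Later, a captured hash can be reversed with a single dictionary lookup.
captured_hash = hashlib.sha1(b"dragon").hexdigest()
print(lookup.get(captured_hash, "not in table"))   # -> dragon
```

Salting each password before hashing defeats this kind of precomputation, because the same plaintext no longer produces the same stored hash.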

2
Frequency analysis
This attack works closely with several other types of attacks. It is especially useful
when attacking a substitution cipher where the statistics of the plaintext language
are known. In English, for example, some letters will appear more often than others
will, allowing an attacker to assume that those letters may represent an E or S.
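A letter-frequency count is easy to produce with the Python standard library; a sketch over a hypothetical Caesar-shifted ciphertext:

```python
from collections import Counter

ciphertext = "WKLV LV D VXEVWLWXWLRQ FLSKHU"   # hypothetical shifted text
counts = Counter(c for c in ciphertext if c.isalpha())

# The most common ciphertext letters likely map to common plaintext
# letters such as E, T, A, or S, giving the attacker a starting point.
print(counts.most_common(3))
```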

3
Factoring Attack
This attack is aimed at the RSA algorithm specifically. Because that algorithm uses the product of large prime numbers to generate the public and private keys, this attack attempts to recover the private key by factoring the modulus contained in the public key.

4
Dictionary attack
The dictionary attack is used most commonly against password files. It exploits the
poor habits of users who choose simple passwords based on natural words. The dictionary attack merely hashes all of the words in a dictionary and then checks whether each resulting hash matches a hashed password stored in the SAM file or other password file.

5
6
This attack has been successful against certain cryptography implementations. If the
random number generator used by
cryptosystems is too predictable, it may give attackers the ability to guess or predict
the random numbers that are very critical in setting up initialization vectors in
cryptography systems. With this information in hand, the attacker is much more
likely to run a successful attack.

7
Most cryptosystems will use temporary files to perform their calculations. If these
files are not deleted and overwritten, they may be compromised and lead an
attacker to the message in plaintext.

8
Social engineering for key discovery
This is the most common type of attack and usually the most successful. All
cryptography attacks rely to some extent on humans to implement and operate.
Unfortunately, this is one of the greatest vulnerabilities and has led to some of the
greatest compromises of a nation’s or organization’s secrets or intellectual property.
Through coercion, bribery or befriending people in positions of responsibility, spies
or competitors are able to gain access to systems without having any technical
expertise.

9
Physical security plans and infrastructure are often designed, implemented, and
operated by physical security specialists in larger organizations, and that infrastructure is typically managed outside of IT or IT security control. However, the CISSP MUST understand physical security fundamentals
in order to do the following:
• Assess the risk reduction value of physical security controls
• Communicate physical security needs to physical security managers
• Identify risks to Information Security due to physical security weaknesses

While the CISSP may never actually design or implement physical security in a larger
organization, they may very well be required to implement physical security
elements in smaller organizations. It is also vital for the CISSP to understand how good or bad physical security affects information system security, regardless of organization size.
A role of the CISSP in some cases may be to translate information security needs or
requirements in such a way that the physical security or facilities operators can
understand those needs in their terms.

1
Apply Security Principles to Site and Facility Design
Physical design should support confidentiality, integrity, and availability of
information systems and must consider human safety and external factors as well.
Physical security at the facility level does support confidentiality, integrity, and
availability at the information system level. Facility design absolutely supports
system availability and can have a particularly high impact on continuity of
operations and disaster recovery.

Physical Design that Supports Confidentiality, Integrity, and Availability (CIA)

Physical design elements can protect information systems from unauthorized access. They can enable auditing or observation of sensitive physical access areas, such as server rooms or wiring infrastructure, and either complement or simplify the information system controls that must be applied to achieve adequate overall security. Facilities management ensures robust services (e.g., power, cooling) to information systems and provides backup or redundant capabilities.

2
Physical Design that Supports Human Safety
Some physical design elements directly support human safety. It is important to
ensure the controls remain in place as security controls are applied. In some cases,
physical security restrictions could imperil human safety and that must be avoided.
For example, physical access restrictions could impede building evacuation during an
emergency and must be designed to allow rapid exit while still protecting against
improper entry. In other cases, facility modifications done to support information
systems could necessitate additional human safety controls to be installed. This might
include additional emergency alarms (audible, visible), new or updated egress routes,
or additional safety equipment. Information systems and their support elements
(e.g., UPS, HVAC) consume large amounts of power, and the power terminals are often located with the equipment. This may necessitate emergency power shutoff
switches (big red button on the wall) or equipment shutoffs to ensure electrical
accidents are minimized. Additionally, equipment lockouts for power may be
advisable. These are manual or physical lock latches that physically lock circuit
breakers or switches in the off position while staff are exposed to power cabling.

2
We will cover the following standards later in the CODS, covering the key considerations for building a data center or a secure facility:

ISO/IEC TS 22237-1:2018 Information technology — Data centre facilities and infrastructures — Part 1: General concepts
ISO/IEC TS 22237-2:2018 Information technology — Data centre facilities and
infrastructures — Part 2: Building construction
ISO/IEC TS 22237-3:2018 Information technology — Data centre facilities and
infrastructures — Part 3: Power distribution
ISO/IEC TS 22237-4:2018 Information technology — Data centre facilities and
infrastructures — Part 4: Environmental control
ISO/IEC TS 22237-5:2018 Information technology — Data centre facilities and
infrastructures — Part 5: Telecommunications cabling infrastructure
ISO/IEC TS 22237-6:2018 Information technology — Data centre facilities and
infrastructures — Part 6: Security systems
ISO/IEC TS 22237-7:2018 Information technology — Data centre facilities and
infrastructures — Part 7: Management and operational information

3
The following list includes top level design considerations for physical security and
facilities:
• Personnel policy and procedure
• Personnel screening
• Workplace violence prevention
• Response protocols and training
• Mail screening
• Shipping and receiving
• Property ID and tracking
• Parking and site security
• Site and building access control
• Video surveillance
• Internal access control
• Infrastructure protection
• Onsite redundancy
• Structural protections

Implement and Manage Physical Security


To implement effective physical security, a physical risk assessment consistent with
the Risk Assessment described in Domain 1 should be conducted. It should consider
potential human action, natural disaster, industrial accident, equipment failure, and
so forth. As in information security, a set of layered physical protections and
countermeasures for identified physical risks must be developed so that the
protections are commensurate with the risk assessment. For example, the physical
and facility controls associated with a foreign embassy level of protection would be
very different from those needed to mitigate the physical risks associated with a
small remote office of a commercial business.

One important consideration is that physical risk controls will impact information
system design. For example, weak physical controls may necessitate more complex
information system protections to compensate, while strong physical protections may
lower the overall risk of an information system and allow for less costly or
complicated controls to be applied at the information system level. Just as
information system controls must be monitored for effectiveness, physical controls
must also be monitored and tested for effectiveness. This is especially true for
controls associated with human safety, continuity of operations, disaster recovery,
and emergency backups.

3
Perimeter Security Controls
The layers of perimeter controls that may exist are shown above in the picture. This
model is based on a campus or multi-structure type site, but it can be applied to a
single building or facility. In cases where an organization is located on a single floor
or office space within a larger facility, there may be limited control over the
perimeter security controls, but they should still be evaluated for effectiveness and
any positive or negative impacts.

4
Surrounding areas concerns include the following:

Roadways: Roads close to or adjacent to the site.

Waterways: Adjacent or crossing the site. This may include navigable waterways or
small drainage features if they impact the site security.

Geography: Terrain of the site in terms of potential visibility limits, concealment opportunities, or natural barriers to entry.

Lines of sight: Areas where visibility is limited by features or structures are a concern.

5
Associated considerations include the following:

• Is the facility visible from roads?


• Is there a potential for vehicle borne threats?
• Where are the vehicular and pedestrian access points?
• Is there adequate fencing, or impassible perimeter landscaping (natural fence)?

Areas to assess for site entry and exit points include the following:

Vehicular: Are vehicular access points protected against credible vehicular threats?

Public/customer/visitor: Are there separate entry controls for public, customer, or visitor access?

Staff/employee: Do staff or employees have dedicated controlled access points?

Delivery/truck: Is there a delivery or truck entrance, and how is it controlled?

6
Pedestrian: Are there controlled pedestrian entry points to the site?

Considerations for site entry and exit:

Access controls: What are the access controls to enter or leave the site—badge,
proximity card, guard monitored?

Surveillance: Is there sufficient surveillance capability to cover site entry and exit
points?

Lighting: Is lighting sufficient to allow humans or video systems to adequately make subject identification in all light conditions?

Intrusion detection: Are sensors or intrusion detection devices installed on unattended or unmonitored access points?
Barriers/traffic control: Are barriers in place or available for traffic control at any or all
of the vehicular access points?

At larger sites, there may be external facilities that include the following:
• Parking structures/lots
• Utilities components
• Electric transformers/lines
• Telecommunications
• Landscaping

For these, consider the following:

Lighting:
• Does the lighting provide sufficient illumination under all conditions for human and/or video identification of subjects?
• Does the lighting limit shadow areas or areas of no visibility during darkness?

Surveillance: Does surveillance cover areas where security or human safety is a concern?

Intrusion detection: Are alarms or sensors installed in unattended external buildings or facilities?

Lines of sight: Are lines of sight sufficient and dead space eliminated?

Operational Facilities are the following:


• Where employees work
• Where IT operates

6
For these, consider the following:

Exterior lighting and surveillance: Appropriate to expected threats. Lighting is of sufficient brightness and coverage to limit shadows and make human or video identification of subjects possible.

Building materials: Appropriate for the level of security required.

Doors, windows, walls: Are of the appropriate type and security level to mitigate expected risks.

Entry/exit points and access controls: Unattended access conditions, guard monitoring, video monitoring.

Staff/employee entrance: Is there a staff-only entrance, and how is it controlled? Attended, unattended?

Public/customer entrance: Is there a public or customer entrance with different security needs from the staff entrance?

Delivery entrance: Is there a loading dock or delivery facility?

Sensors/intrusion detection: Have sensors or alarms been installed on doors and windows?

Typical perimeter control types:

Lighting
o Bright enough to cover target areas
o Limits shadow areas
o Sufficient for operation of cameras, must be coordinated with
camera plan

Surveillance/Camera
o Narrow focus for critical areas
o Wide focus for large areas
o IR/low light in unlit areas
o Monitored and/or recorded

o Dummy cameras

Intrusion Detection
o Cut/break sensors
o Sound/audio sensors
o Motion sensors

Barriers
o Fixed barriers to prevent ramming
o Fixed barriers to slow speeds
o Deployable barriers to block access ways
o Fencing/Security landscaping
o Slows and deters
o Should not impede monitoring

Building Material security examples:


o High-security glass
o Steel/composite doors
o Steel telecommunications conduit
o Secure walls
o True floor to ceiling walls (wall continues above drop ceiling)
o Anchored framing material
o Solid walls/in wall barriers

Lock security examples:


o Available in varying grades
o Physical key locks
o Mechanical combination locks
o Electronic combination locks
o Biometric locks
o Magnetic locks
o Magnetic strip card locks
o Proximity card locks
o Multi-factor locks (e.g., card + pin)

6
Controls for human safety
o Visible and audible alarms, fire suppression, response plans/training,
emergency shutoffs

Controls to manage access


o Door locks (e.g., magnetic, card key, mechanical key, combination lock)
o Access point security (e.g., mantraps, limited ingress, alarmed emergency
egress)
o Multifactor access (e.g., key card + pin for room entry)

Internal monitoring
o Physical access control system/monitor (e.g., records key card use)
o Video surveillance/cameras
o Radio Frequency (RF) monitoring

7
8
9
Rooms in the facility where multiple computer assets are installed and operate.
Server rooms have similar security and environmental protections to wiring closets.
However, they may have higher human traffic, and it is critical that access point
security and access monitoring is in place. When server room space is shared with
other organizational units or even other businesses, it can be critical to employ rack
or equipment level locking.

Power, surge protection, and uninterruptible power supplies (UPS) must be tailored to the operating equipment and be of sufficient capacity. As equipment is modified or replaced, power concerns must be readdressed to ensure capacities are not exceeded. Human safety becomes an issue at the power levels found in most server rooms, so emergency shutoffs and non-conductive hooks/gloves become important. Non-conductive personal protective equipment or hooks can be used
to disengage equipment from a power source or safely disengage a human from a
live power source without endangering another human. Appropriate training may
also be necessary to ensure staff respond appropriately to electrical emergencies by
cutting power and/or safely resolving the emergency. For server rooms, appropriate
fire detection/suppression must be considered (e.g., sprinklers are inappropriate for electrical fires) based on the size of the room, typical human occupation, egress
routes, and risk of damage to equipment. Server rooms are typically maintained at a
higher level of physical security than the rest of the facility.

10
Media storage facilities may be onsite and offsite from the main facility. If onsite
with the main facility, backup copies should ideally be stored offsite and
fireproof/waterproof containers should be employed. Offsite storage should
duplicate critical media stored onsite and retain the ability to recover critical
information. Media typically contains sensitive historical data that likely still
requires protection. Some media types may support encryption while others do not.
If sensitive data is stored on unencrypted media, access control must be strictly limited and monitored. Some organizations may limit access to dedicated archivists.
Temperature and humidity should be consistent with media storage requirements
of the particular media in the facility. As media types evolve, this must be
continually reassessed but must be maintained consistently with the needs of all
stored media. Fire protection should
be in place at both room and container levels.

11
Evidence storage facilities or rooms are special-access areas with strictly limited
access and may be aggressively monitored. They will typically contain individual
lockers or secure containers for each investigation or investigator assigned to the
facility. This is to ensure evidence accountability and chain of custody are maintained at all times to prove evidence has not been modified or tampered with. Evidence is protected against damage or theft, and appropriate
environmental protections should be commensurate with
evidence types stored (e.g., paper, digital, media).

12
Restricted area security applies to any spaces or rooms within the facility where
highly sensitive work occurs or information is stored. This includes secure facilities
and classified workspaces. These spaces typically have extremely high access
control protections and logging of all access, and they may include audio
protections against eavesdropping such as white noise machines. They may also
include enhanced visual screening from exterior spaces or have no windows at all.
In the most extreme cases, they may include protection against the detection of
electromagnetic emissions from equipment.

13
Power
• Redundant power input from utilities
• Redundant transformers/power delivery
• Backup generators
• Battery backups
• Dual power infrastructure within data centers
• Backup sources must be tested/exercised
• Backup sources must be sized appropriately and upgraded when load increases

14
Telecommunications
• Multiple service provider inputs
• Redundant communication channels/mechanisms
• Redundancy on key equipment (eliminate single points of failure)

15
16
17
Fiber Distributed Data Interface (FDDI)

Asymmetric Digital Subscriber Line (ADSL)

Rate-Adaptive DSL (RADSL)
• The upstream transmission rate is automatically tuned based on the quality of the line and adjustments made on the modem.

Symmetric Digital Subscriber Line (SDSL)
• Uses the same rates for upstream and downstream transmissions.

Very High Bit Rate DSL (VDSL)
• Supports much higher transmission rates than other DSL technologies, such as 52 Mbps downstream and 2 Mbps upstream.

Cable Modem Data-Over-Cable Service Interface Specifications (DOCSIS)

Broadband over Powerline (BPL)
• BPL is the delivery of broadband over the existing low- and medium-voltage electric power distribution network. BPL speeds are comparable to DSL and cable modem speeds. BPL can be provided to homes using existing electrical connections and outlets.

Wi-Fi (Wireless LAN, IEEE 802.11x)

Bluetooth (Wireless Personal Area Network, IEEE 802.15)

WiMAX (Broadband Wireless Access, IEEE 802.16)

Satellite internet

SpaceX satellites

Cellular Network

Threats and countermeasures


The data-link layer prepares the packet that it receives from the network layer to be
transmitted as frames on the network. This layer ensures that the information that
it exchanges with its peers is error-free. If the data-link layer detects an error in a
frame, it will request that its peer resend that frame. The data-link layer converts
information from
the higher layers into bits in the format that is expected for each networking
technology, such as Ethernet, Token Ring, etc. Using hardware addresses, this layer
transmits frames to devices that are physically connected only.
There are two sub layers within the data-link layer:
• Media Access Control (MAC) Layer
• Logical Link Control (LLC) Layer

1
At this layer, a 48-bit (12-digit hexadecimal) address is defined that represents
the physical address “burned-in” or chemically etched into each Network
Interface Card (NIC). The first three
octets (MM:MM:MM or MM-MM-MM) are the ID number of the hardware
manufacturer. Manufacturer ID numbers are assigned by the Institute of
Electrical and Electronics Engineers (IEEE). The last three octets (SS:SS:SS or
SS-SS-SS) make up the serial number for the device that is assigned by the
manufacturer. The Ethernet and ATM technologies supported on devices use
the MAC-48 address space. IPv6 uses the EUI-64 address space.
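A small Python sketch splitting a hypothetical MAC address into its manufacturer (OUI) and vendor-assigned serial-number halves, as described above:

```python
mac = "00:1A:2B:3C:4D:5E"            # hypothetical MAC address
octets = mac.split(":")

oui = ":".join(octets[:3])           # first three octets: IEEE-assigned manufacturer ID
serial = ":".join(octets[3:])        # last three octets: vendor-assigned serial number

print("OUI:", oui)                   # 00:1A:2B
print("Serial:", serial)             # 3C:4D:5E
print("Address bits:", len(octets) * 8)   # 6 octets x 8 bits = 48 bits
```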

2
This layer is concerned with sending frames to the next link on a local area
network.

3
Address Resolution Protocol (ARP) is used at the MAC layer to provide for
direct communication between two devices within the same LAN segment.
Sending devices will resolve IP addresses to MAC addresses of target devices
to communicate.

4
Fibre Channel is a high-speed serial interface using either optical or electrical
connections (i.e., the physical layer) at data rates currently up to 2Gbits/s with
a growth path to 10Gbits/s. FCoE is a lightweight encapsulation protocol and
lacks the reliable data transport of the TCP layer. Therefore, FCoE must
operate on DCB-enabled Ethernet and use lossless traffic classes to prevent
Ethernet frame loss under congested network conditions. FCoE on a DCB
network mimics the lightweight nature of native FC protocols and media. It
does not incorporate TCP or even IP protocols. This means that FCoE is a layer
2 (non-routable) protocol just like FC. FCoE is only for short-haul
communication within a data center.

5
Multiprotocol Label Switching (MPLS) is a wide area networking protocol that
operates at both layer 2 and 3 and does “label switching.” The first device
does a routing lookup, just like before, but instead of finding a next-hop, it
finds the final destination router. And it finds a predetermined path from
“here” to that final router. The router applies a “label” based on this
information. Future routers use the label to route the traffic without needing
to perform any additional IP lookups. At the final destination router, the label
is removed, and the packet is delivered via normal IP routing. RFC 3031
defines the MPLS label switching architecture.

Why MPLS is used:


• Implementing Traffic Engineering which provides an ability to control
where and how the traffic is routed on your network.
• Implementing Multiple Service Networks, which provides the ability to
deliver data transport services as well as IP routing services across the
same packet-switched network architecture.
• Improving network resiliency with MPLS Fast Reroute, which provides fast failover when a link or node fails; this is also relevant for organizations choosing a Software Defined Wide Area Network (SD-WAN). We will cover this later inshAllah.

6
The Point-to-Point Protocol (PPP) provides a standard method for transporting
multiprotocol datagrams over point-to-point links. PPP comprises three main components:

1. A method for encapsulating multiprotocol datagrams


2. A Link Control Protocol (LCP) for establishing, configuring, and testing the
data-link connection
3. A family of Network Control Protocols (NCPs) for establishing and
configuring different network-layer protocols

7
Bridges are layer 2 devices that filter traffic between segments based on MAC
addresses. In addition, they amplify signals to facilitate physically
larger networks. A basic bridge filters out frames that are destined for another
segment. Bridges can connect LANs with unlike media types,
such as connecting an Unshielded Twisted Pair (UTP) segment with a segment
that uses coaxial cable. Bridges do not reformat frames, such
as converting a Token Ring frame to Ethernet. This means that only identical layer 2 architectures can be connected by a simple bridge (e.g., Ethernet to Ethernet).

Network administrators can use translator bridges to connect dissimilar layer 2 architectures, such as Ethernet to Token Ring. Other specialized
bridges filter outgoing traffic based on the destination MAC address. Bridges
do not prevent an intruder from intercepting traffic on the local
segment. A common type of bridge for many organizations is a wireless bridge
based upon one of the IEEE 802.11 standards. While wireless
bridges offer compelling efficiencies, they can pose devastating security issues to organizations by effectively making all traffic crossing the bridge visible to anyone connected to the LAN.

Switches: The most common type of switch used in the LAN today operates at layer 2. A switch establishes a collision domain per port, enabling more efficient
transmissions with CSMA/CD logic within Ethernet. Switches are the core
device used today to build LANs. There are many security features offered
within switches today, such as port blocking, port authentication, MAC filtering,
and virtual local area networks (VLAN), to name a few. Layer 3 switches are switch/router combinations capable of making “switching decisions” based on either the MAC or IP address.

8
Virtual local area networks (VLANs) allow network administrators to use
switches to create software-based LAN segments that can be defined based
on factors other than physical location. Devices that share a VLAN
communicate through switches, without being routed to other sub-networks,
which reduces overhead due to router latency (as routers become faster, this
is less of an advantage).

Furthermore, broadcasts are not forwarded outside of a VLAN, which reduces congestion due to broadcasts. Because VLANs are not restricted to the
physical location of devices, they help make networks easier to manage.
When a user or group of users changes their physical location, network
administrators can simply change the membership of ports within a VLAN.
Likewise, when additional devices must communicate with
members of a VLAN, it is easy to add new ports to a VLAN. VLANs can be
configured based on switch port, IP subnet, MAC address, and protocols.
It is important to remember that VLANs do not guarantee a network’s
security. At first glance, it may seem that traffic cannot be intercepted because communication within a VLAN is restricted to member devices.
However, there are attacks that allow a malicious user to see traffic from other
VLANs (so-called VLAN hopping). Therefore, a VLAN can be created so that
engineers can efficiently share confidential documents,
but the VLAN does not significantly protect the documents from unauthorized
access.

9
10
11
The network layer moves data between networks as packets by means of logical
addressing schemes.

1
In many cases, computer transmission methodology reflects some of the
norms that happen in a verbal conversation. Typically, if you want to have a
private conversation with an individual, you will take that person aside and
speak one-to-one.

A unicast is a one-to-one communication between hosts.


If you need to let a group within a crowd of people know about a matter, you can open your announcement with a relevant statement to capture that group's attention within the crowd.

A multicast is a one-to-many communication between hosts. If there is something that everyone within a crowd of people should know, such as the need to escape a fire, you wouldn’t walk up to each individual and tell them one at a time; you would shout it out for all to hear.

A broadcast is a one-to-all communication between hosts. A host can send a broadcast to everyone on its network or sub-network. Depending on the network topology, the broadcast could have anywhere from one to tens of
thousands of recipients. Like a person standing on a soapbox, this is a noisy
method of communication. Typically, only one or two destination hosts are
interested in the broadcast; the other recipients waste resources to process the
transmission. However, there are productive uses for broadcasts. Consider a
router that knows a device’s IP address but must determine the device’s media
access control (MAC) address. The router will broadcast an Address Resolution
Protocol (ARP) request asking for the device’s MAC address.

Multicasting was designed to deliver a stream to only interested hosts. Radio broadcasting is a typical analogy for multicasting. To select a specific radio
show, you tune a radio to the broadcasting station. Likewise, to receive a
desired multicast, you join the corresponding multicast group. Multicast agents
are used to route multicast traffic over networks and administer multicast
groups. Each network and sub-network that supports multicasting must have at
least one multicast agent. Hosts use Internet Group Management Protocol
(IGMP) to tell a local multicast agent that it wants to join a specific multicast
group. Multicast agents also route multicasts to local hosts that are members of
the multicast’s group and relay multicasts to neighboring agents. When a host
wants to leave a multicast group, it sends an IGMP message to a local multicast
agent. Multicasts do not use reliable sessions; therefore, the multicasts are
transmitted as best effort with no guarantee that datagrams are received.

2
The Internet Protocol (IP) is the dominant protocol that operates at the OSI
Network Layer 3. IP is responsible for addressing packets so that they can be
transmitted from the source to the destination hosts. Because it is an
unreliable protocol, it does not guarantee delivery. IP will subdivide the
message into fragments when they are too large for a packet. Hosts are
distinguished by the IP addresses. The address is expressed as four octets
separated by a dot (.), for example, 216.12.146.140. Each octet may have a
value between 0 and 255. However, 0 and 255 are not used for hosts. 255 is
used for broadcast addresses, and the 0’s meaning depends on the context in
which it is used. Each address is subdivided into two parts: the network number and the host. The network number, assigned by an external organization such as the Internet Corporation for Assigned Names and Numbers (ICANN), represents the organization’s network. The host represents the network interface within the network. The part of the address that represents the network number defines the network’s class. A Class A network uses the leftmost octet as the network number, Class B uses the leftmost two octets, etc. The part of the address that is not used as the network number is used to specify the host. For example, the address 216.12.146.140 represents a Class C network. Therefore, the network portion of the address is represented by 216.12.146, and the unique host address within the network block is represented by 140. The 127 Class A network address block is reserved for a computer’s loopback address. Usually, the address 127.0.0.1 is
used. The loopback address provides a mechanism for self-diagnosis and troubleshooting at the machine level. This mechanism allows a network
administrator to treat a local machine as if it were a remote machine, and ping
the network interface to establish whether it is operational.

To ease network administration, networks are typically subdivided into subnets. Because subnets cannot be distinguished with the addressing scheme
discussed so far, a separate mechanism, the subnet mask, is used to define the
part of the address that is used for the subnet. Bits in the subnet mask are 1
when the corresponding bits in the address are used for the subnet. The
remaining bits in the mask are 0. For example, if the leftmost three octets (24
bits) are used to distinguish subnets, the subnet mask is 11111111 11111111
11111111 00000000. A string of 32 1s and 0s is very unwieldy, so the mask is
usually converted to decimal notation: 255.255.255.0. Alternatively, the mask is
expressed with a slash (/) followed by the number of 1s in the mask. The above
mask would be written as /24.
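The standard library's ipaddress module makes the network/host split and mask notation concrete; a sketch using the example address above:

```python
import ipaddress

# The Class C example above, expressed in modern prefix notation.
network = ipaddress.ip_network("216.12.146.0/24")
host = ipaddress.ip_address("216.12.146.140")

print(network.netmask)             # 255.255.255.0
print(network.prefixlen)           # 24
print(host in network)             # True -- .140 is a host inside the /24
print(network.broadcast_address)   # 216.12.146.255
```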

3
IPv6 is a modernization of IPv4 that includes the following:

• A much larger address field: IPv6 addresses are 128 bits, which supports 2^128 hosts. Suffice it to say that we will not run out of addresses.

• Improved security: IPSec can be implemented in IPv6. This will help ensure
the integrity and confidentiality of IP packets and allow communicating
partners to authenticate with each other.

• Improved quality of service (QoS): This will help services obtain an


appropriate share of a network’s bandwidth.

The Internet Control Message Protocol (ICMP) is used for the exchange of control messages between hosts and gateways and is used by diagnostic tools such as ping and traceroute. ICMP can also be leveraged for malicious behavior, including man-in-the-middle and denial-of-service attacks.

IGMP is used to manage multicast groups, which are sets of hosts anywhere on a network that are listening for a transmission. Multicast agents administer multicast groups, and hosts send IGMP messages to local agents to join and leave groups.

Open Shortest Path First (OSPF) is an interior gateway routing protocol developed for IP networks, based on the shortest path first, or link-state, algorithm. A link-state algorithm keeps track of a total "cost" to calculate the most efficient way of moving information from a source to a destination. While a distance-vector protocol, such as Routing Information Protocol (RIP), basically uses the number of hops, or count of links between networks, to determine the best path, a link-state algorithm can determine the most efficient path by considering the connecting speed, congestion, and availability of each link, as well as the total hops. A path with a longer hop count could still be the best path if all other measurements are superior to those of a path with a shorter hop count.

Routers use link-state algorithms to send routing information to all nodes in an internetwork by calculating the shortest path to each node, based on a topography of the internetwork constructed by each node. Each router sends the portion of the routing table (which keeps track of routes to network destinations) that describes the state of its own links, and it also sends the complete routing structure (topography). The advantage of shortest path first algorithms is that their use results in smaller, more frequent updates everywhere. They converge quickly, thus preventing such problems as routing loops and count-to-infinity (when routers continuously increment the hop count to a network). The disadvantage of shortest path first algorithms is that they require substantial amounts of CPU power and memory.
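
To make the cost-based path selection concrete, here is a minimal Python sketch of a link-state style shortest-path computation (Dijkstra's algorithm) over a small, invented topology; the node names and link costs are purely illustrative and do not come from any OSPF implementation.

import heapq

# Hypothetical topology: link cost reflects speed/congestion, not just hop count.
links = {
    "A": {"B": 10, "C": 1, "E": 100},
    "B": {"A": 10, "D": 1},
    "C": {"A": 1, "D": 1, "E": 1},
    "D": {"B": 1, "C": 1, "E": 10},
    "E": {"A": 100, "C": 1, "D": 10},
}

def lowest_cost(source, destination):
    """Return the lowest total link cost from source to destination."""
    best = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == destination:
            return cost
        for neighbor, link_cost in links[node].items():
            new_cost = cost + link_cost
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return None

# The two-hop path A -> C -> E (cost 2) beats the one-hop path A -> E (cost 100),
# illustrating that a longer hop count can still be the better path.
print(lowest_cost("A", "E"))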

Routers route packets to other networks and are commonly referred to as gateways. They read the IP destination in received packets and, based on the router's view of the network, determine the next device on the network (the next hop) to send the packet to. If the destination address is not on a network that is directly connected to the router, it will send the packet to the gateway of last resort, another connected router, and rely on that router to establish a path. Routers can be used to interconnect different technologies and change the architecture. For example, connecting Token Ring and Ethernet networks to the same router would allow IP packets to be forwarded from an Ethernet network to a Token Ring network. Routers are most commonly used today to connect LANs to WANs. To build a network, you need switches for the LAN and a router to connect the LAN to the WAN. The most basic security that can be performed at Layer 3 on a router is an access control list (ACL), which can define permitted and denied source and destination addresses and ports or services.

Routers and firewalls are devices that enforce administrative security policies by
filtering incoming traffic based on a set of rules. While a firewall should always be
placed at internet gateways, there are also internal network considerations and
conditions where a firewall would be employed, such as network zoning.
Additionally, firewalls are also threat management appliances with a variety of
other security services embedded, such as proxy services and
intrusion prevention services (IPS) that seek to monitor and alert proactively at the
network perimeter.

The transport layer delivers end-to-end services by transmitting segments within a stream of data, and it controls data streams to relieve congestion through mechanisms that include quality of service (QoS).

The Transmission Control Protocol (TCP) provides connection-oriented data
management and reliable data transfer.

The User Datagram Protocol (UDP) provides connectionless data transfer without guaranteed delivery or error correction. UDP uses port numbers in a similar fashion to TCP. As a connectionless protocol, UDP is attractive for attackers because there is no connection state for routers or firewalls to observe and monitor.

Well-Known Ports: Ports 0–1023

• These ports are related to the common protocols used in the underlying management of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, such as the Domain Name System (DNS) and Simple Mail Transfer Protocol (SMTP).

Registered Ports: Ports 1024–49151

• These ports typically accompany non-system applications associated with vendors and developers.

Dynamic or Private Ports: Ports 49152–65535

• Whenever a client requests a service associated with a well-known or registered port, the client side of the connection uses a dynamic (ephemeral) port, and the service responds to that port.
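
As a small illustration of dynamic (ephemeral) ports, the following Python sketch asks the operating system for an unused ephemeral port by binding a socket to port 0; the exact number returned varies by operating system and its configured ephemeral range.

import socket

# Binding to port 0 asks the OS to assign a free dynamic (ephemeral) port.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
host, port = sock.getsockname()
print(f"OS-assigned ephemeral port: {port}")  # the range used differs between operating systems
sock.close()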

The session layer provides a logical persistent connection between peer hosts. The
session layer is responsible for creating, maintaining, and tearing down the session.

Technology and Implementation


Session layer protocols include the following:
• PAP – Password Authentication Protocol
• PPTP – Point-to-Point Tunneling Protocol
• RPC – Remote Procedure Call protocol

RPC allows objects (procedures) to be executed across hosts, with a client sending a set of instructions to an application residing on a different host on the network. It is important to note that RPC does not in fact provide any services on its own; instead, it provides a brokering service by providing (basic) authentication and a way to address the actual service.

The presentation layer ensures that communications exchanged between sending and receiving computer systems are in a common and discernible format.

To provide a reliable syntax, systems processing at the presentation layer will use American Standard Code for Information Interchange (ASCII) or Extended Binary Coded Decimal Interchange Code (EBCDIC) to translate from Unicode. In 2016, the W3C Internationalization Working Group estimated that 86 percent of all web pages sampled were using UTF-8 Unicode character encoding. It further states, "Not only are people using UTF-8 for their pages, but Unicode encodings are the basis of the Web itself. All browsers use Unicode internally, and convert all other encodings to Unicode for processing. As do all search engines. All modern operating systems also use Unicode internally. It has become part of the fabric of the Web."

Translation services are also necessary when considering that different computer
platforms (Macintosh and Windows personal computers) may exist within the same
network and could be sharing data. The presentation layer is needed to translate
the output from unlike systems to similar formats.

Data conversion, bit-order reversal, and compression are other functions of the presentation layer. As an example, MPEG-1 Audio Layer-3 (MP3) is a standard audio encoding and compression algorithm that commonly creates files with a bitrate of 128 kbit/s. The Waveform Audio File Format (WAVE) with a Linear PCM bitstream is another standard audio encoding, typically uncompressed and sampled at 44.1 kHz. Encoding and any compression for these formats are accomplished at the presentation layer. If a tool is used to convert one format into another, this is also accomplished at the presentation layer.

Encryption services such as TLS/SSL are managed below, above, and within the presentation layer. At times, the encoding capabilities that are resident at the presentation layer are inappropriately conflated with a specific set of cryptographic services. Abstract Syntax Notation One (ASN.1) is an ISO standard that addresses the issue of representing, encoding, transmitting, and decoding data structures. The transfer of data entities between two points of communication could appear nonsensical or encoded to a nonparticipating (eavesdropping) third party that wasn't aware of the standard being used in transmission.

The application layer supports or hosts the functions of applications that run on a system. All manner of human-supported interfaces, messaging, systems control, and processing occur at the application layer. While the application layer itself is not the application, it is where applications run.

DHCP is a client/server application designed to assign IP addresses from a pool of pre-allotted addresses on a DHCP server. Based upon the specifications in RFC 2131, the client transmits to the server on UDP port 67, and the server responds to the client on UDP port 68. The client sends out a broadcast with a DHCPDISCOVER packet. The server responds with a DHCPOFFER, giving the client an available address to use. The client responds with a DHCPREQUEST to use the offered address, and the server sends back a DHCPACK, allowing the client to bind the requested address to the network interface card (NIC). If a DHCP server doesn't respond in a predetermined time, the DHCP client self-assigns an IP address in the 169.254.x.x range of IPv4 link-local addresses defined in RFC 3927.
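
The four-step DORA exchange described above can be summarized in a short, self-contained Python sketch; this only simulates the message sequence (no real DHCP packets are built or sent), and the address pool shown is invented for illustration.

# Simplified simulation of the DHCP DORA exchange (no real network traffic).
available_addresses = ["192.168.1.50", "192.168.1.51"]  # hypothetical server pool
leases = {}

def dhcp_exchange(client_mac):
    print(f"{client_mac} -> broadcast : DHCPDISCOVER")
    offered = available_addresses.pop(0)          # server picks a free address
    print(f"server -> {client_mac} : DHCPOFFER {offered}")
    print(f"{client_mac} -> server : DHCPREQUEST {offered}")
    leases[client_mac] = offered                  # server records the lease
    print(f"server -> {client_mac} : DHCPACK {offered}")
    return offered                                # client binds this address to its NIC

dhcp_exchange("aa:bb:cc:dd:ee:ff")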

DNS resolves fully qualified domain names (FQDNs) to IP addresses and transmits data on port 53. According to RFC 1035, the local user, or client, queries an agent known as a resolver that is part of the client operating system. Network nodes can automatically register their name-to-address resolution in the DNS server's database. To resolve any external domain name, each DNS server in the world must hold a list of the root servers. Various extensions to DNS have been proposed to enhance its functionality and security, for instance, by introducing authentication using DNS Security Extensions (DNSSEC), multicasting, or service discovery.

DNS maintains a directory of zones that have a hierarchical superior known as the root, represented by a dot (".") appended to the end of a FQDN. The root servers (at the initial printing of this publication there were 13) carry references to what are known as top-level domains (TLDs). A few examples of TLDs are .com, .edu, and .gov. The TLDs contain references to subzones known as second-level domains; a few examples include amazon.com, microsoft.com, and ibm.com. The subzones can continue with third- or fourth-level domains that are typically tied to a specific service.

When a resolver queries a DNS server, it can do so with an iterative or a recursive lookup. In an iterative lookup, the DNS server answers with the best information it has (often a referral to another DNS server) and hands the remainder of the lookup back to the resolver. In a recursive lookup, the DNS server performs the full resolution on behalf of the resolver, querying the root servers and the servers they refer to until it can return the final answer for the FQDN.
The following records are necessary for the DNS server to be operational.
• Host (A)
• Start of Authority (SOA)
• Name Server (NS)
• Pointer (PTR)
• Mail Exchange (MX)
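
As a quick illustration of the resolver behavior described above, the sketch below uses Python's standard socket library to ask the operating system's resolver for the IPv4 address (A record) behind a name; the hostname is only an example, and the call requires a working DNS configuration.

import socket

# Ask the OS resolver (which in turn queries the configured DNS server)
# for the IPv4 address behind a fully qualified domain name.
fqdn = "www.example.com"  # hypothetical name used only for illustration
try:
    address = socket.gethostbyname(fqdn)
    print(f"{fqdn} resolves to {address}")
except socket.gaierror as error:
    print(f"Resolution failed: {error}")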

SNMP is designed to manage network infrastructure. The SNMP architecture consists of a management server (called the manager in SNMP terminology) and a client, usually installed on network devices such as routers and switches, called an agent. SNMP allows the manager to retrieve ("get") values of variables from the agent, as well as to "set" variables. Such variables could be routing tables or performance-monitoring information. Probably the most easily exploited SNMP vulnerability is a brute-force attack on default or easily guessable SNMP passwords, known as "community strings," often used to manage a remote device. Given the scale of SNMP v1 and v2 deployment, combined with a lack of clear direction from the security professional with regard to the risks of using SNMP without additional security enhancements to protect the community string, this is a realistic scenario and a potentially severe but easily mitigated risk. SNMP versions 1 and 2 provide only a weak degree of authentication and no transmission security. Authentication consists of an identifier, called a community string, by which a manager identifies itself to an agent (this string is configured into the agent) and which is sent with each command. As a result, community strings can be easily intercepted, and commands can be sniffed and potentially faked. Similarly, SNMP versions 1 and 2 do not support any form of encryption, so community strings are passed as cleartext. SNMP version 3 addresses these weaknesses with authentication and encryption.

These are the primary components of SNMP:


• Network management systems
• Management information base
• Managed devices
• Agents

LDAP uses a hierarchical tree structure for directory entries. Like X.500, LDAP entries
support the DN and RDN concepts. DN attributes are typically based on an entity’s
DNS name. Each entry in the database has a series of name/value pairs to denote
the various attributes associated with each entry.

Common attributes for an LDAP entry include the following:

• Distinguished Name (DN)
• Relative Distinguished Name (RDN)
• Common Name (CN)
• Domain Component (DC)
• Organizational Unit (OU)

LDAP operates in a client/server architecture. Clients make requests for access to LDAP servers, and the server responds to the client with the results of that request. LDAP typically runs over unsecured network connections using TCP port 389 for communications. If advanced security is required, version 3 of the LDAP protocol supports using TLS to encrypt communications.
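
For illustration, the sketch below performs a simple directory search using the third-party Python ldap3 library; the library choice, server name, bind DN, and base DN are assumptions made for this example and are not prescribed by the notes above.

from ldap3 import Server, Connection, ALL

# Hypothetical directory server and credentials; replace with real values.
server = Server("ldap.example.com", port=389, get_info=ALL)
conn = Connection(server, user="cn=reader,dc=example,dc=com",
                  password="secret", auto_bind=True)

# Search a subtree for person entries and return their common names.
conn.search(search_base="ou=People,dc=example,dc=com",
            search_filter="(objectClass=person)",
            attributes=["cn"])
for entry in conn.entries:
    print(entry.cn)
conn.unbind()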

Remote Meeting Technology
Several technologies and services exist that allow organizations and individuals to
meet “virtually.” These applications are typically web-based and either install
extensions in the browser or client software on the host system. These technologies
also typically allow “desktop sharing” as a feature. This feature may allow the
viewing of a user’s desktop. Some organizations use dedicated equipment such as
cameras, monitors and meeting rooms to host and participate in remote meetings.
These devices are often integrated with Voice over Internet Protocol (VoIP).

Remote meeting technology risks include the following:


• Some software may allow control of another system when the desktop is shared
• Vulnerabilities in the underlying operating system or firmware

Permanent Virtual Circuits (PVCs) and Switched Virtual Circuits (SVCs).

Virtual circuits provide a connection between endpoints over high-bandwidth, multiuser cable or fiber that behaves as if the circuit were a dedicated physical circuit. There are two types of virtual circuits, based on when the routes in the circuit are established. In a permanent virtual circuit (PVC), the carrier configures the circuit's routes when the circuit is purchased. Unless the carrier changes the routes to tune the network, respond to an outage, etc., the routes do not change. On the other hand, the routes of a switched virtual circuit (SVC) are configured dynamically by the routers each time the circuit is used.

Circuit-Switched Networks
Circuit-switched networks establish a dedicated circuit between endpoints. These circuits consist of dedicated switch connections. Neither endpoint starts communicating until the circuit is completely established. The endpoints have exclusive use of the circuit and its bandwidth. Carriers base the cost of using a circuit-switched network on the duration of the connection, which makes this type of network cost-effective only for a steady communication stream between the endpoints. Examples of circuit-switched networks are the plain old telephone service (POTS), Integrated Services Digital Network (ISDN), and Point-to-Point Protocol (PPP).

Packet-Switched Networks
Packet-switched networks do not use a dedicated connection between endpoints. Instead, data is divided into packets and transmitted on a shared network. Each packet contains meta-information so that it can be independently routed on the network. Networking devices will attempt to find the best path for each packet to its destination. Because network conditions could change while the partners are communicating, packets could take different paths as they traverse the network and arrive in any order. It is the responsibility of the destination endpoint to ensure that the received packets are in the correct order before sending them up the stack.

The modern virtualization of networks and the associated technology is called Network Function Virtualization (NFV), alternately referred to as virtual network functions. The objective of NFV is to decouple functions, such as firewall management, intrusion detection, network address translation, or name service resolution, away from specific hardware implementations and into software solutions. NFV's focus is to optimize distinct network services. With the focus on network service management rather than hardware deployment, NFV readily supports capacity management, since there is a more thorough utilization of resources. As service providers struggled to keep up with quick deployment needs and faster growth models, the slowness of hardware-based solutions was exposed. A number of these service providers came together under the European Telecommunications Standards Institute (ETSI) and worked to formalize NFV standards.

The following benefits are sought for utilizing NFV:


• Support transition from capital expenditure to operational expenditure (CapEx to
OpEx).
• Reduce wait time in time-to-market ventures.
• Increase service consumption agility.

Hardware reigned supreme in the networking world until the emergence of
software-defined networking (SDN), a category of technologies that separate the
network control plane from the forwarding plane to enable more automated
provisioning and policy-based management of network resources.
SDN's origins can be traced to a research collaboration between Stanford University
and the University of California at Berkeley that ultimately yielded
the OpenFlow protocol in the 2008 timeframe.

OpenFlow is only one of the first SDN canons, but it's a key component because it
started the networking software revolution. OpenFlow defined a programmable
network protocol that could help manage and direct traffic among routers and
switches no matter which vendor made the underlying router or switch. In the years
since its inception, SDN has evolved into a reputable networking technology offered
by key vendors including Cisco, VMware, Juniper, Pluribus and Big Switch. The Open
Networking Foundation develops myriad open-source SDN technologies as well.

"Datacenter SDN no longer attracts breathless hype and fevered expectations, but
the market is growing healthily, and its prospects remain robust," wrote Brad

7
Casemore, IDC research vice president, data center networks, in a recent
report, Worldwide Datacenter Software-Defined Networking Forecast, 2018–2022.
"Datacenter modernization, driven by the relentless pursuit of digital transformation
and characterized by the adoption of cloudlike infrastructure, will help to maintain
growth, as will opportunities to extend datacenter SDN overlays and fabrics to
multicloud application environments." SDN will be increasingly perceived as a form
of established, conventional networking, Casemore said.

IDC estimates that the worldwide data center SDN market will be worth more than
$12 billion in 2022, recording a CAGR of 18.5% during the 2017–2022 period. The
market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from
2016. In 2017, the physical network represented the largest segment of the
worldwide datacenter SDN market, accounting for revenue of nearly $2.2 billion, or
about 42% of the overall total revenue. In 2022, however, the physical network is
expected to claim about $3.65 billion in revenue, slightly less than the $3.68 billion
attributable to network virtualization overlays/SDN controller software but more
than the $3.18 billion for SDN applications.

“We're now at a point where SDN is better understood, where its use cases and value
propositions are familiar to most datacenter network buyers and where a growing
number of enterprises are finding that SDN offerings offer practical benefits,”
Casemore said. “With SDN growth and the shift toward software-based network
automation, the network is regaining lost ground and moving into better alignment
with a wave of new application workloads that are driving meaningful business
outcomes.”

What is SDN?
The idea of programmability is the basis for the most precise definition of what SDN
is: technology that separates the control plane management of network devices from
the underlying data plane that forwards network traffic. IDC broadens that definition
of SDN by stating: “Datacenter SDN architectures feature software-defined overlays
or controllers that are abstracted from the underlying network hardware, offering
intent-or policy-based management of the network as a whole. This results in a
datacenter network that is better aligned with the needs of application workloads
through automated (thereby faster) provisioning, programmatic network
management, pervasive application-oriented visibility, and where needed, direct
integration with cloud orchestration platforms.”

The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network. Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for Pluribus. "At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view," Capuano said. "Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise."

How does SDN support edge computing, IoT and remote access?
A variety of networking trends have played into the central idea of SDN. Distributing
computing power to remote sites, moving data center functions to the edge,
adopting cloud computing, and supporting Internet of Things environments – each of
these efforts can be made easier and more cost efficient via a properly configured
SDN environment. Typically in an SDN environment, customers can see all of their
devices and TCP flows, which means they can slice up the network from the data or
management plane to support a variety of applications and configurations, Capuano
said. So users can more easily segment an IoT application from the production world
if they want, for example.

Some SDN controllers have the smarts to see that the network is getting congested
and, in response, pump up bandwidth or processing to make sure remote and edge
components don’t suffer latency.

SDN technologies also help in distributed locations that have few IT personnel on
site, such as an enterprise branch office or service provider central office, said
Michael Bushong, vice president of enterprise and cloud marketing at Juniper
Networks. “Naturally these places require remote and centralized delivery of
connectivity, visibility and security. SDN solutions that centralize and abstract control
and automate workflows across many places in the network, and their devices,
improve operational reliability, speed and experience,” Bushong said.

How does SDN support intent-based networking?


Intent-based networking (IBN) has a variety of components, but basically is about
giving network administrators the ability to define what they want the network to do,
and having an automated network management platform create the desired state
and enforce policies to ensure what the business wants happens.

“If a key tenet of SDN is abstracted control over a fleet of infrastructure, then the
provisioning paradigm and dynamic control to regulate infrastructure state is

7
necessarily higher level,” Bushong said. “Policy is closer to declarative intent, moving
away from the minutia of individual device details and imperative and reactive
commands.” IDC says that intent-based networking “represents an evolution of SDN
to achieve even greater degrees of operational simplicity, automated intelligence,
and closed-loop functionality.”

For that reason, IBN represents a notable milestone on the journey toward
autonomous infrastructure that includes a self-driving network, which will function
much like the self-driving car, producing desired outcomes based on what network
operators and their organizations wish to accomplish, Casemore stated. “While the
self-driving car has been designed to deliver passengers safely to their destination
with minimal human intervention, the self-driving network, as part of autonomous
datacenter infrastructure, eventually will achieve similar outcomes in areas such as
network provisioning, management, and troubleshooting — delivering applications
and data, dynamically creating and altering network paths, and providing security
enforcement with minimal need for operator intervention,” Casemore stated.

While IBN technologies are relatively young, Gartner says by 2020, more than 1,000
large enterprises will use intent-based networking systems in production, up from
less than 15 in the second quarter of 2018.

How does SDN help customers with security?
SDN enables a variety of security benefits. A customer can split up a network
connection between an end user and the data center and have different security
settings for the various types of network traffic. A network could have one public-
facing, low security network that does not touch any sensitive information. Another
segment could have much more fine-grained remote access control with software-
based firewall and encryption policies on it, which allow sensitive data to traverse
over it.

“For example, if a customer has an IoT group it doesn’t feel is all that mature with
regards to security, via the SDN controller you can segment that group off away
from the critical high-value corporate traffic,” Capuano stated. “SDN users can roll
out security policies across the network from the data center to the edge and if you
do all of this on top of white boxes, deployments can be 30 – 60 percent cheaper
than traditional gear.”

The ability to look at a set of workloads and see if they match a given security policy is a key benefit of SDN, especially as data is distributed, said Thomas Scheibe, vice president of product management for Cisco's Nexus and ACI product lines. "The ability to deploy a whitelist security model like we do with ACI [Application Centric Infrastructure] that lets only specific entities access explicit resources across your network fabric is another key security element SDN enables," Scheibe said. A growing number of SDN platforms now support microsegmentation, according to Casemore. "In fact, micro-segmentation has developed as a notable use case for SDN. As SDN platforms are extended to support multicloud environments, they will be used to mitigate the inherent complexity of establishing and maintaining consistent network and security policies across hybrid IT landscapes," Casemore said.

What is SDN’s role in cloud computing?
SDN’s role in the move toward private cloud and hybrid cloud adoption seems a
natural. In fact, big SDN players such as Cisco, Juniper and VMware have all made
moves to tie together enterprise data center and cloud worlds. Cisco's ACI
Anywhere package would, for example, let policies configured through Cisco's SDN
APIC (Application Policy Infrastructure Controller) use native APIs offered by a
public-cloud provider to orchestrate changes within both the private and public
cloud environments, Cisco said. “As organizations look to scale their hybrid cloud
environments, it will be critical to leverage solutions that help improve productivity
and processes,” said Bob Laliberte, a senior analyst with Enterprise Strategy Group,
in a recent Network World article. “The ability to leverage the same solution, like
Cisco’s ACI, in your own private-cloud environment as well as across multiple public
clouds will enable organizations to successfully scale their cloud environments.”
Growth of public and private clouds and enterprises' embrace of distributed
multicloud application environments will have an ongoing and significant impact on
data center SDN, representing both a challenge and an opportunity for vendors,
said IDC's Casemore. "Agility is a key attribute of digital transformation, and enterprises will adopt architectures, infrastructures, and technologies that provide for agile deployment, provisioning, and ongoing operational management. In a datacenter networking context, the imperative of digital transformation drives adoption of extensive network automation, including SDN," Casemore said.

Where does SD-WAN fit in?
The software-defined wide area network (SD-WAN) is a natural application of SDN
that extends the technology over a WAN. While the SDN architecture is typically the
underpinning in a data center or campus, SD-WAN takes it a step further.

At its most basic, SD-WAN lets companies aggregate a variety of network connections – including MPLS, 4G LTE and DSL – into a branch or network edge location and have a software management platform that can turn up new sites, prioritize traffic and set security policies. SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process. SD-WAN lets networks route traffic based on centrally managed roles and rules, no matter what the entry and exit points of the traffic are, and with full security. For example, if a user in a branch office is working in Office365, SD-WAN can route their traffic directly to the closest cloud data center for that app, improving network responsiveness for the user and lowering bandwidth costs for the business.

"SD-WAN has been a promised technology for years, but in 2019 it will be a major driver in how networks are built and re-built," Anand Oswal, senior vice president of engineering in Cisco's Enterprise Networking Business, said in a Network World article earlier this year. It's a profoundly hot market with tons of players including Cisco, VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa. IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.

From its VNI study, Cisco says that globally, SD-WAN traffic was 9% of business IP
WAN traffic in 2017 and will be 29% of business IP WAN traffic by 2022. In addition,
SD-WAN traffic will grow five-fold from 2017 to 2022, a compound annual growth
rate of 37%.

What is in the future for SDN?
Going forward there are a couple of developments to watch for, Cisco's Scheibe
said. One involves the increased ability to automate the provisioning of data center
services, to make it easier to horizontally extend access to data. The second
expected development is the ability to more easily allow customers to work across
domains to monitor and track what is going on across the infrastructure. According
to Cisco’s most recent Global Cloud Index research, SDN might streamline traffic
flows within the data center such that traffic is routed more efficiently than it is
today. “In theory, SDN allows for traffic handling policies to follow virtual machines
and containers, so that those elements can be moved within a data center in order
to minimize traffic in response to bandwidth bottlenecks,” Cisco stated.

Most major hyperscale data centers already employ flat architectures and SDN and
storage management, and adoption of SDN/NFV or network function
virtualization (which virtualizes network elements) within large-scale enterprise
data centers has been rapid, Cisco stated. Over two-thirds of data centers will adopt
SDN either fully or in a partial deployment by 2021. As a portion of traffic within the
data center, SDN/NFV is already transporting 23%, growing to 44% by 2021.

Cisco found that there are also ways in which SDN/NFV can lead to an increase in both data center traffic and in general internet traffic:

Traffic engineering enabled by SDN/NFV supports very large data flows without compromising short-lived data flows, making it safe to transport large amounts of data to and from big data clusters. SDN will allow video bitrates to increase, because SDN can seek out the highest bandwidth available even midstream, instead of lowering the bitrate according to the available bandwidth for the duration of the video, as is done today. The future of SDN is shaped by operational needs and software innovation,
Bushong said. “While trends like cloud-native design certainly impact SDN
engineering, the operational side of SDN is likely to really benefit from the innovation
happening around machine learning and AI, and these innovations also benefit from
the accelerated pace of software and hardware innovation happening in prominent
public clouds.”

A content delivery network or content distribution network (CDN) is a large distributed system of servers deployed in multiple data centers across the internet. The goal of a CDN is to serve content to end users with high availability and high performance. A key capability of a CDN is capacity management, in that original content will not be easily exhausted by requests from a wide geographic field.

These are the two primary components of a CDN:

• Origin servers: House the original content in the form of web and rich media composed of audio and video files
• Edge servers: Hold cached copies of the original content and distribute media to regionally close clients to speed delivery

Firewalls
Firewalls will not be effective right out of the box. Firewall rules must be defined correctly so as not to inadvertently grant unauthorized access. Like all hosts on a network, administrators must install patches to the firewall and disable all unnecessary services. Also, firewalls offer limited protection against vulnerabilities caused by application flaws in server software on other hosts. For example, a firewall will not prevent an attacker from manipulating a database to disclose confidential information.

Firewalls filter traffic based on a rule set. Each rule instructs the firewall to block or
forward a packet based on one or more conditions. For each incoming packet, the
firewall will look through its rule set for a rule whose conditions apply to that packet
and block or forward the packet as specified in that rule. Below are two important
conditions used to determine if a packet should be filtered.

By address: Firewalls will often use the packet’s source or destination address, or
both, to determine if the packet should
be filtered.

By service: Packets can also be filtered by service. The firewall inspects the service the packet is using (if the packet is part of the Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), the service is the destination port number) to determine if the packet should be filtered. For example, firewalls will often have a rule to filter the Finger service to prevent an attacker from using it to gather information about a host. Filtering by address and by service are often combined in rules. If the engineering department wanted to grant anyone on the LAN access to its web server, a rule could be defined to forward packets whose destination address is the web server's and whose service is HTTP (TCP port 80).

Firewalls can change the source address of each outgoing (from trusted to untrusted network) packet to a different address. This has several applications, most notably to allow hosts with RFC 1918 addresses access to the internet by changing their private address to one that is routable on the internet. A private address is one that will not be forwarded by an internet router and, therefore, remote attacks using private internal addresses cannot be launched over the open internet. Anonymity is another reason to use network address translation (NAT). Many organizations do not want to advertise their IP addresses to an untrusted host and, thus, unnecessarily give information about the network. They would rather hide the entire network behind translated addresses. NAT also greatly extends the capabilities of organizations to continue using IPv4 address spaces.
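
A minimal Python sketch of the address-and-service filtering described above follows; the rule set, addresses, and default-deny behavior are invented for illustration and do not model any particular firewall product.

# Simplified static rule set: each rule matches on source, destination, and destination port.
RULES = [
    # (source prefix, destination address, destination port, action)
    ("10.1.0.", "10.1.2.80", 80, "forward"),   # LAN hosts may reach the web server over HTTP
    ("any",     "any",       79, "block"),     # block the Finger service everywhere
]

def filter_packet(src, dst, dport):
    """Return the action of the first matching rule, or 'block' by default."""
    for rule_src, rule_dst, rule_port, action in RULES:
        src_ok = rule_src == "any" or src.startswith(rule_src)
        dst_ok = rule_dst == "any" or dst == rule_dst
        if src_ok and dst_ok and rule_port == dport:
            return action
    return "block"  # default-deny posture

print(filter_packet("10.1.0.25", "10.1.2.80", 80))  # forward
print(filter_packet("192.0.2.7", "10.1.2.80", 79))  # block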

Static Packet Filtering
When a firewall uses static packet filtering, it examines each packet without regard
to the packet’s context in a session. Packets are examined against static criteria, for
example, blocking all packets with a port number of 79 (finger). Because of its
simplicity, static packet filtering requires very little overhead, but it has a significant
disadvantage. Static rules cannot be temporarily changed by the firewall to
accommodate legitimate traffic. If a protocol requires a port to be temporarily
opened, administrators must choose between permanently opening the port and
disallowing the protocol.

Stateful Inspection or Dynamic Packet Filtering
Stateful inspection examines each packet in the context of a session, which allows it to make dynamic adjustments to the rules to accommodate legitimate traffic and block malicious traffic that would appear benign to a static filter. For example, if a user sends a SYN request to a server and receives a SYN-ACK back from the server, the next appropriate frame to send is an ACK. If the user sends another SYN request instead, the stateful inspection device will see and reject this "inappropriate" packet.
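
The handshake tracking idea can be sketched in a few lines of Python; this is a toy state machine for the TCP three-way handshake of a single connection (with invented state names), not a real stateful inspection engine.

# Toy state tracking for one TCP handshake as a stateful filter might see it.
EXPECTED = {"NEW": "SYN", "SYN_SENT": "SYN-ACK", "SYN_RECEIVED": "ACK"}
NEXT_STATE = {"SYN": "SYN_SENT", "SYN-ACK": "SYN_RECEIVED", "ACK": "ESTABLISHED"}

def inspect(state, flag):
    """Allow a packet only if its flag is the one expected in the current state."""
    if state in EXPECTED and flag == EXPECTED[state]:
        return NEXT_STATE[flag], "allow"
    return state, "reject"  # e.g., a second SYN after the SYN-ACK is rejected

state = "NEW"
for flag in ["SYN", "SYN-ACK", "SYN", "ACK"]:
    state, verdict = inspect(state, flag)
    print(f"{flag:8s} -> {verdict} (state: {state})")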

Next-generation firewalls (NGFWs) are deep-packet inspection firewalls that move beyond port/protocol inspection and blocking to add application-level inspection and intrusion prevention, along with malware awareness and prevention. NGFWs are not the same as intrusion prevention system (IPS) stand-alone devices or even firewalls that simply integrate IPS capabilities. Included in what is called the third generation of firewall technology are in-line deep inspection of traffic, application programming interface (API) gateways, and database activity monitoring.

Intrusion Detection and Prevention Systems (IDS/IPS)
Intrusion detection systems (IDSs) monitor activity and send alerts when they detect suspicious traffic. There are two broad classifications of IDS/IPS:

• Host-based IDS/IPS: Monitors activity on servers and workstations.
• Network-based IDS/IPS: Monitors network activity. Network IDS services are typically stand-alone devices or at least independent blades within network chassis. Network IDS logs would be accessed through a separate management console that will also generate alarms and alerts.

Currently, there are two approaches to the deployment and use of IDSs. An appliance on the network can monitor traffic for attacks based on a set of signatures (analogous to antivirus software), or the appliance can watch the network's traffic for a while, learn what traffic patterns are normal, and send an alert when it detects an anomaly. Of course, the IDS can be deployed using a hybrid of the two approaches as well.

Independent of the approach, how an organization uses an IDS determines whether the tool is effective. Despite its name, the IDS should not be relied upon to stop intrusions, because IDS solutions are not designed to take preventative actions as part of their response. Instead, an IDS should send an alert when it detects interesting, abnormal traffic that could be a prelude to an attack.

For example, someone in the engineering department trying to access payroll information over the network at 3 a.m. is probably very interesting and not normal. Or, perhaps a sudden rise in network utilization should be noted.

Intrusion systems use several techniques to determine whether an attack is underway:

• Signature or pattern-matching systems examine the available information (logs or network traffic) to determine if it matches a known attack.
• Protocol-anomaly-based systems examine network traffic to determine if what they see conforms to the defined standard for that protocol; for example, as it is defined in a Request for Comments (RFC).
• Statistical-anomaly-based systems establish a baseline of normal traffic patterns over time and detect any deviations from that baseline.

Some systems also use heuristics to evaluate the intended behavior of network traffic to determine whether it is intended to be malicious or not. Most modern systems combine two or more of these techniques to provide a more accurate analysis before deciding whether they see an attack or not.
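
To illustrate the statistical-anomaly approach listed above, here is a minimal Python sketch that flags traffic samples deviating strongly from a learned baseline; the baseline values and the three-standard-deviation threshold are arbitrary choices for illustration.

import statistics

# Hypothetical baseline of normal requests-per-minute observations.
baseline = [98, 102, 101, 97, 103, 99, 100, 105, 96, 104]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def is_anomalous(observation, threshold=3.0):
    """Alert when an observation deviates more than `threshold` standard deviations."""
    return abs(observation - mean) > threshold * stdev

for sample in [101, 99, 240]:  # the last value simulates a sudden utilization spike
    print(sample, "anomalous:", is_anomalous(sample))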

In most cases, there will continue to be problems associated with false positives as well as false negatives. False positives occur when the IDS or IPS identifies something as an attack, but it is in fact normal traffic. False negatives occur when the IPS or IDS fails to interpret something as an attack when it should have. In these cases, intrusion systems must be carefully "tuned" to ensure that these are kept to a minimum.

An IDS requires frequent attention and the response of a human who is knowledgeable enough about the system and the types of normal activity to make an educated judgment about the relevance and significance of an event. Alerts need to be investigated to determine whether they represent an actual event or are simply background noise.

Whitelisting/blacklisting: A whitelist is a list of email addresses and/or internet addresses that someone knows as "good" senders. A blacklist is a corresponding list of known "bad" senders. An email from an unrecognized sender is neither on the whitelist nor the blacklist and, therefore, is treated differently. Greylisting works by telling the sending email server to resend the message sometime soon. Many spammers set their software to blindly transmit their spam email, and the software does not understand the "resend soon" message. Thus, the spam would never actually be delivered.
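
A toy Python sketch of this three-way decision is shown below; the addresses and the simple accept/reject/greylist outcomes are invented purely to illustrate the logic.

# Hypothetical sender lists used only for illustration.
WHITELIST = {"partner@example.com"}
BLACKLIST = {"spammer@example.net"}

def classify_sender(sender):
    """Known-good senders are accepted, known-bad rejected, unknown senders greylisted."""
    if sender in WHITELIST:
        return "accept"
    if sender in BLACKLIST:
        return "reject"
    return "greylist"  # ask the sending server to retry later

for sender in ["partner@example.com", "spammer@example.net", "new@example.org"]:
    print(sender, "->", classify_sender(sender))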

Port Address Translation (PAT)
An extension to network address translation (NAT), which translates all addresses to
one externally routable IP address, is to use port address translation (PAT) to
translate the source port number for an external service. The port translation keeps
track of multiple sessions that are accessing the internet.
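
The sketch below shows, in Python, how a PAT table might map many internal address/port pairs onto a single public address with distinct translated source ports; the addresses and the simple port-allocation scheme are invented for illustration.

# Toy PAT table: many private (address, port) pairs share one public address.
PUBLIC_IP = "203.0.113.10"   # hypothetical externally routable address
translation_table = {}        # (private_ip, private_port) -> translated source port
next_port = 40000             # arbitrary starting point for translated ports

def translate(private_ip, private_port):
    """Assign (or reuse) a translated source port for an outgoing session."""
    global next_port
    key = (private_ip, private_port)
    if key not in translation_table:
        translation_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, translation_table[key]

print(translate("192.168.1.10", 51000))  # ('203.0.113.10', 40000)
print(translate("192.168.1.11", 51000))  # ('203.0.113.10', 40001)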

Proxy Firewall
A proxy firewall mediates communications between untrusted endpoints
(servers/hosts/clients) and trusted endpoints (servers/hosts/clients). From an
internal perspective, a proxy may forward traffic from known, internal client
machines to untrusted hosts on the internet, creating the illusion for the untrusted
host that the traffic originated from the proxy firewall, thus, hiding the trusted
internal client from potential attackers. To the user, it appears that they are
communicating directly with the untrusted server. Proxy servers are often placed at
internet gateways to hide the internal network behind one IP address and to
prevent direct communication between internal and external hosts.

Endpoint Security
Workstations should be hardened, and users should be using limited access
accounts whenever possible in accordance with the concept of “least privilege.”

Workstations should have the following:


• Up to date antivirus and anti-malware software
• A configured and operational host-based firewall
• A hardened configuration with unneeded services disabled
• A patched and maintained operating system

While workstations are clearly what most people will associate with endpoint
attacks, the landscape is changing. Mobile devices, such as smart phones, tablets
etc., are beginning to make up more and more of the average organization’s
endpoints. With this additional diversity of devices, there becomes a requirement
for the security architect to also increase the diversity and agility of an
organization’s endpoint defenses.

For mobile devices such as smart phones and tablets, consider the following:

• Encryption for the whole device, or if not possible, then at least encryption for
sensitive information held on the device
• Device virtualization/sandboxing
• Remote management capabilities including the following:
o Remote wipe
o Remote geo locate
o Remote update
o Remote operation
• User policies and agreements that ensure an organization can manage the device
or seize it for legal hold

Voice over Internet Protocol (VoIP) is a technology that allows you to make voice
calls using a broadband internet connection instead of a regular (or analog) phone
line. VoIP is simply the transmission of voice traffic over IP-based networks. VoIP is
also the foundation for more advanced unified communications applications such
as web and video conferencing. VoIP systems are based on the use of the Session
Initiation Protocol (SIP), which is the recognized standard. Any SIP compatible
device can talk to any other. In all VoIP systems, your voice is converted into packets
of data and then transmitted to the recipient over the internet and decoded back
into your voice at the other end. To make it quicker, these packets are compressed
before transmission with certain codecs, almost like zipping a file on the fly. There
are many codecs with diverse ways of achieving compression and managing
bitrates, thus, each codec has its own bandwidth requirements and provides
different voice quality for VoIP calls.

VoIP systems employ session control and signaling protocols to control the signaling, set-up, and tear-down of calls. A codec is software that encodes audio signals into digital frames and vice versa. Codecs are characterized by different sampling rates and resolutions. Different codecs employ different compression methods and algorithms, using different bandwidth and computational requirements.

Session Initiation Protocol (SIP)
As its name implies, SIP is designed to manage multimedia connections. SIP is
designed to support digest authentication structured by realms, like HTTP (basic
username/password authentication has been removed from the protocol as of RFC
3261). In addition, SIP provides integrity protection through MD5 hash functions.
SIP supports a variety of encryption mechanisms, such as TLS. Privacy extensions to
SIP, including encryption and caller ID suppression, have been defined in extensions
to the original Session Initiation Protocol (RFC 3325).

VoIP Problems
Packet loss: A technique called packet loss concealment (PLC) is used in VoIP
communications to mask the effect of dropped packets. There are several
techniques that may be used by different implementations:

Zero substitution is the simplest PLC technique that requires the least
computational resources. These simple algorithms generally provide the lowest
quality sound when a considerable number of packets are discarded.

Filling empty spaces with artificially generated, substitute sound. The more
advanced algorithms interpolate the gaps, producing the best sound quality at the
cost of using extra computational resources. The best implementation can tolerate
up to 20 percent of packets lost without significant degradation of voice quality.
While some PLC techniques work better than others, no masking technique can
compensate for a significant loss of packets. When bursts of packets are lost due to
network congestion, noticeable degradation of call quality occurs.

In VoIP, packets can be discarded for many reasons, including network congestion, line errors, and late arrival. The network architect and security practitioner need to work together to select the right PLC technique that best matches the characteristics of an environment, as well as to ensure that they implement measures to reduce packet loss on the network.

Jitter: Unlike network delay, jitter does not occur because of the packet delay but
because of a variation of packet timing. As VoIP endpoints try to compensate for jitter
by increasing the size of the packet buffer, jitter causes delays in the conversation. If
the variation becomes too high and exceeds 150ms, callers notice the delay and
often revert to a walkie-talkie style of conversation. Reducing the delays on the
network helps keep the buffer under 150ms even if a significant variation is present.
While the reduced delay does not necessarily remove the variation, it still effectively
reduces the degree to which the effect is pronounced and brings it to the point
where it’s unnoticeable by the callers. Prioritizing VoIP traffic and implementing
bandwidth shaping also helps reduce the variation of packet delay. At the endpoint, it
is essential to optimize jitter buffering. While greater buffers reduce and remove the
jitter, anything over 150ms noticeably affects the perceived quality of the
conversation. Adaptive algorithms to control buffer size depending on the current
network conditions are often quite effective. Fiddling with packet size (payload) or
using a different codec often helps control jitter as well.

Sequence errors: Routed networks send packets along the best possible path at any given moment. That means packets will, on occasion, arrive in a different order than transmitted. This will cause a degradation in call quality.

Peer-to-Peer (P2P) Applications and Protocols
Peer-to-peer (P2P) applications are often designed to open an uncontrolled channel
through network boundaries (normally through tunneling). Therefore, they provide
a way for dangerous content, such as botnets, spyware applications, and viruses, to
enter an otherwise protected network. Because P2P networks can be established
and managed using a series of multiple, overlapping master and slave nodes, they
can be very difficult to fully detect and shut down. If one master node is detected
and shutdown, the “bot herder” who controls the P2P botnet can make one of the
slave nodes a master and use that as a redundant staging point, allowing for botnet
operations to continue unimpeded.

Instant Messaging
Instant messaging systems can generally be categorized in three classes:
• P2P networks
• Brokered communication
• Server-oriented networks

All these classes will support basic "chat" services on a one-to-one basis and frequently on a many-to-many basis. Most instant messaging applications do offer additional services beyond their text messaging capability, for instance, screen sharing, remote control, exchange of files, and voice and video conversation. Some applications even allow command scripting. Instant messaging and chat are increasingly considered significant business applications used for office communications, customer support, and "presence" applications. Instant message capabilities will frequently be deployed with a bundle of other IP-based services such as VoIP and video conferencing support.

Internet Relay Chat (IRC)
Internet Relay Chat (IRC) is a client/server-based network and remains a common method of communicating today. IRC is unencrypted and, therefore, an easy target for sniffing attacks. The basic architecture of IRC, founded on trust among servers, enables special forms of denial-of-service attacks. For instance, a malicious user can hijack a channel while a server or group of servers has been disconnected from the rest (a net split). IRC is also a common platform for social engineering attacks aimed at inexperienced or technically unskilled users. While there are many business and personal benefits and efficiencies to be gained from adopting instant messaging/chat/IRC technologies, there are also many risks.

Authenticity: User identification can be easily faked in instant messaging and chat applications by the following:

• Choosing a misleading identity upon registration or changing one's nickname while online.
• Manipulating the directory service if the application requires one.
• Manipulating either the attacker's or the target's client to send or display a wrong identity.
• The continued growth of social-networking services and sites like Facebook, Vine, KiK, Twitter, LinkedIn and others presents ample opportunity to create false identities and to try to dupe others for criminal purposes.

Remote-Access Services
The services described under this section are present in many UNIX operations and,
when combined with Network File System (NFS) and Network Information Service
(NIS), provide the user with seamless remote working capabilities. However, they
also form a risky combination if not configured and managed properly.

These services include the following:


• TELNET
• rlogin
• X Window System (X11)
• Remote copy (RCP)
• Remote shell (RSH)
• Secure shell (SSH)

Conceptually, because they are built on mutual trust, they can be misused to obtain
access and to horizontally and vertically escalate privileges in an attack. Their
authentication and transmission capabilities are insecure by design; therefore, they
have to be retrofitted (as X11) or replaced altogether (TELNET and rlogin by SSH).

TELNET is a command line protocol designed to give command line access to another
host. Although implementations for Windows exist, TELNET’s original domain was the
UNIX server world, and in fact, a TELNET server is standard equipment for any UNIX
server. (Whether it should be enabled is another question entirely, but in small LAN
environments, TELNET is still widely used.)

TELNET:
• Offers little security, and indeed, its use poses serious security risks in untrusted
environments.
• Is limited to username/password authentication.
• Does not offer encryption.

Once an attacker has obtained even a low-level user’s credentials, they have a trivial
path toward privilege escalation because they can transfer data to and from a
machine, as well as execute commands. As the TELNET server is running under
system privileges, it is an attractive target of attack in itself; exploits in TELNET
servers pave the way to system privileges for an attacker. Therefore, it is
recommended that security practitioners discontinue the use of TELNET over the
internet and on internet-facing machines. In fact, the standard hardening procedure
for any internet-facing server should include disabling its TELNET service (which
under UNIX systems normally runs under the name telnetd) and using SSHv2 for
remote administration and management where required.
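
As an illustration only, the following Python sketch checks whether TCP port 23
(TELNET) is still reachable on a host, for example as a quick verification step after
hardening. The host name is a placeholder, and this kind of check should only be run
against systems you are authorized to test.

# Minimal sketch: check whether TELNET (TCP 23) is still reachable on a host,
# e.g. as a quick post-hardening verification. "host" is a placeholder value.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "server.example.org"  # hypothetical internet-facing server
if port_open(host, 23):
    print("WARNING: TELNET (23) is reachable; disable telnetd and use SSHv2.")
else:
    print("TELNET (23) not reachable; use SSHv2 (22) for remote administration.")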

Remote Log-in (rlogin), Remote Shell (rsh), Remote Copy (rcp)
In its most generic form, rlogin is a protocol used for granting remote access to a
machine, normally a UNIX server. Similarly, rsh grants direct remote command
execution, while rcp copies data from or to a remote machine. If an rlogin daemon
(rlogind) is running on a machine, rlogin access can be granted in two ways:

• Using a central configuration file


• Through a user configuration

Through the latter, a user may grant access that was not permitted by the system
administrator. The same mechanism applies to rsh and rcp, although they rely on a
different daemon (rshd). Authentication can be considered host/IP address
based. Although rlogin grants access based on user ID, it is not verified; i.e., the ID a
remote client claims to possess is taken for granted if the request comes from a
trusted host. The rlogin protocol transmits data without encryption and is hence
subject to eavesdropping and interception.

The rlogin protocol is of limited value—its main benefit can be considered its main
drawback: remote access without supplying a password. It should only be used in
trusted networks, if at all. A more secure replacement is available in the form of
SSHv2 for rlogin, rsh, and rcp.

Screen Scraper
A screen scraper is a program that can extract data from output on a display
intended for a human. Screen scrapers are used in a legitimate fashion when older
technologies are unable to interface with modern ones. In a nefarious sense, this
technology can also be used to capture images from a user’s computer such as PIN
pad sequences at a banking website when implemented by a virus or malware.

Virtual Applications and Desktops
Virtual Network Terminal Services
Virtual terminal service is a tool frequently used for remote access to server
resources. Virtual terminal services allow the desktop environment for a server to
be exported to a remote workstation. This allows users at the remote workstation
to execute desktop commands as though they were sitting at the server terminal
interface in person. The advantage of terminal services such as those provided by
Citrix, Microsoft, or public domain virtual network computing (VNC) services is that
they allow for complex administrative commands to be executed using the native
interface of the server, rather than a command-line interface, which might be
available through SSHv2 or TELNET. Terminal services also allow the
authentication and authorization services integrated into the server to be leveraged
for remote users, along with all of the server’s logging and auditing features.

Virtual Private Network (VPN)
A virtual private network (VPN) is a point-to-point connection that extends a private
network across a public network. The most common security definition is an
encrypted tunnel between two hosts, although a VPN does not have to be encrypted. A tunnel is the
encapsulation of one protocol inside another. Remote users employ VPNs to access
their organization’s network securely.
Depending on the VPN’s implementation, they may have most of the same
resources available to them as if they were physically at the office. As an alternative
to expensive dedicated point-to-point connections, organizations use gateway-to-
gateway VPNs to securely transmit information over the internet between sites or
even with business partners.

Telecommuting
Common issues such as visitor control, physical security, and network control are
almost impossible to address with teleworkers. Strong VPN connections between
the teleworker and the organization need to be established, and full device
encryption should be the norm for protecting sensitive information.
If the user works in public places or a home office, the following should also be
considered:
• Is the user trained to use secure connectivity software and methods such as a
VPN?
• Does the user know which information is sensitive or valuable and why someone
might wish to steal or modify it?
• Is the user’s physical location appropriately secure for the type of work and type
of information they are using?
• Who else has access to the area? While a child may seem trusted, the child’s
friends may not be.

Tunneling
Point-to-Point Tunneling Protocol (PPTP)
Point-to-Point Tunneling Protocol (PPTP) is
a tunnel protocol that runs over other protocols. PPTP relies on Generic Routing
Encapsulation (GRE) to build the tunnel between the endpoints. The security
architect and practitioner both need to consider known weaknesses, such as the
issues identified with PPTP, when planning for the deployment and use of remote
access technologies. PPTP is based on Point-to-Point Protocol (PPP), so it does offer
authentication by way of Password Authentication Protocol (PAP), Challenge-
Handshake Authentication Protocol (CHAP), or Extensible Authentication Protocol
(EAP).

IP security (IPSec) is a suite of protocols for communicating securely with IP by
providing mechanisms for authentication and encryption. Standard IPSec only
authenticates hosts with each other. If an organization requires users to
authenticate, they must employ a nonstandard proprietary IPSec implementation,
or use IPSec over Layer 2 Tunneling Protocol (L2TP).

The latter approach uses L2TP to authenticate the users and encapsulate IPSec
packets within an L2TP tunnel. Because IPSec interprets the change of IP address
within packet headers as an attack, NAT does not work well with IPSec. To resolve
the incompatibility of the two protocols, NAT Traversal (NAT-T) encapsulates IPSec
within UDP port 4500 (see RFC 3948 for details; please download and read this
RFC as well).

Authentication Header (AH)
The Authentication Header (AH) is used to prove the
identity of the origin node and ensure that the transmitted data has not been
tampered with. Before each packet (headers + data) is transmitted, a hash value of
the packet’s contents (except for the fields that are expected to change when the
packet is routed) based on a shared secret is inserted in the last field of the AH. The
endpoints negotiate which hashing algorithm to use and the shared secret when
they establish their security association. To help thwart replay attacks (when a
legitimate session is retransmitted to gain unauthorized access), each packet that is
transmitted during a security association has a sequence number that is stored in
the AH. In transport mode, the AH is inserted between the packet’s IP and TCP
header. The AH helps ensure authenticity and integrity, not confidentiality.
Encryption is implemented through the use of encapsulating security payload (ESP).
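
The following Python sketch is a simplified illustration of the AH idea only, not the
actual AH wire format: the sender computes a keyed hash over the packet contents plus
a per-SA sequence number, and the receiver verifies it and rejects replayed sequence
numbers. HMAC-SHA256 and the shared secret shown are assumptions for illustration;
real endpoints negotiate the algorithm and keys in the security association.

# Simplified illustration of the AH concept (not the real AH wire format).
import hmac, hashlib

shared_secret = b"negotiated-during-SA-setup"   # placeholder value

def compute_icv(secret, seq_num, packet_bytes):
    # Keyed hash over the sequence number and the (immutable) packet contents.
    msg = seq_num.to_bytes(4, "big") + packet_bytes
    return hmac.new(secret, msg, hashlib.sha256).digest()

def verify(secret, seq_num, packet_bytes, received_icv, seen):
    if seq_num in seen:                           # replay protection
        return False
    expected = compute_icv(secret, seq_num, packet_bytes)
    if hmac.compare_digest(expected, received_icv):
        seen.add(seq_num)
        return True
    return False

seen = set()
packet = b"IP header (mutable fields zeroed) + TCP header + data"
icv = compute_icv(shared_secret, 1, packet)
print(verify(shared_secret, 1, packet, icv, seen))   # True (authentic, fresh)
print(verify(shared_secret, 1, packet, icv, seen))   # False (replayed packet)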

Encapsulating Security Payload (ESP)


The ESP encrypts IP packets and ensures their integrity. ESP contains four sections:

• ESP header: Contains information showing which security association to use and
the packet sequence number. Like the AH, the ESP sequences every packet to
thwart replay attacks.
• ESP payload: The payload contains the encrypted part of the packet. If the
encryption algorithm requires an initialization vector (IV), it is included with the
payload. The endpoints negotiate which encryption to use when the security
association is established. Because packets must be encrypted with as little
overhead as possible, ESP typically uses a symmetric encryption algorithm.
• ESP trailer: May include padding (filler bytes) if required by the encryption
algorithm or to align fields.
• Authentication: If authentication is used, this field contains the integrity check
value (hash) of the ESP packet. As with the AH, the authentication algorithm is
negotiated when the endpoints establish their security association.

Security Associations (SAs)
A security association (SA) defines the mechanisms that an endpoint will use to
communicate with its partner. All SAs cover transmissions in one direction only. A
second SA must be defined for two-way communication. Mechanisms that are
defined in the SA include the encryption and authentication algorithms and
whether to use the AH or ESP protocol. Deferring the mechanisms to the SA, as
opposed to specifying them in the protocol, allows the communicating partners to
use the appropriate mechanisms based on situational risk.

Transport Mode and Tunnel Mode
Endpoints communicate with IPSec using either transport or tunnel mode. In
transport mode, the IP payload is protected. This mode is mostly used for end-to-
end protection, for example, between client and server.

In tunnel mode, the IP payload and its IP header are protected. The entire protected
IP packet becomes a payload of a new IP packet and header. Tunnel mode is often
used between networks, such as with firewall-to-firewall VPNs.

Internet Key Exchange (IKE)
Internet key exchange (IKE) allows two devices to “exchange” symmetric keys for
use in encrypting traffic with AH or ESP. There are two ways to “exchange” keys:

1. Use a Diffie-Hellman (DH) style negotiation


2. Use public key certificates
DH would be used between devices like routers. Public key certificates would be
used in an end user VPN connection.
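
The following Python sketch illustrates the Diffie-Hellman idea behind IKE key
“exchange”: each peer combines its own private value with the other peer’s public
value and arrives at the same shared secret without ever transmitting it. The
parameters are toy values for illustration only; real IKE uses standardized, much
larger groups and authenticates the exchange (for example with certificates) to
defeat man-in-the-middle attacks.

# Toy Diffie-Hellman exchange illustrating how two peers agree on a shared
# secret without transmitting it. Do not use these parameters in practice.
import secrets, hashlib

p = 0xFFFFFFFFFFFFFFC5   # toy prime modulus, far too small for real use
g = 2                    # generator

a = secrets.randbelow(p - 2) + 1          # router A's private value
b = secrets.randbelow(p - 2) + 1          # router B's private value

A = pow(g, a, p)                          # A's public value, sent to B
B = pow(g, b, p)                          # B's public value, sent to A

secret_a = pow(B, a, p)                   # computed independently by A
secret_b = pow(A, b, p)                   # computed independently by B
assert secret_a == secret_b               # both sides hold the same secret

# Derive symmetric keying material for AH/ESP from the shared secret.
key = hashlib.sha256(secret_a.to_bytes(8, "big")).digest()
print(key.hex())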

SSL VPNs are another approach to remote access. Instead of building a VPN around
IPSec at the network layer, SSL VPNs leverage SSL/TLS to create a tunnel back
to the home office. SSL (Secure Socket Layer) was a session encryption tool
originally developed by Netscape; TLS (Transport Layer Security) is the open
standard IETF successor to SSL, with TLS 1.0 derived from SSL 3.0. SSL and TLS
use public key certificates to authenticate each endpoint through mutual
authentication.

Remote users employ a web browser to access applications that are in the
organization’s network. Even though users employ a web browser, SSL VPNs are not
restricted to applications that use HTTP. With the aid of plug-ins, such as Java, users
can have access to back-end databases, and other non-web- based applications. SSL
VPNs have several advantages over IPSec. They are easier to deploy on client
workstations than IPSec because they require a web browser only, and almost all
networks permit outgoing HTTP. SSL VPNs can be operated through a proxy server.
In addition, applications can restrict users’ access based on criteria, such as the
network the user is on, which is useful for building extranets with several
organizations.

Identity and access management (IAM) is core to maintaining confidentiality,
integrity, and availability of assets and resources that are critical to business survival
and function. Central to maintaining protection of business-critical assets is the
ability to name, associate, and apply suitable identity and access control
methodologies and technologies that meet specific business needs.

Information and the administration of information is key to the management of
individual and systemic access control systems. Information can be associated with
both logical and physical access control systems. Whether it is a logical or physical
access system, the control of that system is maintained somewhere as discrete data
and/or information. The management of information related to physical and logical
access is accomplished in three primary ways, namely:

• centralized,
• decentralized,
• and hybrid.

Centralized–Centralized administration means that one element is responsible for
configuring access controls so that users can access data and perform the activities
they need to. As users’ information processing needs change, their access can be
modified only through central administration, usually after requests have been
approved through an established procedure and by the appropriate authority. The
main advantage of centralized administration is that very strict control over
information can be maintained because the ability to make changes resides with
very few persons. Each user’s account can be centrally monitored, and closing all
access for any user can be easily accomplished if that individual leaves the
organization. Consistent and uniform procedures and criteria are usually not
difficult to enforce, since relatively few individuals oversee the process.

Decentralized–In contrast to centralized administration, decentralized
administration means that access to information is controlled by the owners or
creators of the files, whoever or wherever those individuals may be. An advantage
of decentralized administration is that control is in the hands of the individuals most
accountable for the information, most familiar with it, and best able to judge who
should be able to do what in relation to it. One disadvantage, however, is that there
may not be consistency among creators/owners as to procedures and criteria for
granting user access and capabilities. Another disadvantage is that
when requests are not processed centrally, it may be more difficult to form a system-
wide view of all user access on the system at any given time. Different data owners
may inadvertently implement combinations of access that introduce conflicts of
interest or that are in some way not in the organization’s best interest. It may also be
difficult to ensure that access is properly terminated when an employee transfers
within, or leaves an organization.

Hybrid–In a hybrid approach, centralized control is exercised for some information
and decentralized control is allowed for other information. One typical arrangement is that
central administration is responsible for the broadest and most basic access, and the
creators/owners of files control the types of access or users’ abilities for the files
under their control. For example, when a new employee is hired into a department, a
central administrator might provide the employee with a set of access rights based
perhaps on the functional element they are assigned to, job classification, and the specific
task the employee was hired to work on. The employee might have read-only access
to an organization-wide SharePoint document library and to project status report
files, but read and write privileges to his department’s weekly activities report. Also,
if the employee left a project, the project manager can easily close that employee’s
access to that file.

Systems
Access controls can be classified as either logical or physical systems. The simplest
example of a physical access control system is a door that can be locked, limiting
people to one side of the door or the other. A logical access control system is
typically found in an office network, where users granted a clearance are allowed,
or not allowed, to log in to a system and access data labeled with a classification.

Access Controls and Administration


ISO/IEC 27000:2016(E) defines access control as a “means to ensure that access to
assets is authorized and restricted based on business and security requirements.”
These requirements will be formalized in the organizational policy that is pertinent
to individual organizations. Two primary system types that form access controls are
physical and logical. Each type requires administration that can involve, to varying
degrees, senior management making risk-based decisions concerning the
organizational risk appetite and profile; the data owner determining “need-to-know,”
“least privilege,” and asset value; and the custodian implementing tools that provide
appropriate restriction of the assets against disclosure, destruction, or alteration.

The Federal Identity, Credential, and Access Management (FICAM) defines logical
access control as: “An automated system that controls an individual’s ability to
access one or more computer system resources such as a workstation, network,
application, or database. A logical access control system requires validation of an
individual’s identity through some mechanism such as a Personal Identification
Number (PIN), card, biometric, or other token. It has the capability to assign
different access privileges to different persons depending on their roles and
responsibilities in an organization.”

Logical access control requires more complex and nuanced administration than
physical. Before selection and implementation of the logical access control type, the
data owner has classified and categorized the data. Categorizing the data will reveal
the impact that would occur if there is disclosure, alteration, or destruction.
Classifying the data will define the value of discrete assets and who should have
access and authorization. Logical access controls are often built into the operating
system, or may be part of the “logic” of applications programs or major utilities,
such as database management systems (DBMS). They may also be implemented in
add-on security packages that are installed into an operating system; such packages
are available for a variety of systems, including PCs and mainframes. Additionally,
logical access controls may be present in specialized components that regulate
communications between computers and networks.

NIST Special Publication 800-53r4 defines physical access control as “An automated
system that manages the passage of people or assets through an opening(s) in a
secure perimeter(s) based on a set of authorization rules.”

Devices
There are a range of devices (systems or components if logical) associated with
logical and physical access control. Logical and physical access control devices
include but are not limited to access tokens (hardware and software), keys, and
cards.

Access Control Tokens
Access control tokens are available in many different technologies and in many
different shapes. The information that is stored on the token is presented to a
reader that reads the information and sends it to the system for processing. The
token may have to be swiped, inserted, or placed on or near a reader. When the
reader sends information to the system, it verifies that the token belongs to the
system and identifies the token itself. Then, the system decides if access is to be
granted or denied based upon the validity of the token for the point where it is read
based on time, date, day, holiday, or other condition used for controlling validation.
When biometric readers are used, the token or key is the user’s retina, fingerprint,
hand geometry, voice, or whatever biological attribute is enrolled into the system.
Most biometric readers also require a PIN to index the stored data on the sample
readings of the biological attribute. Biometric systems can also be used to
determine whether a person is already in a database, such as for social service or
national ID applications.

During the development of the enterprise security architecture, the security architect
will map business requirements to technology-agnostic views or statements that
enforce the security policy and support business goals throughout the organization.
These architectural views or statements are what provide guidance for
implementation of cohesive technology solutions that come from specific design
elements that are informed by the architecture. Within the lifecycle of identity and
access provisioning, it is imperative that user access reviews are conducted on an
on-going basis once an account has been created and provisioned. The review will
be based upon the business requirements that are expressed within the enterprise
security architecture. Scheduled and regular user access reviews could reveal
vulnerabilities that might require the need for revocation, disablement, or deletion
of an account.

The following occurrences are causes for revocation, disablement, or deletion of user
access (a minimal screening sketch in Python follows this list):
• If a user is voluntarily or involuntarily terminated from an organization.
• If an account has been inactive for a period that surpasses the organizational
policy.
• If the user account is no longer appropriate for the job description or role.
• If user account privileges have experienced unnecessary access aggregation.
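
The following Python sketch shows how a scheduled user access review might flag
accounts against the triggers above. The field names and the 90-day inactivity
threshold are illustrative assumptions, not a standard.

# Minimal sketch of a scheduled user-access review that flags accounts for
# revocation or disablement based on the triggers listed above.
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)     # assumed organizational policy value

accounts = [
    {"user": "a.khan", "terminated": False, "last_login": datetime(2018, 1, 5),
     "role_matches_access": True,  "excess_privileges": False},
    {"user": "b.ali",  "terminated": True,  "last_login": datetime(2018, 11, 1),
     "role_matches_access": True,  "excess_privileges": False},
]

def review(account, now):
    reasons = []
    if account["terminated"]:
        reasons.append("terminated (voluntary or involuntary)")
    if now - account["last_login"] > INACTIVITY_LIMIT:
        reasons.append("inactive beyond policy limit")
    if not account["role_matches_access"]:
        reasons.append("access no longer matches job description or role")
    if account["excess_privileges"]:
        reasons.append("unnecessary access aggregation")
    return reasons

now = datetime(2018, 12, 1)
for acct in accounts:
    findings = review(acct, now)
    if findings:
        print(acct["user"], "->", "; ".join(findings))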

System accounts such as “administrator,” “sudo,” or “root” accounts present an
often-exploited vulnerability for attackers. Breaking the obvious link between the
user ID name and its function could represent the first layer of defense against
attackers. Disconnecting the account name from the function is as simple as
renaming the account to something that looks more like a traditional user name or
randomly generated name. In addition to identifying an account by the name, an
attacker could also identify the account by other attributes such as system assigned
static numeric ID. Therefore, “security by obscurity,” or merely renaming the system
account, is insufficient due diligence to protect these accounts from anything more
than trivial exploitation efforts.

Provisioning and deprovisioning of access and identities involve a list of activities that
are driven by business needs and requirements, job function and role, asset
classification and categorization, and dynamic legal and regulatory issues. Users
needing access to system resources go through a process of provisioning that rightly
begins with the data/information owner expressing a business need for the stated
access.

Vulnerabilities that are readily ascribed to technology often have their introduction
by means of a lack of due care and due diligence related to administrative controls.
Identity and access management (IAM) forms a lifecycle that begins with
provisioning or enrollment, access and consumption of resources, and finally
deprovisioning or revocation of access.

The Federal Identity, Credential, and Access Management (FICAM) Roadmap and
Implementation Guidance, section 4.7.1 (As-is Analysis), provides for three phases that
manage the provisioning and deprovisioning process:

• Provision a user account and apply user permissions
• Modify user permissions
• Deprovision user account and end user permissions

Identification
The objective of identification is to bind a user to the appropriate controls based on
the unique user instance. For example, once the unique user is identified and
validated through authentication, his or her identity within the infrastructure is
used to allocate resources based on predefined privileges.

An identity represents the initial attribute in a linear succession of attributes to
protect access and use of a system. Providing an identity to access a system is
simply an assertion or claim of an entity. An assertion or claim made by an entity
should be followed by rigorous proof that the entity’s claim is legitimate. The
attributes that follow an identity to prove out a legitimate claim are authentication,
authorization, and usually some form of accountability. The downstream effect of
proper identification includes accountability with a protected audit trail and the
ability to trace activities to individuals. It also includes the provisioning of rights and
privileges, system profiles, and availability of system information, applications, and
services.

Authentication within a system involves presenting evidence that an identified
entity should be allowed access through a control point. Standard evidence for
being allowed to log into a system includes three primary factors:

• Something you know, such as a password or PIN


• Something you have, such as a token or smart card
• Something you are or do, such as biometrics or a fingerprint

Single-factor authentication involves a user or entity providing one type of
evidence to support an assertion or claim for access to a system. The factor could
be related to something the entity knows, something the entity has, something the
entity is, or somewhere the entity is. One factor or type of evidence can have
multiple methodologies. As an example, if an entity provided a password and a PIN
that would be two methodologies of the same factor (something you know); thus,
these two elements would be considered a single factor.

Multi-factor authentication involves an entity providing more than one factor of
proof of their identity. An example of this would be an entity providing both a
password and an iris scan to authenticate to a source. Each factor of authentication
may represent an additional hurdle that needs to be overcome by the unauthorized.
As the factors of authentication grow, then so grows the layers of defense or of
defense in depth. Multifactor
systems may increase the complexity of systems management
or decrease or otherwise impact the productivity of the user
attempting to gain access to the system. Burgeoning authentication methodologies
include location and node. Location authentication makes use of geo-location data
that can allow or disallow authentication from or to specific global locations. Service
providers such as Netflix and Amazon use location authentication to protect against
intellectual property content leakage or theft. Node authentication allows for device-
type recognition to be used as a means of authentication. Examples of node
authentication could include a specific smartphone, laptop, desktop, etc.
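
As an illustration of a “something you have” factor, the Python sketch below
generates a time-based one-time password in the spirit of RFC 6238. The shared
secret is a placeholder, and production systems also handle secret provisioning,
clock drift, and rate limiting.

# Minimal sketch of a time-based one-time password used as a second factor
# alongside a password. The shared secret is a placeholder value.
import hmac, hashlib, struct, time

def totp(secret: bytes, now: float, step: int = 30, digits: int = 6) -> str:
    counter = int(now // step)                      # 30-second time window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"placeholder-shared-secret"
print("Current one-time code:", totp(secret, time.time()))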

Biometric devices rely on measurements of biological characteristics of an individual, such
as a fingerprint, hand geometry, voice, or iris patterns. Biometric technology involves data
that is unique to the individual and is difficult to counterfeit. Selected individual
characteristics are stored in a device’s memory, or on a card, which stores reference data
that can be analyzed and compared with the presented template. A one-to-many or a one-
to-one comparison of the presented template with the stored template can be made and
access granted if a match is found. However, on the negative side, some biometric systems
may periodically fail to perform, or have a high rejection rate. The sensitivity of readers
makes them susceptible to inadvertent damage or intentional sabotage.
Some systems may be perceived by the user as a safety or health risk. Also, some of the
systems may require a degree of skill on the part of the user for proper operation. Other
systems may be perceived as unacceptable by management for a combination of reasons.

Types of Failure in Biometric Identification
There are two types of failures in biometric identification:

False Rejection Rate (Type I): This is a failure to recognize a legitimate user. While it could
be argued that this effectively keeps the protected area extra secure, it is an intolerable
frustration to legitimate users who are refused access because the scanner does not
recognize them.

False Acceptance Rate (Type II): This is erroneous recognition, either by confusing one user
with another, or by accepting an imposter as a legitimate user. Failure rates can be adjusted
by changing the criteria for declaring an acceptance or rejection; but decreasing one failure
rate increases the other. Crossover Error Rate (CER) is achieved when the type I and type II
are equal.
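
The following Python sketch illustrates the trade-off with made-up match scores:
sweeping the decision threshold moves the FRR and FAR in opposite directions, and
the CER is found near the threshold where the two rates are equal.

# Minimal sketch: estimate False Rejection Rate (Type I) and False Acceptance
# Rate (Type II) across thresholds and locate the approximate crossover (CER).
genuine_scores  = [0.91, 0.85, 0.78, 0.88, 0.70, 0.95]   # legitimate users
impostor_scores = [0.40, 0.55, 0.62, 0.30, 0.74, 0.50]   # impostors

def rates(threshold):
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Pick the threshold where |FRR - FAR| is smallest.
best = min((abs(rates(t / 100)[0] - rates(t / 100)[1]), t / 100)
           for t in range(0, 101))
threshold = best[1]
frr, far = rates(threshold)
print(f"Approximate CER near threshold {threshold:.2f}: FRR={frr:.2f}, FAR={far:.2f}")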

Fingerprint: Fingerprint reader technology scans the loops, whorls, and other
characteristics of a fingerprint and compares it with stored templates. When a match is
found, access is granted. The advantage of fingerprint technology is that it is easily
understood. The disadvantages are that the system can be disrupted if cuts or sores appear
on fingers, or if grease or other medium contaminates the fingers and the scanning plates.

Facial image: This technology measures the geometric properties of the subject’s face
relative to an archived image. Specifically, the center of the subject’s eyes must be located
and placed at precise locations.

Hand geometry: This technology assesses the hand’s geometry: height, width, and distance
between knuckle joints and finger length. Advantages of hand geometry are that the
systems are durable and easily understood. The speed of hand recognition tends to be
more rapid than fingerprint recognition. Hand recognition is reasonably accurate because
the shape of a hand is unique. A disadvantage is that hand recognition tends to give higher
false acceptance rates than fingerprint recognition.

Voice recognition: Voice recognition compares the voice characteristics of a given phrase to
one held in a template. Voice recognition is generally not performed as one function and is
typically part of a system where a valid PIN must be entered before the voice analyzer is
activated. Advantages of voice recognition are that the technology is less expensive than
other biometric technologies, and it has hands-free operation. A disadvantage is that the
voice analyzer must be placed in an area where the voice is not disturbed by background
sounds; often a booth or security portal must be installed to house the sensor to provide
the system with a quiet background.

Iris patterns: Iris recognition technology scans the surface of the eye and compares the iris
pattern with stored iris templates. An advantage of iris recognition is that it is not
susceptible to theft, loss, or compromise, and irises are less susceptible to wear and injury
than many other parts of the body. Newer iris scanners allow scanning to occur from up to
ten inches away. A disadvantage of iris scanning is that some people are timid about having
their eye scanned. Throughput time for this technology also should be considered; typical
throughput time is two seconds. If a number of people need to be processed through an
entrance in a short period of time, this can be problematic.

Retinal scanning: Retinal scanning analyzes the layer of blood vessels at the back of the eye,
which are unique to each person. Scanning involves using a low-intensity LED light source
and an optical coupler that can read the patterns with great accuracy. It does require the
user to remove glasses, place the eye close to the device, and focus on a certain point. The
user looks through a small opening in the device, and the head needs to be still and the eye
focused for several seconds, during which time the device verifies identity. This process
takes about ten seconds. The continuity of the retinal pattern throughout life and the
difficulty in fooling such a device also makes it a great long-term, high-security option.

Signature dynamics: First, the signer writes out a handwritten signature on a special
electronic pad, such as the ePad by Interlink or a Palm Pilot. The shape of the signature is
then electronically read and recorded, along with unique features, such as the pressure on
the pen and the speed at which the signature was written, to identify the signer’s unique
writing; for example, did the “t” get crossed from right to left and did the “i” get dotted at
the very end. The advantage of signature dynamics is that it works like a traditional
signature. Signers do not need special knowledge of computers nor any unusual tools to
provide a signature. At the same time, the system allows the notary to record unique
identifying features to help prevent and detect forged signatures.

Vascular patterns: This is the ultimate palm reader; vascular patterns are best described as
a picture of the veins in a person’s hand or finger. The thickness and location of these veins
are believed to be unique enough to an individual to verify a person’s identity. The National
Television Standards Committee (NTSC) Subcommittee on Biometrics reports that
researchers determined that the vascular pattern of the human body is unique to each
individual and does not change with age.

Keystroke dynamics: Keystroke dynamics are also known as keyboard dynamics, which
identify the way a person types at a keyboard; specifically, the keystroke rhythms of a user
are measured to develop a unique template of the user’s typing pattern for future
authentication. Raw measurements available from most keyboards can be recorded to
determine dwell time, or the amount of time a particular key is held, and flight time, or the
amount of time between the next key down and the next key up.
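
The Python sketch below shows how dwell and flight times might be computed from raw
key-down/key-up timestamps; the event data and the millisecond units are illustrative
assumptions, and one common definition of flight time (gap between releasing one key
and pressing the next) is used.

# Minimal sketch computing dwell time (key held down) and flight time (gap
# between one key-up and the next key-down) from illustrative timestamps (ms).
events = [                       # (key, down_ms, up_ms) for typing "pass"
    ("p", 0,   95),
    ("a", 140, 230),
    ("s", 300, 370),
    ("s", 430, 515),
]

dwell_times  = [up - down for (_, down, up) in events]
flight_times = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

print("Dwell times (ms): ", dwell_times)    # how long each key was held
print("Flight times (ms):", flight_times)   # gaps between successive keys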

Authorization defines what resources users may have access to.

Session management is related to when a user is authenticated, authorized, and held
accountable for using system resources. The system must maintain an uninterrupted path
of protection of resources by means of session management. The Open Web Application
Security Project (OWASP) Top 10 lists broken authentication and session management as
its number 2 threat. RFC 2965 provides an example of how to maintain session management
with cookies. When a user accesses a website, the user’s actions and identity are tracked
across various requests from that website. The state of these interactions is maintained in a
session cookie. Evidence of this state is maintained by linking all new connections across
the entirety of a session to the cookie. Cookie handling supports non-repudiation by
effectively providing an audit trail of session activity.
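
The following Python sketch illustrates the session-cookie concept only (it is not a
full web framework): the server issues a session ID, binds the user’s state to it, and
signs the cookie value so tampering can be detected on subsequent requests. Real
applications also set cookie attributes such as Secure and HttpOnly, apply
expirations, and protect the signing key.

# Minimal sketch of server-side session tracking with a signed cookie value.
import hmac, hashlib, secrets

SERVER_KEY = b"server-side-signing-key"      # placeholder value
sessions = {}                                # session_id -> user state

def sign(value: str) -> str:
    mac = hmac.new(SERVER_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{mac}"

def issue_session(username: str) -> str:
    session_id = secrets.token_hex(16)
    sessions[session_id] = {"user": username, "requests": 0}
    return sign(session_id)                  # value for the Set-Cookie header

def load_session(cookie: str):
    value, _, mac = cookie.rpartition(".")
    if not hmac.compare_digest(sign(value), f"{value}.{mac}"):
        return None                          # tampered or forged cookie
    return sessions.get(value)

cookie = issue_session("alice")
state = load_session(cookie)
state["requests"] += 1
print(state)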

Registration and proofing of an identity are processes that connect an entity or user
identity to an access control system that creates a confirmed relationship of trust that an
entity is who he or she claims to be. The process of proving that a person is authentically
the person that is being claimed can be challenging and even serve as an opening for
impersonation. If a user is valid, there is also the threat that the user can be a malicious or
bad actor. In a well-known New Yorker cartoon, Peter Steiner put it succinctly: “On the
Internet, nobody knows you’re a dog.”

Herein lies the crux of the concern: balancing the needs of controlling access to valued
assets and the simplicity of registering and proofing the credentials of the potential user of
a system.

The Digital Identity Guidelines of NIST SP 800-63-3 contain recommendations to support,
among other items, requirements for identity proofing and registration. These
requirements are the following:

Identity Assurance Level (IAL) refers to the identity proofing process; it is a category that
conveys the degree of confidence that the applicant’s claimed identity is their real identity.

Identity Assurance Levels

IAL1: At IAL1, attributes, if any, are self-asserted or should be treated as self-asserted.

IAL2: At IAL2, either remote or in-person identity proofing is required.

IAL2 requires identifying attributes to have been verified in person or remotely, using, at a
minimum, the procedures given in SP 800-63A.

IAL3: At IAL3, in-person identity proofing is required. Identifying attributes must be verified
by an authorized Credential Service Provider (CSP) representative through examination of
physical documentation as described in SP 800-63A.

• Authenticator Assurance Level (AAL) refers to the authentication process.

• Federation Assurance Level (FAL) refers to the strength of an assertion in a federated
environment, used to communicate authentication and attribute information (if
applicable) to a relying party (RP).

NIST SP 800-63-3 describes a credential as a binding between an authenticator and
a subscriber by means of an identifier. The credential may be collected and
managed by the CSP, although it is possessed by the claimant. Credential examples
include but are not limited to smart cards, private/public cryptographic keys, and
digital certificates. The FICAM Roadmap and Implementation Guidance Version 2.0
within the U.S. federal government has the following five-step enrollment process:

1. Sponsorship: An authorized entity sponsors a claimant for a credential with a CSP.


2. Enrollment: The sponsored claimant enrolls for the credentials from a CSP. This
step would include identity proofing, which might include capture of biographic and
biometric data.
3. Credential Production: Credentials are produced in the form of smart cards,
private/public cryptographic keys, and digital certificates.
4. Issuance: The claimant is issued the credential.
5. Credential Lifecycle Management: Credentials are maintained through activities
that include revocation, reissuance, re-enrollment, expiration, suspension, and
reinstatement.

When disparate organizations have a need to share common information, federated
identity management (FIM) solutions are sought. Think of businesses that use social
media platforms such as Linkedin and Twitter but have different business models
and corporate goals and missions.

Twitter:
“Twitter is what’s happening in the world and what people are talking about right
now.”

Linkedin:
“Creating a digital map of the global economy to connect talent with opportunity at
massive scale.” Although Linkedin and Twitter are markedly different in their
mission statements, they share a common customer base. The common customers
between Linkedin and Twitter may at times want the information that is resident on
one service provider platform to appear automatically and synchronously on
another service provider platform.

Security Assertion Markup Language (SAML) and Open Authorization (OAuth)

SAML and OAuth 2.0 are two protocols that support the access and authorization
that is required to link disparate organizations.

SAML defines an XML-based framework for describing and exchanging security
information between online business partners. This security information is
maintained in SAML assertions that work between trusted security domain
boundaries.

The SAML standard follows a prescribed set of rules for requesting, creating,
communicating, and using SAML assertions. SAML has three roles and four primary
components.
SAML roles:
1. Identity provider (IdP)
2. Service provider / relying party
3. User/principal

SAML components:
1. Assertions-define the authentication, attribute, and authorization decision
statements that are made about a subject.
2. Protocols-define the request/response pairs used to request and exchange SAML
assertions between systems.
3. Bindings-define how SAML protocol messages are carried over common underlying
communication protocols and frameworks, such as SOAP and HTTP.
4. Profiles-define specific sets of rules for a use case, combining assertions, bindings,
and protocols for a SAML session.
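
As a minimal illustration, the Python sketch below parses a deliberately simplified,
hypothetical SAML-style assertion to extract the issuer and subject. Real assertions
are digitally signed and carry conditions, audience restrictions, and timestamps that
a relying party must validate.

# Minimal sketch: parse a simplified, hypothetical SAML-style assertion.
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"
assertion_xml = f"""
<Assertion xmlns="{SAML_NS}" ID="_abc123" Version="2.0">
  <Issuer>https://idp.example.org</Issuer>
  <Subject><NameID>alice@example.org</NameID></Subject>
</Assertion>
"""

root = ET.fromstring(assertion_xml)
issuer = root.find(f"{{{SAML_NS}}}Issuer").text
name_id = root.find(f"{{{SAML_NS}}}Subject/{{{SAML_NS}}}NameID").text
print("Identity provider:", issuer)
print("Authenticated subject:", name_id)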

Internet Engineering Task Force (IETF) RFC 6749 states: The Open Authorization
(OAuth) 2.0 authorization framework enables a third-party application to obtain
limited access to an HTTP service, either on behalf of a resource owner by
orchestrating an approval interaction between the resource owner and the HTTP
service, or by allowing the third-party application to obtain access on its own
behalf.

OAuth standard has four roles:


1. Resource owner: An entity capable of granting access to a protected resource.
When the resource owner is a person, the entity is referred to as an end-user.
2. Resource server: The server hosting the protected resources, capable of
accepting and responding to protected resource requests using access tokens.
3. Client application: An application making protected resource requests on behalf
of the resource owner and with its authorization. The term “client” does not imply
any implementation characteristics (e.g., whether the application executes on a
server, a desktop, or other devices).
4. Authorization server: The server issuing access tokens to the client after
successfully authenticating the resource owner and obtaining authorization.
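
The Python sketch below illustrates the first step of the OAuth 2.0 authorization code
grant described in RFC 6749: the client application sends the resource owner’s browser
to the authorization server with these query parameters. The endpoint, client ID, and
redirect URI are hypothetical values.

# Minimal sketch: build the authorization request URL for the OAuth 2.0
# authorization code grant (RFC 6749). All endpoint/client values are made up.
from urllib.parse import urlencode
import secrets

AUTHZ_ENDPOINT = "https://authz.example.org/authorize"   # hypothetical

params = {
    "response_type": "code",                    # authorization code grant
    "client_id": "example-client-id",           # issued at client registration
    "redirect_uri": "https://app.example.org/callback",
    "scope": "profile email",                   # access being requested
    "state": secrets.token_urlsafe(16),         # CSRF protection, echoed back
}

print(f"{AUTHZ_ENDPOINT}?{urlencode(params)}")
# The authorization server authenticates the resource owner, obtains consent,
# and redirects back with ?code=...; the client then exchanges the code for an
# access token at the token endpoint.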

Review the following links:
https://developers.onelogin.com/saml

https://oauth.net/2/

https://openid.net/

Gartner defines identity as a service (IDaaS) as, “a predominantly cloud-based
service in a multi-tenant or dedicated and hosted delivery model that brokers core
identity governance and administration (IGA), access and intelligence functions to
target systems on customers’ premises and in the cloud.”

Gartner states that the core aspects of IDaaS are:


• IGA: Provisioning of users to cloud applications and password reset functionality.
• Access: User authentication, single sign-on (SSO), and authorization, supporting
federation standards such as SAML.
• Intelligence: Identity access log monitoring and reporting.

The modern convergence of various business needs (that include ubiquitous access
to services, reduced effort with sign-on, and greater support with federated
standards) has driven adoption of IDaaS. These are some of the top performers in
the IDaaS space that are part of Gartner’s Magic Quadrant:

• Centrify
• Okta
• Microsoft Active Directory Federation Services (AD FS)

On-premise organizations can use existing infrastructure that manages identities
through LDAP services like Windows Active Directory to connect and log in to a
service provider, extending their internal identities to authenticate to and consume
services that are in the cloud. An example of extending internal services related to
ID management to integrate with cloud services would be an enterprise Windows
Active Directory connecting to Windows Azure (public cloud) AD to consume
services related to Office 365. Office 365 represents a service that the enterprise is
seeking to consume as software as a service (SaaS) that would be facilitated
through linking an enterprise directory to a provider directory. While the service is
provided externally, the passwords and IDs would be managed internal, thus on-
premise.

If the previous scenario is managed by creating and storing the identities within an
instance of Office 365 and Windows Active Directory in Windows Azure, then the
third-party service is completely managed in the cloud.

Types of Access Control
NIST SP 800-192 specifies access control models as “formal presentations of the
security policies enforced by AC systems, and are useful for proving theoretical
limitations of systems. AC models bridge the gap in abstraction between policy and
mechanism.” The access control types addressed in this module are discretionary
access control (DAC), mandatory access control (MAC), nondiscretionary access
control (NDAC), role-based access control (RBAC), rule-based access control (RBAC),
and attribute based access control (ABAC).

Discretionary Access Control (DAC)
DAC leaves a certain amount of access control to the discretion of the object’s
owner or anyone else who is authorized to control the object’s access. The owner
can determine who should have access rights to an object and what those rights
should be. DAC allows for the greatest flexibility in controls along with the greatest
vulnerabilities. The object’s owner can pass on control weaknesses that can
contribute to access and privilege aggregation.

Mandatory Access Control (MAC)
MAC means that access control policy decisions are made by a central authority and
not by the individual owner of an object. Users cannot change access rights. An
example of MAC occurs in military security, where an individual data owner does
not decide who has a top-secret clearance, nor can the owner change the
classification of an object from top-secret to secret.

Nondiscretionary Access Control (NDAC)
In general, all AC policies other than DAC are grouped under the category of
nondiscretionary AC (NDAC). As the name implies, policies in this category have
rules that are not established at the discretion of the user. Nondiscretionary policies
establish controls that cannot be changed by users but only through administrative
action.

Role-Based Access Control (RBAC)
RBAC is an access control policy that restricts information system access to
authorized users. Organizations can create specific roles based on job functions and
the authorizations (i.e., privileges) to perform needed operations on organizational
information systems associated with the organization-defined roles. Access can be
granted by the owner as with DAC and applied with the policy according to MAC.

Rule-Based Access Control (RBAC)
This is based upon a pre-defined list of rules that can determine access with
additional granularity controls such as when, where, and if the system will allow
read, write, or execute based upon special conditions. RBACs are managed by the
system owner and represent an implementation of DAC.

Attribute-Based Access Control (ABAC)
ABAC is an access control paradigm whereby access rights are granted to users with
policies that combine attributes together. The policies can use any type of attributes
(user attributes, resource attributes, environment attributes, etc.).
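
The following Python sketch contrasts a role-based check, where permission follows
from an assigned role, with an attribute-based check, where a policy is evaluated over
user, resource, and environment attributes. The roles, attributes, and policy shown
are illustrative assumptions.

# Minimal sketch contrasting an RBAC check with an ABAC policy evaluation.
ROLE_PERMISSIONS = {
    "analyst": {"report:read"},
    "manager": {"report:read", "report:approve"},
}

def rbac_allowed(role: str, permission: str) -> bool:
    # RBAC: the permission is derived entirely from the assigned role.
    return permission in ROLE_PERMISSIONS.get(role, set())

def abac_allowed(user: dict, resource: dict, env: dict) -> bool:
    # ABAC example policy: clearance must dominate classification, same
    # department, and access only during business hours.
    return (user["clearance"] >= resource["classification"]
            and user["department"] == resource["department"]
            and 8 <= env["hour"] <= 18)

print(rbac_allowed("analyst", "report:approve"))   # False
print(abac_allowed({"clearance": 3, "department": "finance"},
                   {"classification": 2, "department": "finance"},
                   {"hour": 10}))                   # True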

Accountability
Ultimately one of the drivers behind strong identification, authentication, auditing,
and session management is accountability. Fundamentally, accountability is being
able to determine who or what is responsible for an action and to hold that party
responsible. Accountability ensures that account management has assurance that
only authorized users are accessing the system and that they are using the system
properly.

A closely related information assurance topic is non-repudiation. Repudiation is the
ability to deny an action, event, impact, or result. Non-repudiation is the process of
ensuring a user may not deny an action. Accountability relies heavily on non-
repudiation to ensure users, processes, and actions may be held responsible. A
primary activity in establishing accountability is to log relevant accesses and events
within a system and to have a process that includes log review analysis.
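
As a minimal illustration of that last point, the Python sketch below appends events
to an audit trail in which each entry is chained to the hash of the previous one, so
tampering or deletion can be detected during log review. Key management, time
sources, and durable storage are out of scope here.

# Minimal sketch of a hash-chained audit trail supporting accountability.
import hashlib, json
from datetime import datetime

audit_log = []

def entry_hash(entry):
    body = {k: v for k, v in entry.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_event(user, action, resource):
    entry = {
        "timestamp": datetime.utcnow().isoformat(),
        "user": user, "action": action, "resource": resource,
        "prev_hash": audit_log[-1]["hash"] if audit_log else "0" * 64,
    }
    entry["hash"] = entry_hash(entry)
    audit_log.append(entry)

def log_is_intact():
    prev = "0" * 64
    for entry in audit_log:
        if entry["prev_hash"] != prev or entry["hash"] != entry_hash(entry):
            return False
        prev = entry["hash"]
    return True

append_event("alice", "read", "payroll/2018-11.xlsx")
append_event("bob", "delete", "payroll/2018-11.xlsx")
print(log_is_intact())      # True until an entry is altered or removed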

Security testing and assessment are activities that assist an organization in
managing risk, developing applications, managing systems, and utilizing services. To
be successful in mitigating risks, organizations must develop competencies that
align with business needs related to assessing, validating, testing, and auditing
systems and applications that support business objectives and goals.

Please download and study this special report:
https://resources.sei.cmu.edu/asset_files/SpecialReport/2012_003_001_28137.pdf

Please also download and read this document:
https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-115.pdf

12.6 Technical vulnerability management
Objective: To prevent exploitation of technical vulnerabilities.

12.6.1 Management of technical vulnerabilities


Control
Information about technical vulnerabilities of information systems being used
should be obtained in a timely fashion, the organization’s exposure to such
vulnerabilities evaluated and appropriate measures taken to address the associated
risk.

Implementation guidance
A current and complete inventory of assets (see Clause 8) is a prerequisite for
effective technical vulnerability management. Specific information needed to
support technical vulnerability management includes the software vendor, version
numbers, current state of deployment (e.g. what software is installed on what
systems) and the person(s) within the organization responsible for the software.

Appropriate and timely action should be taken in response to the identification of
potential technical vulnerabilities. The following guidance should be followed to
establish an effective management process for technical vulnerabilities:
a) the organization should define and establish the roles and responsibilities
associated with technical vulnerability management, including vulnerability
monitoring, vulnerability risk assessment,
patching, asset tracking and any coordination responsibilities required;
b) information resources that will be used to identify relevant technical vulnerabilities
and to maintain awareness about them should be identified for software and other
technology (based on the asset inventory list, see 8.1.1); these information resources
should be updated based on changes in the inventory or when other new or useful
resources are found;
c) a timeline should be defined to react to notifications of potentially relevant
technical vulnerabilities;
d) once a potential technical vulnerability has been identified, the organization
should identify the associated risks and the actions to be taken; such action could
involve patching of vulnerable
systems or applying other controls;
e) depending on how urgently a technical vulnerability needs to be addressed, the
action taken should be carried out according to the controls related to change
management (see 12.1.2) or by following information security incident response
procedures (see 16.1.5);
f) if a patch is available from a legitimate source, the risks associated with installing
the patch should be assessed (the risks posed by the vulnerability should be
compared with the risk of installing the patch);
g) patches should be tested and evaluated before they are installed to ensure they
are effective and do not result in side effects that cannot be tolerated; if no patch is
available, other controls should be considered, such as:
1) turning off services or capabilities related to the vulnerability;
2) adapting or adding access controls, e.g. firewalls, at network borders (see 13.1);
3) increased monitoring to detect actual attacks;
4) raising awareness of the vulnerability;
h) an audit log should be kept for all procedures undertaken;
i) the technical vulnerability management process should be regularly monitored and
evaluated in order to ensure its effectiveness and efficiency;
j) systems at high risk should be addressed first;
k) an effective technical vulnerability management process should be aligned with
incident management activities, to communicate data on vulnerabilities to the
incident response function
and provide technical procedures to be carried out should an incident occur;
l) define a procedure to address the situation where a vulnerability has been
identified but there is no suitable countermeasure. In this situation, the organization
should evaluate risks relating to the known vulnerability and define appropriate
detective and corrective actions.

Other information
Technical vulnerability management can be viewed as a sub-function of change
management and as such can take advantage of the change management processes
and procedures (see 12.1.2 and 14.2.2). Vendors are often under significant pressure
to release patches as soon as possible. Therefore, there is a possibility that a patch
does not address the problem adequately and has negative side effects. Also, in some
cases, uninstalling a patch cannot be easily achieved once the patch has been
applied.
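
As a minimal illustration of item j) above (“systems at high risk should be addressed
first”), the Python sketch below ranks identified vulnerabilities by a simple score
combining severity with the criticality of the affected asset from the inventory. The
data and the weighting are illustrative assumptions only.

# Minimal sketch: rank technical vulnerabilities so high-risk systems are
# addressed first. Scores, assets, and the weighting are illustrative only.
vulnerabilities = [
    {"id": "CVE-2018-0001", "asset": "internet-facing web server",
     "cvss": 9.8, "asset_criticality": 5, "patch_available": True},
    {"id": "CVE-2018-0002", "asset": "internal test system",
     "cvss": 6.5, "asset_criticality": 2, "patch_available": False},
    {"id": "CVE-2018-0003", "asset": "finance database",
     "cvss": 7.4, "asset_criticality": 5, "patch_available": True},
]

def risk_score(v):
    return v["cvss"] * v["asset_criticality"]

for v in sorted(vulnerabilities, key=risk_score, reverse=True):
    action = ("test and deploy patch" if v["patch_available"]
              else "apply compensating controls (see items g.1-g.4)")
    print(f"{v['id']:<14} {v['asset']:<27} score={risk_score(v):5.1f} -> {action}")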

12
13
14
15
16
1
2
3
4
Security testing and assessment are activities that assist an organization in
managing risk, developing applications, managing systems, and utilizing services. To
be successful in mitigating risks,
organizations must develop competencies that align with business needs related to
assessing, validating, testing, and auditing systems and applications that support
business objectives and goals.

5
6
7
8
9
Please download and study this study
https://resources.sei.cmu.edu/asset_files/SpecialReport/2012_003_001_28137.pdf

10
https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-115.pdf please
download and read this document further.

11
12.6 Technical vulnerability management
Objective: To prevent exploitation of technical vulnerabilities.

12.6.1 Management of technical vulnerabilities


Control
Information about technical vulnerabilities of information systems being used
should be obtained in a timely fashion, the organization’s exposure to such
vulnerabilities evaluated and appropriate measures taken to address the associated
risk.

Implementation guidance
A current and complete inventory of assets (see Clause 8) is a prerequisite for
effective technical vulnerability management. Specific information needed to
support technical vulnerability management includes the software vendor, version
numbers, current state of deployment (e.g. what software is installed on what
systems) and the person(s) within the organization responsible for the software.

Appropriate and timely action should be taken in response to the identification of

12
potential technical vulnerabilities. The following guidance should be followed to
establish an effective management process for technical vulnerabilities:
a) the organization should define and establish the roles and responsibilities
associated with technical vulnerability management, including vulnerability
monitoring, vulnerability risk assessment,
patching, asset tracking and any coordination responsibilities required;
b) information resources that will be used to identify relevant technical vulnerabilities
and to maintain awareness about them should be identified for software and other
technology (based on the asset inventory list, see 8.1.1); these information resources
should be updated based on changes in the inventory or when other new or useful
resources are found;
c) a timeline should be defined to react to notifications of potentially relevant
technical vulnerabilities;
d) once a potential technical vulnerability has been identified, the organization
should identify the associated risks and the actions to be taken; such action could
involve patching of vulnerable
systems or applying other controls;
e) depending on how urgently a technical vulnerability needs to be addressed, the
action taken should be carried out according to the controls related to change
management (see 12.1.2) or by following information security incident response
procedures (see 16.1.5);
f) if a patch is available from a legitimate source, the risks associated with installing
the patch should be assessed (the risks posed by the vulnerability should be
compared with the risk of installing the patch);
g) patches should be tested and evaluated before they are installed to ensure they
are effective and do not result in side effects that cannot be tolerated; if no patch is
available, other controls should be considered, such as:
1) turning off services or capabilities related to the vulnerability;
2) adapting or adding access controls, e.g. firewalls, at network borders (see 13.1);
3) increased monitoring to detect actual attacks;
4) raising awareness of the vulnerability;
h) an audit log should be kept for all procedures undertaken;
i) the technical vulnerability management process should be regularly monitored and
evaluated in order to ensure its effectiveness and efficiency;
j) systems at high risk should be addressed first;
k) an effective technical vulnerability management process should be aligned with
incident management activities, to communicate data on vulnerabilities to the
incident response function
and provide technical procedures to be carried out should an incident occur;
l) define a procedure to address the situation where a vulnerability has been
identified but there is no suitable countermeasure. In this situation, the organization
should evaluate risks relating to the known vulnerability and define appropriate
detective and corrective actions.
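As a concrete illustration of items b), d) and j) above, the following minimal Python sketch (not part of the standard; all names, data structures and advisory entries are hypothetical) matches a software asset inventory against vulnerability advisories and orders remediation so that the highest-risk systems are handled first:

from dataclasses import dataclass

@dataclass
class Asset:
    hostname: str
    product: str
    version: str
    owner: str          # person responsible for the software (item a)
    criticality: int    # 1 = low ... 5 = business critical

@dataclass
class Advisory:
    advisory_id: str
    product: str
    affected_versions: frozenset
    severity: int       # 1 = low ... 5 = critical
    patch_available: bool

def triage(assets, advisories):
    """Return (asset, advisory) pairs sorted so the highest-risk systems come first (item j)."""
    findings = [(a, adv) for a in assets for adv in advisories
                if adv.product == a.product and a.version in adv.affected_versions]
    findings.sort(key=lambda pair: pair[0].criticality * pair[1].severity, reverse=True)
    return findings

if __name__ == "__main__":
    assets = [
        Asset("pay-01", "PayApp", "2.1", "j.doe", criticality=5),
        Asset("dev-07", "PayApp", "2.1", "a.khan", criticality=1),
    ]
    advisories = [Advisory("ADV-2018-001", "PayApp", frozenset({"2.0", "2.1"}),
                           severity=4, patch_available=True)]
    for asset, adv in triage(assets, advisories):
        action = ("schedule patch via change management (12.1.2)" if adv.patch_available
                  else "apply compensating controls")
        print(f"{asset.hostname} ({asset.owner}): {adv.advisory_id} -> {action}")

In practice the advisory data would come from the vendor notifications and vulnerability information resources identified under item b), and any patching would follow the organization's change management process.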

Other information
Technical vulnerability management can be viewed as a sub-function of change
management and as such can take advantage of the change management processes
and procedures (see 12.1.2 and 14.2.2). Vendors are often under significant pressure
to release patches as soon as possible. Therefore, there is a possibility that a patch
does not address the problem adequately and has negative side effects. Also, in some
cases, uninstalling a patch cannot be easily achieved once the patch has been
applied.

White hat or Overt Testing
The term "white hat" in Internet slang refers to an ethical computer hacker, or
a computer security expert, who specializes in penetration testing and in other
testing methodologies that ensures the security of an organization's information
systems.[1] Ethical hacking is a term meant to imply a broader category than just
penetration testing.[2][3] Contrasted with black hat, a malicious hacker, the name
comes from Western films, where heroic and antagonistic cowboys might
traditionally wear a white and a black hat respectively.[4] While a white hat hacker
hacks under good intentions with permission, and a black hat hacker has malicious
intent, there is a third kind known as a grey hat hacker who hacks with good
intentions without permission.[Symantec Group 1]
White hat hackers may also work in teams called "sneakers",[5] red teams, or tiger
teams.[6]

Overt security testing and white hat testing are synonymous terms. Overt testing
can be used with both internal and external testing. When used from an internal
perspective the bad actor simulated is an employee of the organization. The
organization’s IT staff is made aware of the testing and can assist the assessor in
limiting the impact of the test by providing specific guidelines for the test scope and
parameters. Since overt testing is transparent to the IT staff, it can be an optimal way
to train the IT staff. Overt testing carries less risk, costs less, and is used more
often than covert testing.

Gray Hat

A grey hat (greyhat or gray hat) is a computer hacker or computer security expert
who may sometimes violate laws or typical ethical standards, but does not have the
malicious intent typical of a black hat hacker.

The term began to be used in the late 1990s, derived from the concepts of "white
hat" and "black hat" hackers.[1] When a white hat hacker discovers a vulnerability,
they will exploit it only with permission and not divulge its existence until it has been
fixed, whereas the black hat will illegally exploit it and/or tell others how to do so.
The grey hat will neither illegally exploit it, nor tell others how to do so.[2]

A further difference among these types of hacker lies in their methods of discovering
vulnerabilities. The white hat breaks into systems and networks at the request of
their employer or with explicit permission for the purpose of determining how secure
it is against hackers, whereas the black hat will break into any system or network in
order to uncover sensitive information and for personal gain. The grey hat generally
has the skills and intent of the white hat but will break into any system or network
without permission.[3][4]

According to one definition of a grey-hat hacker, when they discover a vulnerability,


instead of telling the vendor how the exploit works, they may offer to repair it for a
small fee. When one successfully gains illegal access to a system or network, they
may suggest to the system administrator that one of their friends be hired to fix the
problem; however, this practice has been declining due to the increasing willingness
of businesses to prosecute. Another definition of Grey hat maintains that Grey hat
hackers only arguably violate the law in an effort to research and improve security:
legality being set according to the particular ramifications of any hacks they
participate in.[5]

Black Hat

The term's origin is often attributed to hacker culture theorist Richard


Stallman (though he denies coining it)[1] to contrast the exploitative hacker with
the white hat hacker who hacks protectively by drawing attention to vulnerabilities in
computer systems that require repair.[2] The black hat/white hat terminology
originates in the Western genre of popular American culture, in which black and
white hats denote villainous and heroic cowboys respectively.[3]
Black hat hackers are the stereotypical illegal hacking groups often portrayed in
popular culture, and are "the epitome of all that the public fears in a computer
criminal".[4] Black hat hackers break into secure networks to destroy, modify, or steal
data, or to make the networks unusable for authorized network users.[5]

Covert security testing and black hat testing are synonymous terms. Covert testing is
performed to simulate the threats that are associated with external adversaries.
While the security staff has no knowledge of the covert test, the organization
management is fully aware and consents to the test. A third-party organization may
participate in the test as a mitigation point for the security staff’s reaction and a
communication focal point between the assessors, management, and the security
staff. Covert testing will illuminate security staff responsiveness. Typically, the most
basic and fundamental exploits are executed within predetermined boundaries and
scope to reduce the potential impact of system degradation or damage. Covert tests
are often carried out in a stealth fashion, “under the radar,” or “slow and low” to
simulate an adversary that is seeking to avoid detection. Covert testing provides a
comprehensive view of the behavior, posture, and responsiveness of the security
staff.

Log Overview
Purpose:
To detect suspicious activities as early as possible, so that the impact of an
incident can be reduced or, where possible, prevented, and to unify the log
format, the recorded elements and the review functions across systems.

Role:
Who typically does this?
A security administrator or an independent party who has no access rights or
accounts on the systems being reviewed. The reviewer should not be a user
administrator of those systems and should not be reviewing their own day-to-day
activity; if resource limitations make that unavoidable, another supervisor should
authorize the log review.

Frequency:
Depending on the criticality of the system as labelled by the organization (e.g.
payment systems, customer information, business secrets), logs may be reviewed
every few minutes, daily, weekly, monthly or even quarterly. Log review is a
detective control; where preventive controls are lacking, it acts as the goalkeeper,
so the review frequency is critical.
In addition, user account and authority lists should be reviewed at least every
three to six months, and never only when the audit cycle is approaching.

Log Review Tips


Critical systems require at least daily log review; what types of logs/activities
should we pay attention to? (A minimal review sketch follows this list.)
1. Consecutive login failures, especially outside office hours.
2. Logins outside office hours.
3. Authority changes, additions and removals - check them against the authorized
request forms.
4. Any system administrator activities.
5. Any unknown workstations or servers plugged into the network.
6. Log removal, log overwriting, or log storage reaching capacity.
7. Pay extra attention to log reports after weekends and holidays.
8. Any accounts unlocked or passwords reset by system administrators without
authorized forms.
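The minimal sketch below (referenced at the start of the list) shows how two of these review points, consecutive login failures and activity outside office hours, might be flagged automatically. The log format, field names and thresholds are assumptions for illustration only.

from datetime import datetime

OFFICE_HOURS = range(9, 18)      # 09:00-17:59 local time
FAILURE_THRESHOLD = 3

def review(events):
    """events: iterable of dicts with 'user', 'timestamp' (ISO 8601) and 'result'."""
    alerts, streak = [], {}
    for e in events:
        ts = datetime.fromisoformat(e["timestamp"])
        if e["result"] == "FAIL":
            streak[e["user"]] = streak.get(e["user"], 0) + 1
            if streak[e["user"]] >= FAILURE_THRESHOLD:
                alerts.append(f"{e['user']}: {streak[e['user']]} consecutive failures at {ts}")
        else:
            streak[e["user"]] = 0
        if ts.hour not in OFFICE_HOURS:
            alerts.append(f"{e['user']}: {e['result']} login event outside office hours at {ts}")
    return alerts

if __name__ == "__main__":
    sample = [
        {"user": "admin", "timestamp": "2018-12-01T02:15:00", "result": "FAIL"},
        {"user": "admin", "timestamp": "2018-12-01T02:15:30", "result": "FAIL"},
        {"user": "admin", "timestamp": "2018-12-01T02:16:00", "result": "FAIL"},
    ]
    for line in review(sample):
        print(line)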

Log Standard
In practice we encounter a variety of log formats and standards across different
systems, whether we work in-house or as consultants. Organizations should
therefore give developers a logging standard or guideline before they design user
administration and audit trail functions, so that the resulting logs satisfy the
security control requirements. A sketch of a log record containing the mandatory
fields follows the lists below.

Functions:-
Search - By date and time, by event type, by criticality, by account/user ID, by
department
Sorting - By date and time, by event type, by criticality, by account/user ID, by
department
Paging (Optional)
Critical event is marked by "*"
Log archive and export
Log code and description table
Highlighting of system and user administrator activities

Mandatory Fields:-
User ID and Name (Sometimes, event may involve the action from administrator)
Activity Date/Timestamp
Activity Code, Type and Description
Terminal IP address and Location
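As a hedged illustration of the mandatory fields above, the sketch below emits one JSON object per event. The field names are illustrative, not a prescribed standard.

import json
from datetime import datetime, timezone

def log_event(user_id, user_name, activity_code, activity_type,
              description, terminal_ip, location, critical=False):
    record = {
        "user_id": user_id,
        "user_name": user_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "activity_code": activity_code,
        "activity_type": activity_type,
        "description": description,
        "terminal_ip": terminal_ip,
        "location": location,
        "critical": critical,        # critical events can be marked and highlighted
    }
    print(json.dumps(record))        # in practice, write to a protected log sink
    return record

if __name__ == "__main__":
    log_event("u1024", "A. Admin", "ACC-RESET", "ADMIN",
              "Password reset without authorised form", "10.0.0.5", "HQ", critical=True)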

User Account List:-
User Info - Name, Department, Role
Last Accessed Time
Account Creation Date/Time
Current Authority and Role
Account authority and information change history
Show expired and inactive accounts (for example: 90 days)
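A small, hypothetical sketch of the last point, flagging accounts that have been inactive for longer than the 90-day example given above (field names are assumptions):

from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)

def inactive_accounts(accounts, now=None):
    now = now or datetime.utcnow()
    return [a for a in accounts
            if now - datetime.fromisoformat(a["last_accessed"]) > INACTIVITY_LIMIT]

if __name__ == "__main__":
    accounts = [
        {"name": "b.lee", "department": "Finance", "role": "clerk",
         "last_accessed": "2018-06-01T08:00:00"},
        {"name": "c.wong", "department": "IT", "role": "operator",
         "last_accessed": "2018-11-20T09:30:00"},
    ]
    for a in inactive_accounts(accounts, now=datetime(2018, 12, 1)):
        print(f"Review/disable inactive account: {a['name']} ({a['department']})")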

Logging and monitoring
Objective: To record events and generate evidence.

12.4.1 Event logging


Control
Event logs recording user activities, exceptions, faults and information security
events should be
produced, kept and regularly reviewed.
Implementation guidance

Event logs should include, when relevant:


a) user IDs;
b) system activities;
c) dates, times and details of key events, e.g. log-on and log-off;
d) device identity or location if possible and system identifier;
e) records of successful and rejected system access attempts;
f) records of successful and rejected data and other resource access attempts;
g) changes to system configuration;
h) use of privileges;
i) use of system utilities and applications;
j) files accessed and the kind of access;
k) network addresses and protocols;
l) alarms raised by the access control system;
m) activation and de-activation of protection systems, such as anti-virus systems and
intrusion detection systems;
n) records of transactions executed by users in applications.
Event logging sets the foundation for automated monitoring systems which are
capable of generating consolidated reports and alerts on system security.

Other information
Event logs can contain sensitive data and personally identifiable information.
Appropriate privacy protection measures should be taken (see 18.1.4).
Where possible, system administrators should not have permission to erase or de-
activate logs of their own activities (see 12.4.3).

12.4.2 Protection of log information


Control
Logging facilities and log information should be protected against tampering and
unauthorized access.

Implementation guidance
Controls should aim to protect against unauthorized changes to log information and
operational problems with the logging facility including:
a) alterations to the message types that are recorded;
b) log files being edited or deleted;
c) storage capacity of the log file media being exceeded, resulting in either the failure
to record events or over-writing of past recorded events.
Some audit logs may be required to be archived as part of the record retention policy
or because of requirements to collect and retain evidence (see 16.1.7).

Other information

System logs often contain a large volume of information, much of which is extraneous
to information security monitoring. To help identify significant events for information
security monitoring purposes, the copying of appropriate message types
automatically to a second log, or the use of suitable system utilities or audit tools to
perform file interrogation and rationalization should be considered. System logs need
to be protected, because if the data can be modified or data in them deleted, their
existence may create a false sense of security. Real-time copying of logs to a system
outside the control of a system administrator or operator can be used to safeguard
logs.
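One common way to make stored log records tamper-evident, complementing the real-time copying described above, is to chain them with cryptographic hashes so that any later edit or deletion breaks the chain. The sketch below is illustrative only and is not prescribed by the standard.

import hashlib, json

def append(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify(chain):
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256((prev_hash + payload).encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

if __name__ == "__main__":
    chain = []
    append(chain, {"event": "logon", "user": "admin"})
    append(chain, {"event": "config_change", "user": "admin"})
    print("chain intact:", verify(chain))        # True
    chain[0]["record"]["user"] = "someone_else"  # simulate tampering
    print("chain intact:", verify(chain))        # False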

12.4.3 Administrator and operator logs


Control
System administrator and system operator activities should be logged and the logs
protected and regularly reviewed.
Implementation guidance
Privileged user account holders may be able to manipulate the logs on information
processing facilities under their direct control, therefore it is necessary to protect and
review the logs to maintain accountability for the privileged users.
Other information
An intrusion detection system managed outside of the control of system and network
administrators can be used to monitor system and network administration activities
for compliance.
12.4.4 Clock synchronisation
Control
The clocks of all relevant information processing systems within an organization or
security domain should be synchronised to a single reference time source.
Implementation guidance
External and internal requirements for time representation, synchronisation and
accuracy should be documented. Such requirements can be legal, regulatory,
contractual requirements, standards
compliance or requirements for internal monitoring. A standard reference time for
use within the organization should be defined. The organization’s approach to
obtaining a reference time from external source(s) and how to synchronise internal
clocks reliably should be documented and implemented.
Other information
The correct setting of computer clocks is important to ensure the accuracy of audit
logs, which may be required for investigations or as evidence in legal or disciplinary
cases. Inaccurate audit logs may hinder such investigations and damage the
credibility of such evidence. A clock linked to a radio time broadcast from a national
atomic clock can be used as the master clock for logging systems. A network time
protocol can be used to keep all of the servers in synchronisation with the master
clock.
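A minimal sketch of checking local clock accuracy against a reference time source, assuming the third-party Python package ntplib is installed and pool.ntp.org is reachable (both are assumptions, not recommendations from the standard):

import ntplib

TOLERANCE_SECONDS = 1.0
REFERENCE_SERVER = "pool.ntp.org"   # example reference source only

def check_clock():
    response = ntplib.NTPClient().request(REFERENCE_SERVER, version=3)
    offset = response.offset        # seconds the local clock differs from the reference
    if abs(offset) > TOLERANCE_SECONDS:
        print(f"WARNING: local clock is off by {offset:.3f}s against {REFERENCE_SERVER}")
    else:
        print(f"Clock within tolerance (offset {offset:.3f}s)")

if __name__ == "__main__":
    check_clock()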

Also read NIST SP 800-92 identifies log reviews section.

“Synthetic monitoring (also known as active monitoring or proactive monitoring) is website
monitoring that is done using a Web browser emulation or scripted recordings of
Web transactions. Behavioral scripts (or paths) are created to simulate an action or path
that a customer or end-user would take on a site.”

Website performance monitoring has two broad types:

• Real User Monitoring (RUM), which passively measures the experience of actual end users
• Synthetic Performance Monitoring, which actively simulates user transactions

Synthetic monitoring (also known as active monitoring or proactive monitoring) is
a monitoring technique that is done by using an emulation or scripted recordings of
transactions. Behavioral scripts (or paths) are created to simulate an action or path
that a customer or end-user would take on a site, application or other software (or
even hardware). Those paths are then continuously monitored at specified intervals
for performance, such as: functionality, availability, and response time measures.
Synthetic monitoring is valuable because it enables a webmaster or
an IT/Operations professional to identify problems and determine if
a website or application is slow or experiencing downtime before that problem
affects actual end-users or customers. This type of monitoring does not require
actual traffic, thus the name synthetic, so it enables companies to test applications
24x7, or test new applications prior to a live customer-facing launch. This is usually
a good complement when used with passive monitoring to help provide visibility on
application health during off peak hours when transaction volume is low.[1]
When combined with traditional APM tools, synthetic monitoring can provide
deeper visibility into end-to-end performance, regardless of where applications are
running.[2]
Because synthetic monitoring is a simulation of typical user behavior or navigation
through a website, it is often best used to monitor commonly trafficked paths and
critical business processes. Synthetic tests must be scripted in advance, so it is not
feasible to measure performance for every permutation of a navigational path an
end-user might take. This is more suited for passive monitoring.
Synthetic testing is useful for measuring uptime, availability and response time of
critical pages and transaction (how a site performs from all geographies) but doesn't
monitor or capture actual end-user interactions, see Website monitoring. This is also
known as Active monitoring that consists of synthetic probes and Web robots to help
report on system availability and predefined business transactions.[3]

Source : https://en.wikipedia.org/wiki/Synthetic_monitoring

Types of Monitoring
Website Monitoring: Website monitoring uses synthetic transactions to perform
HTTP requests to check availability and to measure performance of a web page,
website, or web application.

Database Monitoring: Database monitoring using synthetic transactions monitors the


availability of a database.

TCP Port Monitoring: A TCP port synthetic transaction measures the availability of
your website, service, or application; you can specify the server and TCP port for
Operations Manager to monitor.
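The sketch below, using only the Python standard library, shows what simple synthetic probes for website monitoring (HTTP availability and response time) and TCP port monitoring might look like. The URL, host and thresholds are placeholders. In practice such probes would run at fixed intervals and feed an alerting system.

import socket, time
from urllib.request import urlopen

def probe_http(url, timeout=5):
    start = time.monotonic()
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, time.monotonic() - start
    except Exception:
        return False, time.monotonic() - start

def probe_tcp(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    ok, rt = probe_http("https://example.com/")
    print(f"HTTP probe: {'UP' if ok else 'DOWN'} ({rt:.2f}s)")
    print(f"TCP 443 probe: {'OPEN' if probe_tcp('example.com', 443) else 'CLOSED'}")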

As per ISO 27002:2013 Standard

14.2 Security in development and support processes


Objective: To ensure that information security is designed and implemented within
the development lifecycle of information systems.

14.2.1 Secure development policy


Control
Rules for the development of software and systems should be established and
applied to developments within the organization.

Implementation guidance
Secure development is a requirement to build up a secure service, architecture,
software and system. Within a secure development policy, the following aspects
should be put under consideration:
a) security of the development environment;
b) guidance on the security in the software development lifecycle:
1) security in the software development methodology;
2) secure coding guidelines for each programming language used;
c) security requirements in the design phase;
d) security checkpoints within the project milestones;
e) secure repositories;
f) security in the version control;
g) required application security knowledge;
h) developers’ capability of avoiding, finding and fixing vulnerabilities.

Secure programming techniques should be used both for new developments and in
code re-use scenarios where the standards applied to development may not be
known or were not consistent with current best practices. Secure coding standards
should be considered and where relevant mandated for use. Developers should be
trained in their use and testing and code review should verify their use. If
development is outsourced, the organization should obtain assurance that the
external party complies with these rules for secure development.

Other information
Development may also take place inside applications, such as office applications,
scripting, browsers and databases.

During Planning and Design


While a security review of the architecture and threat modeling are not security
testing methods, they are an important prerequisite for subsequent security testing
efforts, and the security practitioner should be aware of the options available to
them. The following is a consideration of the prerequisites and benefits of
architecture security review and threat modeling:

Architecture security review: A manual review of the product architecture to ensure


that it fulfills the necessary security requirements:
o Prerequisites: Architectural model
o Benefit: Detecting architectural violations of the security standard

Threat modeling: A structured manual analysis of an application specific business


case or usage scenario. This analysis is guided by a set of precompiled security
threats:
o Prerequisites: Business Case or Usage Scenario
o Benefits: Identification of threats, including their impact and potential
countermeasures specific to the development of the software product

These methods help to identify the attack surface and, thus, the most critical
components. This allows the security testing activities to be focused so that they
are as effective as possible.

What is SAST?
Static application security testing (SAST), or static analysis, is a testing methodology that
analyzes source code to find security vulnerabilities that make your organization’s
applications susceptible to attack. SAST scans an application before the code is compiled.
It’s also known as white box testing.

What problems does SAST solve?


SAST takes place very early in the software development life cycle (SDLC) as it does
not require a working application and can take place without code being executed.
It helps developers identify vulnerabilities in the initial stages of development and
quickly resolve issues without breaking builds or passing on vulnerabilities to the
final release of the application. SAST tools give developers real-time feedback as
they code, helping them fix issues before they pass the code to the next phase of
the SDLC. This prevents security-related issues from being considered an
afterthought. SAST tools also provide graphical representations of the issues found,
from source to sink. These help you navigate the code easier. Some tools point out
the exact location of vulnerabilities and highlight the risky code. Tools can also
provide in-depth guidance on how to fix issues and the best place in the code to fix
them, without requiring deep security domain expertise.

Developers can also create the customized reports they need with SAST tools; these
reports can be exported offline and tracked using dashboards. Tracking all the
security issues reported by the tool in an organized way can help developers
remediate these issues promptly and release applications with minimal problems.
This process contributes to the creation of a secure SDLC.
It’s important to note that SAST tools must be run on the application on a regular
basis, such as during daily/monthly builds, every time code is checked in, or during a
code release.

Why is SAST an important security activity?


Developers dramatically outnumber security staff. It can be challenging for an
organization to find the resources to perform code reviews on even a fraction of its
applications. A key strength of SAST tools is the ability to analyze 100% of the
codebase. Additionally, they are much faster than manual secure code reviews
performed by humans. These tools can scan millions of lines of code in a matter of
minutes. SAST tools automatically identify critical vulnerabilities—such as buffer
overflows, SQL injection, cross-site scripting, and others—with high confidence. Thus,
integrating static analysis into the SDLC can yield dramatic results in the overall
quality of the code developed.
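As a toy illustration of the idea of analyzing source code at rest (this is not how commercial SAST products are implemented), the sketch below uses Python's ast module to flag a couple of well-known risky patterns. Real tools apply thousands of rules plus data-flow analysis from source to sink.

import ast

RISKY_CALLS = {"eval", "exec"}

def scan(source, filename="<memory>"):
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in RISKY_CALLS:
                findings.append((filename, node.lineno, f"use of {name}()"))
            if name == "run":  # e.g. subprocess.run(..., shell=True)
                for kw in node.keywords:
                    if kw.arg == "shell" and isinstance(kw.value, ast.Constant) and kw.value.value is True:
                        findings.append((filename, node.lineno, "subprocess call with shell=True"))
    return findings

if __name__ == "__main__":
    sample = ("import subprocess\n"
              "subprocess.run('ls ' + user_input, shell=True)\n"
              "result = eval(user_input)\n")
    for f, line, msg in scan(sample, "sample.py"):
        print(f"{f}:{line}: {msg}")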

What are the key steps to run SAST effectively?


There are six simple steps needed to perform SAST efficiently in organizations that
have a very large number of applications built with different languages, frameworks,
and platforms.
1. Finalize the tool. Select a static analysis tool that can perform code reviews of
applications written in the programming languages you use. The tool should also be
able to comprehend the underlying framework used by your software.
2. Create the scanning infrastructure, and deploy the tool. This step involves handling
the licensing requirements, setting up access control and authorization, and
procuring the resources required (e.g., servers and databases) to deploy the tool.
3. Customize the tool. Fine-tune the tool to suit the needs of the organization. For
example, you might configure it to reduce false positives or find additional security
vulnerabilities by writing new rules or updating existing ones. Integrate the tool into
the build environment, create dashboards for tracking scan results, and build custom
reports.
4. Prioritize and onboard applications. Once the tool is ready, onboard your
applications. If you have a large number of applications, prioritize the high-risk
applications to scan first. Eventually, all your applications should be onboarded and
scanned regularly, with application scans synced with release cycles, daily or monthly
builds, or code check-ins.
5. Analyze scan results. This step involves triaging the results of the scan to remove
false positives. Once the set of issues is finalized, they should be tracked and
provided to the deployment teams for proper and timely remediation.
6. Provide governance and training. Proper governance ensures that your development
teams are employing the scanning tools properly. The software security
touchpoints should be present within the SDLC. SAST should be incorporated as part
of your application development and deployment process.
What tools can be used for SAST?
Synopsys offers the most comprehensive solution for integrating security and quality
into your SDLC and supply chain.

Coverity Static Application Security Testing finds critical defects and security
weaknesses in code as it’s written. It provides full path coverage, ensuring that every
line of code and every potential execution path is tested. Through a deep
understanding of the source code and the underlying frameworks, it provides highly
accurate analysis, so developers don’t waste time on a large volume of false positives.

Coverity scales to accommodate thousands of developers and can analyze projects


with more than 100 million lines of code with ease. It can be rapidly integrated with
critical tools and systems that support the development process, such as source
control management, build and continuous integration, bug tracking, and application
life cycle management (ALM) solutions, as well as IDEs.
SAST in IDE (Code Sight) is a real-time, developer-centric SAST tool. It scans for and
identifies vulnerabilities as developers code. Code Sight integrates into the integrated
development environment (IDE), where it identifies security vulnerabilities and
provides guidance to remediate them.

How is SAST different from DAST?


Organizations are paying more attention to application security, owing to the rising
number of breaches. They want to identify vulnerabilities in their applications and
mitigate risks at an early stage. There are two different types of application security
testing—SAST and dynamic application security testing (DAST). Both testing
methodologies identify security flaws in applications, but they do so differently.

Static Source Code Analysis (SAST) and manual code review:
Analysis of the application source code for finding vulnerabilities without executing
the application:
• Prerequisites: Application source code
• Benefits: Detection of insecure programming, outdated libraries, and
misconfigurations

Static binary code analysis and manual binary review: Analysis of the compiled
application (binary) for finding vulnerabilities without executing the application. In
general, this is like the source
code analysis but is not as precise and fix recommendations typically cannot be
provided.

In black-box testing, the tested system is used as a black box, i.e., no internal details
of the system implementation are used. In contrast, white-box testing takes the
internal system details (e.g., the source code) into account.

Static Testing And Dynamic Testing – Understand the Difference
Software Testing Tutorial | By Meenakshi Agarwal

Static testing and dynamic testing are essential techniques for developers and
testers to use during software development. They are distinct verification and
validation methods, and the organization must decide, after due analysis, which to
practice at each stage of software verification. Since the objective is to get the
maximum benefit from these types of testing, pick the tool best suited to your
needs. In this tutorial, you will learn the pros and cons of each type of testing.

Static Testing And Dynamic Testing


Static Testing:

Static testing does not require executing the code. It can be performed manually or
with a set of tools. This testing type covers analysis of the source code and reviews
of specification documents and design description documents, with the testers
providing review comments on each document reviewed. If the application is not yet
operational and has not implemented the user interface, you can still run a security
analysis and observe it in this runtime-less configuration.

While doing static testing, a tester or developer can look for bugs, buffer overflows,
and potentially vulnerable code in the system. There is no need to wait for the entire
application to be developed; static testing can start in an early phase of the
development lifecycle, with testers reviewing whatever code, scripts, requirements,
test cases or related documents are available at that point in time.

Static Testing Techniques:


1. Inspection:
The principal motive of this type of testing is to identify defects at an early stage of
the software development cycle. The team can start by inspecting any of the artifacts
mentioned earlier, such as the code, the test cases or the product documents.
Inspection requires a moderator to organize the review sessions; since inspection is a
formal type of review, the moderator needs to prepare a checklist of what to go
through and what not.

2. Walkthrough:
Another technique is the walkthrough. It requires the owner of the document to
explain the work done. The attendees can raise their queries, and another person,
allocated as the scribe, records the points in the meeting notes.

3. Technical Reviews:
In this static testing method, the team carries out a technical scrutiny of the code to
check whether it meets the coding guidelines and standards. In general, testing
artifacts such as the test plan, the validation strategy and the automation scripts are
also reviewed in this session.

4. Informal Reviews:
A static testing technique in which the documents are scrutinized informally and the
participants provide informal comments during the meeting.

Dynamic Testing:
Dynamic testing is performed after the application has reached an operational state.
It is executed in a real runtime environment set up by the QA team. While the code
behind the application is running, the tester supplies the required input, waits for
the result, and then compares the actual output with the expected outcome.

That is how testers inspect the functional behavior of the application and track
system RAM and CPU usage, response time, and the performance of the overall
software. Dynamic testing is also known as validation testing, and it can be either
functional or non-functional testing.

Types Of Dynamic Testing Techniques:

1. Unit Testing:
This testing happens mostly at the developer's end. The essential artifacts tested are
the source code of the application's various modules.

2. Integration Testing:
The purpose of this technique is to verify the interfacing between two or more
modules once they are tied together.

3. System Testing:
This testing is done on the entire software with all modules working together.

4. Acceptance Testing:
This testing validates the software from the user's point of view.

In the software development lifecycle, both static testing and dynamic testing are
essential to certify the application's functionality. Each has strengths and
weaknesses that you should be aware of.

Static Testing Vs. Dynamic Testing:


1. Static testing belongs to white box testing and is performed at an early stage of
development, incurring a lower cost; dynamic testing is performed at a later stage of
the development process.
2. Static testing can achieve better line coverage in a short duration; dynamic testing
has lower line coverage because it examines only the smaller part of the code that is
actually executed.
3. Static testing occurs before the application is ready for deployment; dynamic
testing happens after the code is deployed.
4. Static testing is done in the verification stage; dynamic testing is completed in the
validation stage.
5. No execution happens in static testing; dynamic testing requires the code to be
executed.
6. Static testing produces an analysis of the code along with the documentation;
dynamic testing reports the bottlenecks in the application.
7. In static testing, the team prepares a checklist describing the testing process; in
dynamic testing, the test cases are executed.
8. Static testing methods include walkthroughs and code reviews; dynamic testing
mainly covers functional and non-functional validation.

Summary – Static Testing And Dynamic Testing


In any software development methodology, both the verification and validation
processes are carried out to certify that the final software has all requirements
implemented correctly.
Static testing scrutinizes the application code without any execution; it lies under the
umbrella of verification. Testers have multiple static testing techniques available,
such as inspection, walkthrough, and technical and informal reviews.

By contrast, dynamic testing validates the working product; it lies under the umbrella
of validation. The standard dynamic testing techniques are unit testing, integration
testing, system (or stabilization) testing and user acceptance testing. Here, the
product is validated for both functional and non-functional aspects.

Automated vulnerability scanners: Test an application for the use of system
components or configurations that are known to be insecure. For this, predefined
attack patterns are executed and system fingerprints are analyzed:

o Benefits: Detection of well-known vulnerabilities, i.e., detection of outdated
frameworks and misconfigurations

Source Wikipedia:

Fuzzing or fuzz testing is an automated software testing technique that involves


providing invalid, unexpected, or random data as inputs to a computer program.
The program is then monitored for exceptions such as crashes, failing built-in
code assertions, or potential memory leaks. Typically, fuzzers are used to test
programs that take structured inputs. This structure is specified, e.g., in a file
format or protocol and distinguishes valid from invalid input. An effective fuzzer
generates semi-valid inputs that are "valid enough" in that they are not directly
rejected by the parser, but do create unexpected behaviors deeper in the program
and are "invalid enough" to expose corner cases that have not been properly dealt
with.

For the purpose of security, input that crosses a trust boundary is often the most
interesting.[1] For example, it is more important to fuzz code that handles the
upload of a file by any user than it is to fuzz the code that parses a configuration file
that is accessible only to a privileged user.
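A minimal mutation-fuzzing sketch is shown below: randomly mutated inputs are fed to a stand-in parser and anything other than a graceful rejection is recorded as a potential crash. Real fuzzers such as AFL or libFuzzer are far more capable; every name here is illustrative.

import random, json

def target(data: bytes):
    # Stand-in for the code under test: a parser that should reject bad input cleanly.
    json.loads(data.decode("utf-8", errors="strict"))

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 1000):
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except ValueError:
            pass                      # expected: graceful rejection of invalid input
        except Exception as exc:      # anything else is a potential bug worth triaging
            crashes.append((sample, repr(exc)))
    return crashes

if __name__ == "__main__":
    crashes = fuzz(b'{"name": "fuzzing", "ok": true}')
    print(f"{len(crashes)} unexpected failures")
    for sample, error in crashes[:5]:
        print(error, sample[:40])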

The attack surface of a software or hardware environment is the sum of the
different points (the "attack vectors") where an unauthorized user (the "attacker")
can try to enter data to or extract data from an environment. Keeping the attack
surface as small as possible is a basic security measure.

Different security testing methods behave differently when applied to different
application types

Security testing techniques and tools differ in usability (e.g., fix recommendations)
and quality (e.g., false positives rate)

Security testing tools usually only support a limited number of technologies (e.g.,
programming languages), and if a tool supports multiple technologies, it does not
necessarily support all of them equally well

Different tools and methods require different computing power or different manual
efforts

Misuse case is a business process modeling tool used in the software development
industry. The term Misuse Case or mis-use case is derived from and is the inverse
of use case.[1] The term was first used in the 1990s by Guttorm Sindre of
the Norwegian University of Science and Technology, and Andreas L. Opdahl of
the University of Bergen, Norway. It describes the process of executing a malicious
act against a system, while use case can be used to describe any action taken by the
system.[2]

Some misuse cases occur in highly specific situations, whereas others continually
threaten systems. For instance, a car is most likely to be stolen when parked and
unattended; whereas a web server might suffer a denial-of-service attack at any
time. You can develop misuse and use cases recursively, going from system to
subsystem levels or lower as necessary. Lower-level cases can highlight aspects not
considered at higher levels, possibly forcing another analysis. The approach offers
rich possibilities for exploring, understanding, and validating the requirements in
any direction. Drawing the agents and misuse cases explicitly helps to focus the
attention of the security practitioner on the elements of the scenario.

In contrast to a positive test (that determines that a system works as expected, and
with any error fails the test); a negative test is designed to provide evidence of the
application behavior if there is unexpected or invalid data. Any provocation of
application failure is designed to surface in the test rather than once the application
is approved for production. An optimal response for an application to a negative
test is to gracefully reject the unexpected or invalid data without crashing. While
exceptions and error conditions are expected in negative tests
they are not expected in positive tests. It is optimal to combine a range of positive
and negative test to run on an application for thorough examination of behavior.
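A small sketch of pairing a positive test with a negative test using the standard library's unittest module; the function under test is a placeholder.

import unittest

def parse_age(value: str) -> int:
    """Placeholder function under test: accepts digits 0-130, rejects anything else."""
    if not value.isdigit() or not 0 <= int(value) <= 130:
        raise ValueError(f"invalid age: {value!r}")
    return int(value)

class AgeTests(unittest.TestCase):
    def test_positive_valid_input(self):
        self.assertEqual(parse_age("42"), 42)

    def test_negative_invalid_input_is_rejected_gracefully(self):
        # The optimal response is a controlled error, not a crash or silent acceptance.
        with self.assertRaises(ValueError):
            parse_age("42; DROP TABLE users")

if __name__ == "__main__":
    unittest.main()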

Test-Coverage Analyzers
Test-coverage analyzers measure how much of the total program code has been
analyzed. The results can be presented in terms of statement coverage (percentage
of lines of code tested) or branch coverage (percentage of available paths tested).
For large applications, acceptable levels of coverage can be determined in advance
and then compared to the results produced by test-coverage analyzers to accelerate
the testing-and-release process. These tools can also detect if particular lines of
code or branches of logic are not actually able to be reached during program
execution, which is inefficient and a potential security concern. Some SAST tools
incorporate this functionality into their products, but standalone products also
exist.
Since the functionality of analyzing coverage is being incorporated into some of the
other AST tool types, standalone coverage analyzers are mainly for niche use.
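For illustration only, and assuming the third-party coverage.py package is installed, the sketch below measures branch coverage programmatically; in practice coverage is usually run from the command line around a full test suite.

import coverage

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

if __name__ == "__main__":
    cov = coverage.Coverage(branch=True)   # enable branch coverage, not just statements
    cov.start()
    classify(5)                            # only one branch exercised on purpose
    cov.stop()
    cov.report(show_missing=True)          # prints coverage %, highlighting untested lines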

Interface testing involves the testing of the different components of an application,
e.g., software and hardware, in combination. This kind of combination testing is
done to ensure they are working correctly and conforming to the requirements
based on which they were designed and developed. Interface testing is different
from integration testing in that interface testing is done to check whether the
different components of the application or system being developed are in sync with
each other. In technical terms, interface testing helps determine that distinct
functions, such as data transfer between the different elements in the system, are
happening according to the way they were designed to happen.
Interface testing is one of the most important software tests in assuring the quality
of software products. Interface testing is conducted to evaluate whether systems or
components pass data and control correctly to one another. Interface testing is
usually performed by both testing and development teams. Interface testing helps
to determine which application areas are accessed as well as their user-friendliness.

Interface testing can be used to do the following (a hedged test sketch follows this list):

• Check and verify if all the interactions between the application and a server are
executed properly
• Check and verify if errors are being handled properly
• Check what happens if a user interrupts any transaction
• Check what happens if a connection to a web server is reset
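The hedged sketch below (referenced above the list) tests one interface behaviour: that a malformed request to a placeholder endpoint is rejected with a controlled 4xx error rather than a crash. The URL and expected status codes are assumptions.

import unittest
from urllib.request import urlopen, Request
from urllib.error import HTTPError, URLError

BASE_URL = "https://api.example.com"   # placeholder endpoint

class InterfaceErrorHandlingTest(unittest.TestCase):
    def test_malformed_payload_is_rejected_with_4xx(self):
        req = Request(f"{BASE_URL}/orders", data=b"not-json",
                      headers={"Content-Type": "application/json"}, method="POST")
        try:
            with urlopen(req, timeout=5) as resp:
                self.fail(f"malformed payload accepted with status {resp.status}")
        except HTTPError as err:
            # Proper interface behaviour: a controlled 4xx error, not a 5xx crash.
            self.assertTrue(400 <= err.code < 500, f"unexpected status {err.code}")
        except URLError as err:
            self.skipTest(f"placeholder endpoint not reachable: {err}")

if __name__ == "__main__":
    unittest.main()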

10 Types of Application Security Testing Tools: When and How to Use Them
JULY 9, 2018 • SEI BLOG

By Thomas Scanlon

Bugs and weaknesses in software are common: 84 percent of software breaches


exploit vulnerabilities at the application layer. The prevalence of software-related
problems is a key motivation for using application security testing (AST) tools. With
a growing number of application security testing tools available, it can be confusing
for information technology (IT) leaders, developers, and engineers to know which
tools address which issues. This blog post, the first in a series on application
security testing tools, will help to navigate the sea of offerings by categorizing the
different types of AST tools available and providing guidance on how and when to
use each class of tool.

See the second post in this series, Decision-Making Factors for Selecting Application
Security Testing Tools.

Application security is not a simple binary choice, whereby you either have security
or you don't. Application security is more of a sliding scale where providing additional
security layers helps reduce the risk of an incident, hopefully to an acceptable level of
risk for the organization. Thus, application-security testing reduces risk in
applications, but cannot completely eliminate it. Steps can be taken, however, to
remove those risks that are easiest to remove and to harden the software in use.

The major motivation for using AST tools is that manual code reviews and traditional
test plans are time consuming, and new vulnerabilities are continually being
introduced or discovered. In many domains, there are regulatory and compliance
directives that mandate the use of AST tools. Moreover--and perhaps most
importantly--individuals and groups intent on compromising systems use tools too,
and those charged with protecting those systems must keep pace with their
adversaries.

There are many benefits to using AST tools, which increase the speed, efficiency, and
coverage paths for testing applications. The tests they conduct are repeatable and
scale well--once a test case is developed in a tool, it can be executed against many
lines of code with little incremental cost. AST tools are effective at finding known
vulnerabilities, issues, and weaknesses, and they enable users to triage and classify
their findings. They can also be used in the remediation workflow, particularly in
verification, and they can be used to correlate and identify trends and patterns.

Guide to Application Security Testing Tools


This graphic depicts classes or categories of application security testing tools. The
boundaries are blurred at times, as particular products can perform elements of
multiple categories, but these are roughly the classes of tools within this domain.
There is a rough hierarchy in that the tools at the bottom of the pyramid are
foundational and as proficiency is gained with them, organizations may look to use
some of the more progressive methods higher in the pyramid.

Static Application Security Testing (SAST)


SAST tools can be thought of as white-hat or white-box testing, where the tester
knows information about the system or software being tested, including an
architecture diagram, access to source code, etc. SAST tools examine source code (at
rest) to detect and report weaknesses that can lead to security vulnerabilities.
Source-code analyzers can run on non-compiled code to check for defects such as
numerical errors, input validation, race conditions, path traversals, pointers and
references, and more. Binary and byte-code analyzers do the same on built and
compiled code. Some tools run on source code only, some on compiled code only,
and some on both.

Dynamic Application Security Testing (DAST)
In contrast to SAST tools, DAST tools can be thought of as black-hat or black-box
testing, where the tester has no prior knowledge of the system. They detect
conditions that indicate a security vulnerability in an application in its running state.
DAST tools run on operating code to detect issues with interfaces, requests,
responses, scripting (i.e., JavaScript), data injection, sessions, authentication, and
more.
DAST tools employ fuzzing: throwing known invalid and unexpected test cases at an
application, often in large volume.

Origin Analysis/Software Composition Analysis (SCA)


Software-governance processes that depend on manual inspection are prone to
failure. SCA tools examine software to determine the origins of all components and
libraries within the software. These tools are highly effective at identifying and
finding vulnerabilities in common and popular components, particularly open-source
components. They do not, however, detect vulnerabilities for in-house custom
developed components.

SCA tools are most effective in finding common and popular libraries and
components, particularly open-source pieces. They work by comparing known
modules found in code to a list of known vulnerabilities. The SCA tools find
components that have known and documented vulnerabilities and will often advise if
components are out of date or have patches available.
To make this comparison, almost all SCA tools use the NIST National Vulnerability
Database Common Vulnerabilities and Exposures (CVEs) as a source for known
vulnerabilities. Many commercial SCA products also use the VulnDB commercial
vulnerability database as a source, as well as some other public and proprietary
sources. SCA tools can run on source code, byte code, binary code, or some
combination.
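As a minimal illustration of the comparison SCA tools perform, the sketch below inventories installed Python packages and checks them against a small, locally maintained advisory list. Real SCA tools query the NVD CVE feed and commercial databases; the mapping shown here is illustrative only.

from importlib import metadata

# Illustrative advisory data keyed by (package name, affected version)
KNOWN_VULNERABLE = {
    ("requests", "2.5.3"): "CVE-2015-2296",
    ("pyyaml", "3.12"): "CVE-2017-18342",
}

def scan_installed():
    findings = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        advisory = KNOWN_VULNERABLE.get((name, dist.version))
        if advisory:
            findings.append((name, dist.version, advisory))
    return findings

if __name__ == "__main__":
    findings = scan_installed()
    if not findings:
        print("No components matched the local advisory list.")
    for name, version, advisory in findings:
        print(f"{name} {version}: known vulnerability {advisory} - update or mitigate")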

Database Security Scanning


The SQL Slammer worm of 2003 exploited a known vulnerability in a database-
management system that had a patch released more than one year before the attack.
Although databases are not always considered part of an application, application
developers often rely heavily on the database, and applications can often heavily
affect databases. Database-security-scanning tools check for updated patches and
versions, weak passwords, configuration errors, access control list (ACL) issues, and
more. Some tools can mine logs looking for irregular patterns or actions, such as
excessive administrative actions.

Database scanners generally run on the static data that is at rest while the database-
management system is operating. Some scanners can monitor data that is in transit.

Interactive Application Security Testing (IAST) and Hybrid Tools


Hybrid approaches have been available for a long time, but more recently have been
categorized and discussed using the term IAST. IAST tools use a combination of static
and dynamic analysis techniques. They can test whether known vulnerabilities in
code are actually exploitable in the running application.
IAST tools use knowledge of application flow and data flow to create advanced attack
scenarios and use dynamic analysis results recursively: as a dynamic scan is being
performed, the tool will learn things about the application based on how it responds
to test cases. Some tools will use this knowledge to create additional test cases,
which then could yield more knowledge for more test cases and so on. IAST tools are
adept at reducing the number of false positives, and work well in Agile and DevOps
environments where traditional stand-alone DAST and SAST tools can be too time
intensive for the development cycle.

Mobile Application Security Testing (MAST)


The Open Web Application Security Project (OWASP) listed the top 10 mobile risks in
2016 as:
1. improper platform usage
2. insecure data storage
3. insecure communication
4. insecure authentication
5. insufficient cryptography
6. insecure authorization
7. client code quality
8. code tampering
9. reverse engineering
10. extraneous functionality
MAST Tools are a blend of static, dynamic, and forensics analysis. They perform some
of the same functions as traditional static and dynamic analyzers but enable mobile
code to be run through many of those analyzers as well. MAST tools have specialized
features that focus on issues specific to mobile applications, such as jail-breaking or
rooting of the device, spoofed WI-FI connections, handling and validation of
certificates, prevention of data leakage, and more.

Application Security Testing as a Service (ASTaaS)


As the name suggests, with ASTaaS, you pay someone to perform security testing on
your application. The service will usually be a combination of static and dynamic
analysis, penetration testing, testing of application programming interfaces (APIs),
risk assessments, and more. ASTaaS can be used on traditional applications,
especially mobile and web apps.
Momentum for the use of ASTaaS is coming from use of cloud applications, where
resources for testing are easier to marshal. Worldwide spending on public cloud
computing is projected to increase from $67B in 2015 to $162B in 2020.

Correlation Tools
Dealing with false positives is a big issue in application security testing. Correlation
tools can help reduce some of the noise by providing a central repository for findings
from others AST tools.
Different AST tools will have different findings, so correlation tools correlate and
analyze results from different AST tools and help with validation and prioritization of
findings, including remediation workflows. Whereas some correlation tools include
code scanners, they are useful mainly for importing findings from other tools.

Test-Coverage Analyzers
Test-coverage analyzers measure how much of the total program code has been
analyzed. The results can be presented in terms of statement coverage (percentage
of lines of code tested) or branch coverage (percentage of available paths tested).
For large applications, acceptable levels of coverage can be determined in advance
and then compared to the results produced by test-coverage analyzers to accelerate
the testing-and-release process. These tools can also detect if particular lines of code
or branches of logic are not actually able to be reached during program execution,
which is inefficient and a potential security concern. Some SAST tools incorporate this
functionality into their products, but standalone products also exist.
Since the functionality of analyzing coverage is being incorporated into some of the
other AST tool types, standalone coverage analyzers are mainly for niche use.

Application Security Testing Orchestration (ASTO)


ASTO integrates security tooling across a software development lifecycle (SDLC).
While the term ASTO is newly coined by Gartner since this is an emerging field, there
are tools that have been doing ASTO already, mainly those created by correlation-tool
vendors. The idea of ASTO is to have central, coordinated management and reporting
of all the different AST tools running in an ecosystem. It is still too early to know if the
term and product lines will endure, but as automated testing becomes more
ubiquitous, ASTO does fill a need.

Selecting Testing Tool Types


There are many factors to consider when selecting from among these different types
of AST tools. If you are wondering how to begin, the biggest decision you will make is
to get started by beginning using the tools. According to a 2013 Microsoft security
study, 76 percent of U.S. developers use no secure application-program process and
more than 40 percent of software developers globally said that security wasn't a top
priority for them. Our strongest recommendation is that you exclude yourself from
these percentages.
There are factors that will help you to decide which type of AST tools to use and to
determine which products within an AST tool class to use. It is important to note,
however, that no single tool will solve all problems. As stated above, security is not
binary; the goal is to reduce risk and exposure.
Before looking at specific AST products, the first step is to determine which type of
AST tool is appropriate for your application. Until your application software testing
grows in sophistication, most tooling will be done using AST tools from the base of
the pyramid, shown in blue in the figure below. These are the most mature AST tools
that address most common weaknesses.
After you gain proficiency and experience, you can consider adding some of the
second-level approaches shown below in blue. For instance, many testing tools for
mobile platforms provide frameworks for you to write custom scripts for testing.
Having some experience with traditional DAST tools will allow you to write better test
scripts. Likewise, if you have experience with all the classes of tools at the base of the
pyramid, you will be better positioned to negotiate the terms and features of an
ASTaaS contract.

The decision to employ tools in the top three boxes in the pyramid is dictated as
much by management and resource concerns as by technical considerations.
If you are able to implement only one AST tool, here are some guidelines for which
type of tool to choose:
If the application is written in-house or you have access to the source code, a good
starting point is to run a static application security tool (SAST) and check for coding
issues and adherence to coding standards. In fact, SAST is the most common starting
point for initial code analysis.
If the application is not written in house or you otherwise don't have access to the
source code, dynamic application security testing (DAST) is the best choice.
Whether you have access to the source code or not, if a lot of third-party and open-
source components are known to be used in the application, then origin
analysis/software composition analysis (SCA) tools are the best choice. Ideally, SCA
tools are run alongside SAST and/or DAST tools, but if resources only allow for
implementation of one tool, SCA tools are imperative for applications with 3rd party
components because they will check for vulnerabilities that are already widely
known.

Wrapping Up and Looking Ahead

In the long run, incorporating AST tools into the development process should save
time and effort on re-work by catching issues earlier. In practice, however,
implementing AST tools requires some initial investment of time and resources. Our
guidance presented above is intended to help you select an appropriate starting
point. After you begin using AST tools, they can produce lots of results, and someone
must manage and act on them.
As you analyze the results with one tool, it may become desirable to introduce
additional tools into your environment. As a reference example, the graphic below
depicts how many classes of tools could be effectively deployed in a continuous
integration and continuous delivery (CI/CD) development process. It is not intended
that all these tools be introduced at once into environment. This graphic shows
where certain classes of tools fit in to help you make decisions and to provide a
roadmap for where you can get to eventually.

These tools also have many knobs and buttons for calibrating the output, but it takes
time to set them at a desirable level. Both false positives and false negatives can be
troublesome if the tools are not set correctly.
In the next post in this series, I will consider these decision factors in greater detail
and present guidance in the form of lists that can easily be scanned and used as
checklists by those responsible for application security testing.

Additional Resources
Read the second post in this series, Decision-Making Factors for Selecting Application
Security Testing Tools.
Learn about the National Institute of Standards and Technology (NIST) Software
Assurance Metrics and Tool Evaluation (SAMATE) Project.
Learn about the Open Web Application Security Project (OWASP).
Learn about the SANS Institute.
Access and download the software, tools, and methods that the SEI creates, tests,
refines, and disseminates.
Review the Department of Homeland Security (DHS) Build Security In website.

Account management supports organizational and mission or business functions by:
• Assigning account managers for information system accounts.
• Establishing conditions for group or role membership.
• Specifying authorized users of information systems.
• Requiring approval for creating, enabling, modifying, disabling, and removing accounts.
• Monitoring the use of information system accounts.
• Notifying account managers when account access is no longer needed.
• Reviewing accounts for compliance with account management requirements.
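
Parts of this activity can be automated. The following is a minimal sketch, not tied to any particular directory or IAM product, that reviews an assumed CSV export of accounts with hypothetical "username" and "last_login" columns and flags accounts inactive for more than 90 days so an account manager can review or disable them.

import csv
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)   # review threshold chosen for illustration

def stale_accounts(path="accounts.csv", now=None):
    # Flag accounts whose assumed "last_login" (ISO date) is older than the limit.
    now = now or datetime.utcnow()
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            last_login = datetime.fromisoformat(row["last_login"])
            if now - last_login > INACTIVITY_LIMIT:
                flagged.append(row["username"])
    return flagged

if __name__ == "__main__":
    for user in stale_accounts():
        print("Flag for review or disable:", user)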

45
9.3 Management review
Top management shall review the organization’s information security management
system at planned intervals to ensure its continuing suitability, adequacy and
effectiveness.
The management review shall include consideration of:
a) the status of actions from previous management reviews;
b) changes in external and internal issues that are relevant to the information
security management system;
c) feedback on the information security performance, including trends in:
1) nonconformities and corrective actions;
2) monitoring and measurement results;
3) audit results; and
4) fulfilment of information security objectives;
d) feedback from interested parties;
e) results of risk assessment and status of risk treatment plan; and
f) opportunities for continual improvement.

The outputs of the management review shall include decisions related to continual improvement opportunities and any needs for changes to the information security
management system.
The organization shall retain documented information as evidence of the results of
management reviews.

46
Download this document and read it end to end to understand KPIs and KRIs.

https://www.coso.org/Documents/COSO-KRI-Paper-Full-FINAL-for-Web-Posting-Dec110-000.pdf

It is important to distinguish key performance indicators (KPIs) from key risk indicators
(KRIs). Both management and boards regularly review summary data that include selected
KPIs designed to provide a high-level overview of the performance of the organization and
its major operating units. These reports often are focused almost exclusively on the
historical performance of the organization and its key units and operations. For example,
reports often highlight monthly, quarterly, and year-to-date sales trends, customer
shipments, delinquencies, and other performance data points relevant to the organization.
It is important to recognize that these measures may not provide an adequate “early
warning indicator” of a developing risk because they mostly focus on results that have
already occurred.

While KPIs are important to the successful management of an organization by identifying underperforming aspects of the enterprise as well as those aspects of the business that merit increased resources and energy, senior management and boards also benefit from a set of KRIs that provide timely leading-indicator information about emerging risks.

47
Measures of events or trigger points that might signal issues developing internally within the
operations of the organization or potential risks emerging from external events, such as
macroeconomic shifts that affect the demand for the organization’s products or services,
may provide rich information for management and boards to consider as they execute the
strategies of the organization.

Key risk indicators are metrics used by organizations to provide an early signal of increasing
risk exposures in various areas of the enterprise. In some instances, they may represent key
ratios that management throughout the organization track as indicators of evolving risks, and
potential opportunities, which signal the need for actions that need to be taken. Others may
be more elaborate and involve the aggregation of several individual risk indicators into a
multi-dimensional score about emerging events that may lead to new risks or opportunities.
An example related to the oversight of accounts receivable collection helps illustrate the
difference in KPIs and KRIs. A key performance indicator for customer credit is likely to
include data about customer delinquencies and write-offs. This key performance indicator,
while important, provides insights about a risk event that has already occurred (e.g., a
customer failed to pay in accordance with the sales agreement or contract). A KRI could be
developed to help anticipate potential future customer collection issues so that the credit
function could be more proactive in addressing customer payment trends before risk events
occur. A relevant KRI for this example might be analysis of reported financial results of the
company’s 25 largest customers or general collection challenges throughout the industry to
see what trends might be emerging among customers that could potentially signal challenges
related to collection efforts in future periods.

Developing Effective Key Risk Indicators

A goal of developing an effective set of KRIs is to
identify relevant metrics that provide useful insights about potential risks that may have an
impact on the achievement of the organization’s objectives. Therefore, the selection and
design of effective KRIs starts with a firm grasp of organizational objectives and risk-related
events that might affect the achievement of those objectives. Linkage of top risks to core
strategies helps pinpoint the most relevant information that might serve as an effective
leading indicator of an emerging risk. In the simple illustration below, management has an
objective to achieve greater profitability by increasing revenues and decreasing costs. They
have identified four strategic initiatives that are critical to accomplishing those objectives.
Several potential risks have been identified that may have an impact on one or more of four
key strategic initiatives. Mapping key risks to core strategic initiatives puts management in a
position to begin identifying the most critical metrics that can serve as leading key risk
indicators to help them oversee the execution of core strategic initiatives. As shown below,
KRIs have been identified for each critical risk. Mapping KRIs to critical risks and core
strategies reduces the likelihood that management becomes distracted by other information
that may be less relevant to the achievement of enterprise objectives.
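
As a purely illustrative sketch of the KPI/KRI distinction, the snippet below computes a simple leading indicator from hypothetical "average days to pay" histories for key customers: the share of customers whose payment timing is worsening. The data, window, and threshold are invented for illustration; a lagging KPI, by contrast, would count delinquencies or write-offs that have already occurred.

# Hypothetical monthly "average days to pay" history for each key customer.
payment_history = {
    "Customer A": [32, 35, 41, 48],
    "Customer B": [30, 29, 31, 30],
    "Customer C": [45, 52, 58, 66],
}

def kri_worsening_payers(history, window=3, threshold_days=5):
    # KRI: fraction of key customers whose days-to-pay worsened by more than
    # threshold_days over the last `window` observations -- a leading signal,
    # unlike a lagging KPI such as delinquencies or write-offs already booked.
    worsening = 0
    for months in history.values():
        recent = months[-window:]
        if recent[-1] - recent[0] > threshold_days:
            worsening += 1
    return worsening / len(history)

print("KRI - share of key customers with a worsening payment trend:",
      format(kri_worsening_payers(payment_history), ".0%"))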

47
12.3 Backup
Objective: To protect against loss of data.
12.3.1 Information backup
Control
Backup copies of information, software and system images should be taken and
tested regularly in accordance with an agreed backup policy.
Implementation guidance
A backup policy should be established to define the organization’s requirements for
backup of information, software and systems. The backup policy should define the
retention and protection requirements.

Adequate backup facilities should be provided to ensure that all essential information and software can be recovered following a disaster or media failure.

When designing a backup plan, the following items should be taken into
consideration:
a) accurate and complete records of the backup copies and documented restoration
procedures should be produced;
b) the extent (e.g. full or differential backup) and frequency of backups should reflect
the business requirements of the organization, the security requirements of the
information involved and the
criticality of the information to the continued operation of the organization;
c) the backups should be stored in a remote location, at a sufficient distance to
escape any damage from a disaster at the main site;
d) backup information should be given an appropriate level of physical and
environmental protection (see Clause 11) consistent with the standards applied at
the main site;
e) backup media should be regularly tested to ensure that they can be relied upon for
emergency use when necessary; this should be combined with a test of the
restoration procedures and checked against the restoration time required. Testing the
ability to restore backed-up data should be performed onto dedicated test media, not
by overwriting the original media in case the backup or restoration process fails and
causes irreparable data damage or loss;
f) in situations where confidentiality is of importance, backups should be protected by means of encryption.

Operational procedures should monitor the execution of backups and address failures of scheduled backups to ensure completeness of backups according to the backup policy. Backup arrangements for individual systems and services should be regularly tested to ensure that they meet the requirements of business continuity plans. In the case of critical systems and services, backup arrangements should cover all systems information, applications and data necessary to recover the complete system in the event of a disaster. The retention period for essential business information should be determined, taking into account any requirement for archive copies to be permanently retained.
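
Item e) above calls for regularly testing backup media and restoration procedures against dedicated test media. The sketch below illustrates the idea in minimal form: it "restores" a backup file to a separate test directory and verifies integrity with a checksum comparison. The file copy stands in for whatever restore job your backup software actually performs, so treat it as an assumption, not a real restore procedure.

import hashlib
import shutil
from pathlib import Path

def sha256(path):
    # Stream the file so large backups do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def test_restore(backup_file, restore_dir="restore_test"):
    # Restore to a dedicated test location, never over the original media.
    target_dir = Path(restore_dir)
    target_dir.mkdir(exist_ok=True)
    restored = target_dir / Path(backup_file).name
    shutil.copy2(backup_file, restored)   # stand-in for the real restore job
    ok = sha256(backup_file) == sha256(restored)
    print(backup_file, "restore verified" if ok else "FAILED checksum comparison")
    return ok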

48
Training is defined in NIST Special Publication 800-16 as follows: “The ‘Training’
level of the learning continuum strives to produce relevant and needed security
skills and competencies by practitioners of functional specialties other than IT
security (e.g., management, systems design and development, acquisition,
auditing).” The most significant difference between training and awareness is that
training seeks to teach skills, which allow a person to perform a specific function,
while awareness seeks to focus an individual’s attention on an issue or set of issues.
The skills acquired during training are built upon the awareness foundation, in
particular, upon the security basics and literacy material. A training curriculum does not necessarily lead to a formal degree from an institution of higher learning; however, a training course may contain much of the same material found in a course that a college or university includes in a certificate or degree program.

An example of training is an IT security course for system administrators, which should address in detail the management controls, operational controls, and technical controls that should be implemented. Management controls include policy, IT security program management, risk management, and life-cycle security.

49
Operational controls include personnel and user issues, contingency planning,
incident handling, awareness and training, computer support and operations, and
physical and environmental security issues. Technical controls include identification
and authentication, logical access controls, audit trails, and cryptography. (See NIST
Special Publication 800-12, An Introduction to Computer Security: The NIST Handbook,
for in-depth discussion of these controls
(http://csrc.nist.gov/publications/nistpubs/index.html).)

Please download and review


https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-50.pdf

Executive management: Organizational leaders need to fully understand directives and laws that form the basis for the security program. They also need to comprehend their leadership roles in ensuring full compliance by users within their units.

Security personnel (security program managers and security officers): These individuals act as expert consultants for their organization; therefore, they must be well educated on security policy and accepted best practices.

System owners: Owners must have a broad understanding of security policy and a
high degree of understanding regarding security controls and requirements
applicable to the systems they manage.

System administrators and IT support personnel: Entrusted with a high degree of authority over support operations critical to a successful security program, these individuals need a higher degree of technical knowledge in effective security practices and implementation.

Operational managers and system users: These individuals need a high degree of
security awareness and training on security controls and rules of behavior for systems
they use to conduct business operations.

49
We will cover this in detail in later domains, as it was covered to some extent in the earlier domains. We will also cover the ISO 22301 Lead Implementation Course, inshAllah.

50
14.3.1 Protection of test data

Control
Test data should be selected carefully, protected and controlled.

Implementation guidance
The use of operational data containing personally identifiable information or any
other confidential information for testing purposes should be avoided. If personally
identifiable information or otherwise confidential information is used for testing
purposes, all sensitive details and content should be protected by removal or
modification (see ISO/IEC 29101[26]).
The following guidelines should be applied to protect operational data, when used
for testing purposes:
a) the access control procedures, which apply to operational application systems,
should also apply to test application systems;
b) there should be separate authorization each time operational information is
copied to a test environment;
c) operational information should be erased from a test environment immediately after the testing is complete;
d) the copying and use of operational information should be logged to provide an
audit trail.

Other information
System and acceptance testing usually requires substantial volumes of test data that
are as close as possible to operational data.
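
A minimal sketch of the guidance above, assuming operational data arrives as a CSV with hypothetical PII columns ("name", "email", "national_id"): sensitive values are replaced with one-way pseudonyms before the data reaches the test environment, and the copy is logged to provide the audit trail required by guideline d).

import csv
import hashlib
import logging

logging.basicConfig(filename="test_data_copy.log", level=logging.INFO)

def pseudonymize(value, salt="test-env"):
    # One-way hash so the real identifier never reaches the test environment.
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:12]

def mask_for_test(src="customers_prod.csv", dst="customers_test.csv",
                  pii_fields=("name", "email", "national_id")):
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for field in pii_fields:
                if field in row:
                    row[field] = pseudonymize(row[field])
            writer.writerow(row)
    # Log the copy to provide the audit trail called for in guideline d).
    logging.info("Copied %s to %s with PII fields masked", src, dst)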

51
Review this link. Joseph Kirkpatrick is an old friend of mine, and he will be helping us achieve ISO 27001 certification, along with SOC 1, 2, and 3, down the line once we open our other businesses.

https://kirkpatrickprice.com/video/soc-1-vs-soc-2-vs-soc-3/

What’s The Difference Between SOC 1, SOC 2, and SOC 3?


August 16, 2017, by Joseph Kirkpatrick

When it comes to SOC (Service Organization Control) reports, there are three different report types: SOC 1, SOC 2, and SOC 3.
When considering which report fits your organization’s needs, you must first
understand what your clients require of you and then consider the areas of internal
control over financial reporting (ICFR), the Trust Services Principles, and restricted
use.
SOC 1 vs. SOC 2 vs. SOC 3

What Is a SOC 1 Report?


SOC 1 engagements are based on the SSAE 18 standard and report on the
effectiveness of internal controls at a service organization that may be relevant to
their client’s internal control over financial reporting (ICFR).

52
What Is a SOC 2 Report?
A SOC 2 audit evaluates internal controls, policies, and procedures that directly relate
to the security of a system at a service organization. The SOC 2 report was designed
to determine if service organizations are compliant with the principles of security,
availability, processing integrity, confidentiality, and privacy, also known as the Trust
Services Principles. These principles address internal controls unrelated to ICFR.

What Is a SOC 3 Report?


A SOC 3 report, just like a SOC 2, is based on the Trust Services Principles, but there’s
a major difference between these types of reports: restricted use. A SOC 3 report can
be freely distributed, whereas a SOC 1 or SOC 2 can only be read by the user
organizations that rely on your services. A SOC 3 does not give a description of the
service organization’s system, but can provide interested parties with the auditor’s
report on whether an entity maintained effective controls over its systems as it
relates to the Trust Services Principles.
When trying to determine whether your service organization needs a SOC 1, SOC 2,
or SOC 3, keep these requirements in mind:
• Could your service organization affect a client’s financial reporting? A SOC 1 would apply to you.
• Does your service organization want to be evaluated on the Trust Services Principles? SOC 2 and SOC 3 reports would work.
• Does restricted use affect your decision? SOC 1 and SOC 2 reports can only be read by the user organizations that rely on your services; a SOC 3 report can be freely distributed and used in many different applications.
Each of these reports must be issued by a licensed CPA firm, such as KirkpatrickPrice.
We offer SOC 1, SOC 2, and SOC 3 engagements. To learn more about
KirkpatrickPrice’s SOC services, contact us today using the form below.

52
53
54
1
2
“Synthetic monitoring (also known as active monitoring or proactive monitoring) is website
monitoring that is done using a Web browser emulation or scripted recordings of
Web transactions. Behavioral scripts (or paths) are created to simulate an action or path
that a customer or end-user would take on a site.”

Website monitoring has two types:


• Real User Monitoring (RUM)
• Synthetic Performance Monitoring

3
Synthetic monitoring (also known as active monitoring or proactive
monitoring) is a monitoring technique that is done by using an emulation or
scripted recordings of transactions. Behavioral scripts (or paths) are created to
simulate an action or path that a customer or end-user would take on a site,
application or other software (or even hardware). Those paths are then
continuously monitored at specified intervals for performance, such as:
functionality, availability, and response time measures.
Synthetic monitoring is valuable because it enables a webmaster or
an IT/Operations professional to identify problems and determine if
a website or application is slow or experiencing downtime before that
problem affects actual end-users or customers. This type of monitoring does
not require actual traffic, thus the name synthetic, so it enables companies to
test applications 24x7, or test new applications prior to a live customer-facing
launch. This is usually a good complement when used with passive
monitoring to help provide visibility on application health during off peak
hours when transaction volume is low.[1]
When combined with traditional APM tools, synthetic monitoring can provide
deeper visibility into end-to-end performance, regardless of where applications
are running.[2]
Because synthetic monitoring is a simulation of typical user behavior or
navigation through a website, it is often best used to monitor commonly
trafficked paths and critical business processes. Synthetic tests must be scripted
in advance, so it is not feasible to measure performance for every permutation
of a navigational path an end-user might take. This is more suited for passive
monitoring.
Synthetic testing is useful for measuring uptime, availability and response time
of critical pages and transaction (how a site performs from all geographies) but
doesn't monitor or capture actual end-user interactions, see Website
monitoring. This is also known as Active monitoring that consists of synthetic
probes and Web robots to help report on system availability and predefined
business transactions.[3]

Source : https://en.wikipedia.org/wiki/Synthetic_monitoring

Types of Monitoring
Website Monitoring: Website monitoring uses synthetic transactions to
perform HTTP requests to check availability and to measure performance of a
web page, website, or web application.

Database Monitoring: Database monitoring using synthetic transactions monitors the availability of a database.

TCP Port Monitoring: A TCP port synthetic transaction measures the availability
of your website, service, or application; you can specify the server and TCP port
for
Operations Manager to monitor.
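
A minimal sketch of a synthetic website-monitoring probe using only the Python standard library: it requests a placeholder URL at a fixed interval, measures response time, and reports the transaction as OK, degraded, or down. The URL, interval, and threshold are assumptions to be replaced with your own behavioral script.

import time
import urllib.request
from urllib.error import URLError

URL = "https://example.com/"        # placeholder target to monitor
INTERVAL_SECONDS = 300              # probe every five minutes
RESPONSE_TIME_LIMIT = 2.0           # seconds considered acceptable

def probe(url=URL):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            elapsed = time.monotonic() - start
            status = response.status
    except URLError as exc:
        print("DOWN:", url, "unreachable:", exc.reason)
        return
    if status != 200 or elapsed > RESPONSE_TIME_LIMIT:
        print("DEGRADED:", url, "status", status, "response time %.2fs" % elapsed)
    else:
        print("OK:", url, "response time %.2fs" % elapsed)

if __name__ == "__main__":
    while True:          # continuously monitor at the specified interval
        probe()
        time.sleep(INTERVAL_SECONDS)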

4
As per ISO 27002:2013 Standard

14.2 Security in development and support processes


Objective: To ensure that information security is designed and implemented
within the development lifecycle of information systems.

14.2.1 Secure development policy


Control
Rules for the development of software and systems should be established and
applied to developments within the organization.

Implementation guidance
Secure development is a requirement to build up a secure service,
architecture, software and system. Within a secure development policy, the
following aspects should be put under consideration:
a) security of the development environment;
b) guidance on the security in the software development lifecycle:
1) security in the software development methodology;
2) secure coding guidelines for each programming language used;
c) security requirements in the design phase;
d) security checkpoints within the project milestones;
e) secure repositories;
f) security in the version control;
g) required application security knowledge;
h) developers’ capability of avoiding, finding and fixing vulnerabilities.

Secure programming techniques should be used both for new developments and in code re-use scenarios where the standards applied to development may
not be known or were not consistent with current best practices. Secure coding
standards should be considered and where relevant mandated for use.
Developers should be trained in their use and testing and code review should
verify their use. If development is outsourced, the organization should obtain
assurance that the external party complies with these rules for secure
development.

Other information
Development may also take place inside applications, such as office
applications, scripting, browsers and databases.
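
As one concrete example of the kind of rule a secure coding guideline might mandate, the sketch below contrasts an injectable query built by string concatenation with a parameterized query, using Python's built-in sqlite3 module as a neutral stand-in for whatever database driver a project actually uses.

import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE users (id INTEGER, name TEXT)")
connection.execute("INSERT INTO users VALUES (1, 'alice')")

user_supplied = "alice' OR '1'='1"   # hostile input a tester might try

# Pattern a secure coding guideline would prohibit: building the query by
# string concatenation leaves it open to SQL injection.
insecure_query = "SELECT id FROM users WHERE name = '" + user_supplied + "'"

# Pattern the guideline would mandate: a parameterized query, where the
# driver keeps the hostile input as data rather than executable SQL.
secure_query = "SELECT id FROM users WHERE name = ?"
rows = connection.execute(secure_query, (user_supplied,)).fetchall()
print(rows)   # [] -- the hostile input matches no real user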

During Planning and Design


While a security review of the architecture and threat modeling are not
security testing methods, they are an important prerequisite for subsequent
security testing efforts, and the security practitioner should be aware of the
options available to them. The following is a consideration of the prerequisites
and benefits of architecture security review and threat modeling:

Architecture security review: A manual review of the product architecture to ensure that it fulfills the necessary security requirements:
o Prerequisites: Architectural model
o Benefit: Detecting architectural violations of the security standard

Threat modeling: A structured manual analysis of an application-specific business case or usage scenario. This analysis is guided by a set of precompiled security threats:

5
o Prerequisites: Business Case or Usage Scenario
o Benefits: Identification of threats, including their impact and potential
countermeasures specific to the development of the software product

These methods help to identify the attack surface and, thus, the most critical
components. This allows a focusing of the security testing activities to ensure
they are as effective as possible.

5
What is SAST?
Static application security testing (SAST), or static analysis, is a testing methodology that
analyzes source code to find security vulnerabilities that make your organization’s
applications susceptible to attack. SAST scans an application before the code is compiled.
It’s also known as white box testing.

What problems does SAST solve?


SAST takes place very early in the software development life cycle (SDLC) as it
does not require a working application and can take place without code being
executed. It helps developers identify vulnerabilities in the initial stages of
development and quickly resolve issues without breaking builds or passing on
vulnerabilities to the final release of the application. SAST tools give
developers real-time feedback as they code, helping them fix issues before
they pass the code to the next phase of the SDLC. This prevents security-
related issues from being considered an afterthought. SAST tools also provide
graphical representations of the issues found, from source to sink. These help
you navigate the code more easily. Some tools point out the exact location of vulnerabilities and highlight the risky code. Tools can also provide in-depth
guidance on how to fix issues and the best place in the code to fix them,
without requiring deep security domain expertise.

Developers can also create the customized reports they need with SAST tools;
these reports can be exported offline and tracked using dashboards. Tracking all
the security issues reported by the tool in an organized way can help
developers remediate these issues promptly and release applications with
minimal problems. This process contributes to the creation of a secure SDLC.
It’s important to note that SAST tools must be run on the application on a
regular basis, such as during daily/monthly builds, every time code is checked
in, or during a code release.

Why is SAST an important security activity?


Developers dramatically outnumber security staff. It can be challenging for an
organization to find the resources to perform code reviews on even a fraction
of its applications. A key strength of SAST tools is the ability to analyze 100% of
the codebase. Additionally, they are much faster than manual secure code
reviews performed by humans. These tools can scan millions of lines of code in
a matter of minutes. SAST tools automatically identify critical vulnerabilities—
such as buffer overflows, SQL injection, cross-site scripting, and others—with
high confidence. Thus, integrating static analysis into the SDLC can yield
dramatic results in the overall quality of the code developed.

What are the key steps to run SAST effectively?


There are six simple steps needed to perform SAST efficiently in organizations
that have a very large number of applications built with different languages,
frameworks, and platforms.
1. Finalize the tool. Select a static analysis tool that can perform code reviews of applications written in the programming languages you use. The tool should also be able to comprehend the underlying framework used by your software.
2. Create the scanning infrastructure, and deploy the tool. This step involves handling the licensing requirements, setting up access control and authorization, and procuring the resources required (e.g., servers and databases) to deploy the tool.
3. Customize the tool. Fine-tune the tool to suit the needs of the organization. For example, you might configure it to reduce false positives or find additional security vulnerabilities by writing new rules or updating existing ones. Integrate the tool into the build environment, create dashboards for tracking scan results, and build custom reports.
4. Prioritize and onboard applications. Once the tool is ready, onboard your applications. If you have a large number of applications, prioritize the high-risk applications to scan first. Eventually, all your applications should be onboarded and scanned regularly, with application scans synced with release cycles, daily or monthly builds, or code check-ins.
5. Analyze scan results. This step involves triaging the results of the scan to remove false positives. Once the set of issues is finalized, they should be tracked and provided to the deployment teams for proper and timely remediation.
6. Provide governance and training. Proper governance ensures that your development teams are employing the scanning tools properly. The software security touchpoints should be present within the SDLC. SAST should be incorporated as part of your application development and deployment process.
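
A minimal sketch of integrating the tool into the build environment, as step 3 describes, so that high-severity findings break the pipeline. The scanner command ("sast-scanner") and the JSON report format are hypothetical placeholders, since each commercial or open-source tool has its own CLI and output; only the gating pattern is the point.

import json
import subprocess
import sys

# Hypothetical scanner command and report location; substitute the actual
# CLI and output format of whichever SAST product you deploy.
SCAN_COMMAND = ["sast-scanner", "--project", ".", "--report", "sast_report.json"]

def run_sast_gate(max_high_findings=0):
    subprocess.run(SCAN_COMMAND, check=True)     # run the scan as a build step
    with open("sast_report.json") as report:
        findings = json.load(report)             # assumed: a list of {"severity": ...}
    high = [f for f in findings if f.get("severity") == "HIGH"]
    print("SAST findings:", len(findings), "total,", len(high), "high severity")
    if len(high) > max_high_findings:
        sys.exit(1)                              # break the build until issues are triaged

if __name__ == "__main__":
    run_sast_gate()
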
What tools can be used for SAST?
Synopsys offers the most comprehensive solution for integrating security and
quality into your SDLC and supply chain.

Coverity Static Application Security Testing finds critical defects and security
weaknesses in code as it’s written. It provides full path coverage, ensuring that
every line of code and every potential execution path is tested. Through a deep
understanding of the source code and the underlying frameworks, it provides
highly accurate analysis, so developers don’t waste time on a large volume of
false positives.

Coverity scales to accommodate thousands of developers and can analyze projects with more than 100 million lines of code with ease. It can be rapidly
integrated with critical tools and systems that support the development
process, such as source control management, build and continuous integration,
bug tracking, and application life cycle management (ALM) solutions, as well as
IDEs.
SAST in IDE (Code Sight) is a real-time, developer-centric SAST tool. It scans for
and identifies vulnerabilities as developers code. Code Sight integrates into the
integrated development environment (IDE), where it identifies security
vulnerabilities and provides guidance to remediate them.

6
How is SAST different from DAST?
Organizations are paying more attention to application security, owing to the
rising number of breaches. They want to identify vulnerabilities in their
applications and mitigate risks at an early stage. There are two different types
of application security testing—SAST and dynamic application security testing
(DAST). Both testing methodologies identify security flaws in applications, but
they do so differently.

6
Static Source Code Analysis (SAST) and manual code review:
Analysis of the application source code for finding vulnerabilities without
executing the application:
• Prerequisites: Application source code
• Benefits: Detection of insecure programming, outdated libraries, and
misconfigurations

Static binary code analysis and manual binary review: Analysis of the compiled
application (binary) for finding vulnerabilities without executing the
application. In general, this is like the source
code analysis but is not as precise and fix recommendations typically cannot
be provided.

7
8
In black-box testing, the tested system is used as a black box, i.e., no internal
details of the system implementation are used. In contrast, white-box testing
takes the internal system details (e.g., the source code) into account.

9
Static Testing And Dynamic Testing – Understand the Difference
Software Testing Tutorial | By Meenakshi Agarwal

Static testing and dynamic testing are essential techniques meant for developers and testers to use during software development. They are distinct validation methods, and the organization must decide, after due analysis, which one to practice for software verification. Since your objective is to get the maximum benefit from these types of testing, pick the tool best suited to your needs. In this tutorial, you'll learn the pros and cons of each of these types of testing.

Static Testing And Dynamic Testing


Static Testing:

Static testing does not require executing the code. You can perform it manually or by using a set of tools. This testing type covers analysis of the source code, review of specification documents, and review of the design description documents. The testers provide review comments on each of the documents reviewed. If the application is not yet operational and the user interface has not been implemented, you can still perform a security analysis to observe it in a runtime-less configuration.

While doing static testing, a tester or a developer can look for bugs and buffer overflows and identify vulnerable code in the system. To begin such testing, you don't need to wait for the entire application to be developed; it can start in an early phase of the development lifecycle, and testers can begin reviewing the code, scripts, requirements, test cases, or any related document that is available at that point in time.

Static Testing Techniques:


1. Inspection:
The principal motive of this type of testing is to identify defects in the early stages of the software development cycle. The team can start with the inspection of any of the artifacts mentioned earlier, such as the code, the test cases, or the product documents. It requires a moderator to organize the review sessions. Since an inspection is a formal type of review, the moderator needs to prepare a checklist of what to go through and what not to.

2. Walkthrough:
Another technique is the walkthrough. It requires the owner of the document to explain the work done. The attendees can raise their queries, and another person, allocated as the scribe, records the points in the notes.

3. Technical Reviews:
In this static testing method, the team carries out a technical scrutiny of the code to check whether it meets the coding guidelines and standards. In general, testing artifacts such as the test plan, validation strategy, and automation scripts are also reviewed in this session.

4. Informal Reviews:
A static testing technique in which the documents are scrutinized informally and the participants provide informal comments during the meeting.

10
Dynamic Testing:
Dynamic testing is performed after the application has reached an operational mode. It is executed in a real runtime environment set up by QA. While the code behind the application is running, the tester supplies the required input and waits for the result, then matches the output against the expected outcome. That is how testers inspect the functional behavior of the application and track system RAM and CPU usage, response time, and the performance of the overall software. Dynamic testing is also known as validation testing. It can be either functional or non-functional testing.

Types Of Dynamic Testing Techniques:

1. Unit Testing:
This testing happens mostly at the developer's end. The essential artifacts tested are the source code of the application's various modules.

2. Integration Testing:
The purpose of this technique is to verify the interfacing between two or more modules once they are tied together.

3. System Testing:
This testing is done on the entire software with all modules working together.

4. Acceptance Testing:
This testing runs the validation keeping the user's point of view in mind.

In the software development lifecycle, both static testing and dynamic testing are essential to certify the application's functionality. Each has its strengths and weaknesses, which you should be aware of.

Static Testing Vs. Dynamic Testing:


1. Static testing belongs to white-box testing. It is performed at an early stage of development and incurs a lower cost compared to dynamic testing, which is performed at the final stage of the development process.
2. Static testing achieves better line coverage than dynamic testing in a short duration, whereas dynamic testing has lower line coverage because it examines only a smaller part of the code.
3. Static testing occurs before the application is ready for deployment, whereas dynamic testing happens after the code is deployed.
4. Static testing is done in the verification stage, whereas dynamic testing is completed in the validation stage.
5. No execution happens in static testing, whereas dynamic testing requires code execution.
6. Static testing produces an analysis of the code along with the documentation, whereas dynamic testing reports the bottlenecks in the application.
7. In static testing, the team prepares a checklist describing the testing process, whereas in dynamic testing the test cases are executed.
8. Static testing methods are walkthroughs and code reviews, whereas dynamic testing mainly comprises functional and non-functional validation.

Summary – Static Testing And Dynamic Testing


In any software development methodology, both the verification and validation processes are carried out to certify that the final software has all the requirements implemented correctly.
Static testing scrutinizes the application code without any execution. It lies under the umbrella of verification. Testers have multiple static testing techniques available, such as inspections, walkthroughs, and technical and informal reviews.

On the contrary, dynamic testing validates the working product. It lies under the umbrella of validation. The standard dynamic testing techniques are unit testing, integration testing, system (or stabilization) testing, and user acceptance testing. Here, the product gets validated in both functional and non-functional aspects.

10
11
12
Automated vulnerability scanners: Test an application for the use of system components or configurations that are known to be insecure. For this, predefined attack patterns are executed and system fingerprints are analyzed:

o Benefits: Detection of well-known vulnerabilities, e.g., detection of outdated frameworks and misconfigurations

13
Source Wikipedia:

Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to
a computer program. The program is then monitored for exceptions such
as crashes, failing built-in code assertions, or potential memory leaks.
Typically, fuzzers are used to test programs that take structured inputs. This
structure is specified, e.g., in a file format or protocol and distinguishes valid
from invalid input. An effective fuzzer generates semi-valid inputs that are
"valid enough" in that they are not directly rejected by the parser, but do
create unexpected behaviors deeper in the program and are "invalid enough"
to expose corner cases that have not been properly dealt with.

For the purpose of security, input that crosses a trust boundary is often the
most interesting.[1] For example, it is more important to fuzz code that
handles the upload of a file by any user than it is to fuzz the code that parses
a configuration file that is accessible only to a privileged user.
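
A toy illustration of the idea, assuming a stand-in parse_record function as the code under test: the fuzzer feeds random byte strings to the parser, treats clean rejections as expected, and records any other exception as a finding to triage. Real fuzzers generate semi-valid inputs and instrument the target, so this is only a sketch of the monitoring loop.

import random

def parse_record(data):
    # Stand-in for the code under test: expects ASCII text of the form "key=value".
    text = data.decode("ascii")        # may raise UnicodeDecodeError
    key, value = text.split("=", 1)    # may raise ValueError
    return {key: value}

def fuzz(iterations=10000, max_length=64, seed=0):
    random.seed(seed)
    findings = []
    for _ in range(iterations):
        data = bytes(random.randrange(256)
                     for _ in range(random.randrange(max_length)))
        try:
            parse_record(data)
        except (UnicodeDecodeError, ValueError):
            pass                        # rejected cleanly: expected for invalid input
        except Exception as exc:        # anything else is unexpected behavior to triage
            findings.append((data, repr(exc)))
    return findings

print("unexpected failures:", len(fuzz()))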

14
15
The attack surface of a software or hardware environment is the sum of the
different points (the "attack vectors") where an unauthorized user (the
"attacker") can try to enter data to or extract data from an environment.
Keeping the attack surface as small as possible is a basic security measure.

16
Different security testing methods behave differently when applied to
different application types

17
Security testing techniques and tools differ in usability (e.g., fix
recommendations) and quality (e.g., false positives rate)

18
Security testing tools usually only support a limited number of technologies
(e.g., programming languages), and if a tool supports multiple technologies, it
does not necessarily support all of them equally well

19
Different tools and methods require different computing power or different
manual efforts

20
Misuse case is a business process modeling tool used in the software
development industry. The term Misuse Case or mis-use case is derived from
and is the inverse of use case.[1] The term was first used in the 1990s by
Guttorm Sindre of the Norwegian University of Science and Technology,
and Andreas L. Opdahl of the University of Bergen, Norway. It describes the
process of executing a malicious act against a system, while use case can be
used to describe any action taken by the system.[2]

Some misuse cases occur in highly specific situations, whereas others continually threaten systems. For instance, a car is most likely to be stolen
when parked and unattended; whereas a web server might suffer a denial-of-
service attack at any time. You can develop misuse and use cases recursively,
going from system to subsystem levels or lower as necessary. Lower-level
cases can highlight aspects not considered at higher levels, possibly forcing
another analysis. The approach offers rich possibilities for exploring,
understanding, and validating the requirements in any direction. Drawing the
agents and misuse cases explicitly helps to focus the attention of the security practitioner on the elements of the scenario.

21
In contrast to a positive test (that determines that a system works as
expected, and with any error fails the test); a negative test is designed to
provide evidence of the application behavior if there is unexpected or invalid
data. Any provocation of application failure is designed to surface in the test
rather than once the application is approved for production. An optimal
response for an application to a negative test is to gracefully reject the
unexpected or invalid data without crashing. While exceptions and error
conditions are expected in negative tests
they are not expected in positive tests. It is optimal to combine a range of
positive and negative test to run on an application for thorough examination
of behavior.
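
A minimal sketch of pairing a positive and a negative test with Python's built-in unittest module, against a toy parse_age function invented for illustration: the positive test confirms expected behavior for valid data, while the negative test confirms that invalid data is rejected gracefully rather than crashing.

import unittest

def parse_age(value):
    # Toy function under test: accepts whole numbers from 0 to 130 only.
    if not value.isdigit() or not 0 <= int(value) <= 130:
        raise ValueError("invalid age: %r" % value)
    return int(value)

class AgeTests(unittest.TestCase):
    def test_positive_valid_input(self):
        # Positive test: the system works as expected for valid data.
        self.assertEqual(parse_age("42"), 42)

    def test_negative_invalid_input(self):
        # Negative test: unexpected or invalid data is rejected gracefully
        # (a controlled exception), not by crashing the application.
        for bad in ("", "-5", "abc", "999"):
            with self.assertRaises(ValueError):
                parse_age(bad)

if __name__ == "__main__":
    unittest.main()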

22
Test-Coverage Analyzers
Test-coverage analyzers measure how much of the total program code has
been analyzed. The results can be presented in terms of statement coverage
(percentage of lines of code tested) or branch coverage (percentage of
available paths tested).
For large applications, acceptable levels of coverage can be determined in
advance and then compared to the results produced by test-coverage
analyzers to accelerate the testing-and-release process. These tools can also
detect if particular lines of code or branches of logic are not actually able to
be reached during program execution, which is inefficient and a potential
security concern. Some SAST tools incorporate this functionality into their
products, but standalone products also exist.
Since the functionality of analyzing coverage is being incorporated into some
of the other AST tool types, standalone coverage analyzers are mainly for
niche use.
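
The two coverage measures can be illustrated with a small calculation; the executed-line and branch counts below are invented, and in practice they would come from the coverage analyzer's instrumentation of a real test run.

# The executed-line and branch counts below are invented for illustration.
total_statements = set(range(1, 21))       # 20 executable lines in a module
executed_statements = set(range(1, 16))    # lines hit at least once by the tests

total_branches = 8                         # if/else outcomes in the module
executed_branches = 5                      # outcomes taken at least once

statement_coverage = len(executed_statements) / len(total_statements)
branch_coverage = executed_branches / total_branches

print("statement coverage: %.0f%%" % (statement_coverage * 100))   # 75%
print("branch coverage: %.0f%%" % (branch_coverage * 100))         # about 63%

# Lines never executed by any test; if they cannot be reached at all, that is
# inefficient and a potential security concern worth reviewing.
print("never executed:", sorted(total_statements - executed_statements))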

1
Interface testing involves the testing of the different components of an
application, e.g., software and hardware, in combination. This kind of
combination testing is done to ensure they are working correctly and
conforming to the requirements based on which they were designed and
developed. Interface testing is different from integration testing in that
interface testing is done to check whether the different components of the
application or system being developed are in sync with each other. In
technical terms, interface testing helps determine that distinct functions, such
as data transfer between the different elements in the system, are happening
according to the way they were designed to happen.
Interface testing is one of the most important software tests in assuring the
quality of software products. Interface testing is conducted to evaluate
whether systems or components pass data and control correctly to one
another. Interface testing is usually performed by both testing and
development teams. Interface testing helps to determine which application
areas are accessed as well as their user-friendliness.

2
Interface testing can be used to do the following:

• Check and verify if all the interactions between the application and a server
are executed properly
• Check and verify if errors are being handled properly
• Check what happens if a user interrupts any transaction
• Check what happens if a connection to a web server is reset
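
A minimal sketch of the first two checks above, using two toy components invented for illustration: one test verifies that data handed across the interface matches what the receiving component expects, and the other verifies that a contract violation is handled as an error rather than accepted silently.

def ordering_component():
    # Toy upstream component producing the data handed across the interface.
    return {"order_id": "A-1001", "amount": 25.0}

def billing_component(order):
    # Toy downstream component: expects "order_id" (str) and "amount" (number).
    if not isinstance(order.get("order_id"), str) or \
            not isinstance(order.get("amount"), (int, float)):
        raise ValueError("malformed order passed across the interface")
    return {"order_id": order["order_id"], "status": "invoiced"}

def test_interface_data_transfer():
    # The two components must stay in sync: what one produces, the other accepts.
    result = billing_component(ordering_component())
    assert result == {"order_id": "A-1001", "status": "invoiced"}

def test_interface_error_handling():
    # A contract violation must surface as a handled error, not be accepted silently.
    try:
        billing_component({"order_id": 42})      # wrong type and missing amount
    except ValueError:
        pass
    else:
        raise AssertionError("interface accepted malformed data")

if __name__ == "__main__":
    test_interface_data_transfer()
    test_interface_error_handling()
    print("interface tests passed")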

2
10 Types of Application Security Testing Tools: When and How to Use Them
JULY 9, 2018 • SEI BLOG

By Thomas Scanlon

Bugs and weaknesses in software are common: 84 percent of software breaches exploit vulnerabilities at the application layer. The prevalence of
software-related problems is a key motivation for using application security
testing (AST) tools. With a growing number of application security testing
tools available, it can be confusing for information technology (IT) leaders,
developers, and engineers to know which tools address which issues. This
blog post, the first in a series on application security testing tools, will help to
navigate the sea of offerings by categorizing the different types of AST tools
available and providing guidance on how and when to use each class of tool.

See the second post in this series, Decision-Making Factors for Selecting
Application Security Testing Tools.

3
Application security is not a simple binary choice, whereby you either have
security or you don't. Application security is more of a sliding scale where
providing additional security layers helps reduce the risk of an incident,
hopefully to an acceptable level of risk for the organization. Thus, application-
security testing reduces risk in applications, but cannot completely eliminate it.
Steps can be taken, however, to remove those risks that are easiest to remove
and to harden the software in use.

The major motivation for using AST tools is that manual code reviews and
traditional test plans are time consuming, and new vulnerabilities are
continually being introduced or discovered. In many domains, there are
regulatory and compliance directives that mandate the use of AST tools.
Moreover--and perhaps most importantly--individuals and groups intent on
compromising systems use tools too, and those charged with protecting those
systems must keep pace with their adversaries.

There are many benefits to using AST tools, which increase the speed,
efficiency, and coverage paths for testing applications. The tests they conduct
are repeatable and scale well--once a test case is developed in a tool, it can be
executed against many lines of code with little incremental cost. AST tools are
effective at finding known vulnerabilities, issues, and weaknesses, and they
enable users to triage and classify their findings. They can also be used in the
remediation workflow, particularly in verification, and they can be used to
correlate and identify trends and patterns.

Guide to Application Security Testing Tools


This graphic depicts classes or categories of application security testing tools.
The boundaries are blurred at times, as particular products can perform
elements of multiple categories, but these are roughly the classes of tools
within this domain. There is a rough hierarchy in that the tools at the bottom of
the pyramid are foundational and as proficiency is gained with them,
organizations may look to use some of the more progressive methods higher in
the pyramid.

Static Application Security Testing (SAST)


SAST tools can be thought of as white-hat or white-box testing, where the
tester knows information about the system or software being tested, including an architecture diagram, access to source code, etc. SAST tools examine source
code (at rest) to detect and report weaknesses that can lead to security
vulnerabilities.
Source-code analyzers can run on non-compiled code to check for defects such
as numerical errors, input validation, race conditions, path traversals, pointers
and references, and more. Binary and byte-code analyzers do the same on built
and compiled code. Some tools run on source code only, some on compiled
code only, and some on both.

Dynamic Application Security Testing (DAST)


In contrast to SAST tools, DAST tools can be thought of as black-hat or black-box
testing, where the tester has no prior knowledge of the system. They detect
conditions that indicate a security vulnerability in an application in its running
state. DAST tools run on operating code to detect issues with interfaces,
requests, responses, scripting (i.e., JavaScript), data injection, sessions,
authentication, and more.
DAST tools employ fuzzing: throwing known invalid and unexpected test cases
at an application, often in large volume.

Origin Analysis/Software Composition Analysis (SCA)


Software-governance processes that depend on manual inspection are prone to
failure. SCA tools examine software to determine the origins of all components
and libraries within the software. These tools are highly effective at identifying
and finding vulnerabilities in common and popular components, particularly
open-source components. They do not, however, detect vulnerabilities for in-
house custom developed components.

SCA tools are most effective in finding common and popular libraries and
components, particularly open-source pieces. They work by comparing known
modules found in code to a list of known vulnerabilities. The SCA tools find
components that have known and documented vulnerabilities and will often
advise if components are out of date or have patches available.
To make this comparison, almost all SCA tools use the NIST National
Vulnerability Database Common Vulnerabilities and Exposures (CVEs) as a
source for known vulnerabilities. Many commercial SCA products also use
the VulnDB commercial vulnerability database as a source, as well as some
other public and proprietary sources. SCA tools can run on source code, byte
code, binary code, or some combination.
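
A minimal sketch of the comparison an SCA tool automates, assuming pinned dependencies in a requirements.txt file; the vulnerability list here is a hard-coded placeholder standing in for the NVD/CVE and commercial feeds that real SCA tools consume.

# The vulnerability list is a hard-coded placeholder; real SCA tools pull this
# information from the NVD CVE feeds and commercial databases such as VulnDB.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "placeholder advisory id",
    ("oldparser", "0.9.1"): "placeholder advisory id",
}

def check_requirements(path="requirements.txt"):
    # Compare pinned dependencies (name==version) against the known-vulnerable list.
    findings = []
    with open(path) as requirements:
        for line in requirements:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            advisory = KNOWN_VULNERABLE.get((name.lower(), version))
            if advisory:
                findings.append("%s==%s: %s" % (name, version, advisory))
    return findings

if __name__ == "__main__":
    for finding in check_requirements():
        print("VULNERABLE COMPONENT:", finding)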

3
Database Security Scanning
The SQL Slammer worm of 2003 exploited a known vulnerability in a database-
management system that had a patch released more than one year before the
attack. Although databases are not always considered part of an application,
application developers often rely heavily on the database, and applications can
often heavily affect databases. Database-security-scanning tools check for
updated patches and versions, weak passwords, configuration errors, access
control list (ACL) issues, and more. Some tools can mine logs looking for
irregular patterns or actions, such as excessive administrative actions.

Database scanners generally run on the static data that is at rest while the
database-management system is operating. Some scanners can monitor data
that is in transit.

Interactive Application Security Testing (IAST) and Hybrid Tools


Hybrid approaches have been available for a long time, but more recently have
been categorized and discussed using the term IAST. IAST tools use a
combination of static and dynamic analysis techniques. They can test whether
known vulnerabilities in code are actually exploitable in the running
application.
IAST tools use knowledge of application flow and data flow to create advanced
attack scenarios and use dynamic analysis results recursively: as a dynamic scan
is being performed, the tool will learn things about the application based on
how it responds to test cases. Some tools will use this knowledge to create
additional test cases, which then could yield more knowledge for more test
cases and so on. IAST tools are adept at reducing the number of false positives,
and work well in Agile and DevOps environments where traditional stand-alone
DAST and SAST tools can be too time intensive for the development cycle.

Mobile Application Security Testing (MAST)


The Open Web Application Security Project (OWASP) listed the top 10 mobile
risks in 2016 as:
• improper platform usage
• insecure data storage
• insecure communication
• insecure authentication
• insufficient cryptography
• insecure authorization
• client code quality
• code tampering
• reverse engineering
• extraneous functionality
MAST Tools are a blend of static, dynamic, and forensics analysis. They perform
some of the same functions as traditional static and dynamic analyzers but
enable mobile code to be run through many of those analyzers as well. MAST
tools have specialized features that focus on issues specific to mobile
applications, such as jail-breaking or rooting of the device, spoofed WI-FI
connections, handling and validation of certificates, prevention of data leakage,
and more.

Application Security Testing as a Service (ASTaaS)


As the name suggests, with ASTaaS, you pay someone to perform security
testing on your application. The service will usually be a combination of static
and dynamic analysis, penetration testing, testing of application programming
interfaces (APIs), risk assessments, and more. ASTaaS can be used on traditional
applications, especially mobile and web apps.
Momentum for the use of ASTaaS is coming from use of cloud applications,
where resources for testing are easier to marshal. Worldwide spending on
public cloud computing is projected to increase from $67B in 2015 to $162B in
2020.

Correlation Tools
Dealing with false positives is a big issue in application security testing.
Correlation tools can help reduce some of the noise by providing a central repository for findings from other AST tools.
Different AST tools will have different findings, so correlation tools correlate
and analyze results from different AST tools and help with validation and
prioritization of findings, including remediation workflows. Whereas some
correlation tools include code scanners, they are useful mainly for importing
findings from other tools.

Test-Coverage Analyzers
Test-coverage analyzers measure how much of the total program code has been
analyzed. The results can be presented in terms of statement coverage
(percentage of lines of code tested) or branch coverage (percentage of available paths tested).
For large applications, acceptable levels of coverage can be determined in
advance and then compared to the results produced by test-coverage analyzers
to accelerate the testing-and-release process. These tools can also detect if
particular lines of code or branches of logic are not actually able to be reached
during program execution, which is inefficient and a potential security concern.
Some SAST tools incorporate this functionality into their products, but
standalone products also exist.
Since the functionality of analyzing coverage is being incorporated into some of
the other AST tool types, standalone coverage analyzers are mainly for niche
use.

Application Security Testing Orchestration (ASTO)


ASTO integrates security tooling across a software development lifecycle
(SDLC). While the term ASTO is newly coined by Gartner since this is an
emerging field, there are tools that have been doing ASTO already, mainly
those created by correlation-tool vendors. The idea of ASTO is to have central,
coordinated management and reporting of all the different AST tools running in
an ecosystem. It is still too early to know if the term and product lines will
endure, but as automated testing becomes more ubiquitous, ASTO does fill a
need.

Selecting Testing Tool Types


There are many factors to consider when selecting from among these different
types of AST tools. If you are wondering how to begin, the biggest decision you
will make is to get started by beginning using the tools. According to a 2013
Microsoft security study, 76 percent of U.S. developers use no secure
application-program process and more than 40 percent of software developers
globally said that security wasn't a top priority for them. Our strongest
recommendation is that you exclude yourself from these percentages.
There are factors that will help you to decide which type of AST tools to use and
to determine which products within an AST tool class to use. It is important to
note, however, that no single tool will solve all problems. As stated above,
security is not binary; the goal is to reduce risk and exposure.
Before looking at specific AST products, the first step is to determine which
type of AST tool is appropriate for your application. Until your application
software testing grows in sophistication, most tooling will be done using AST
tools from the base of the pyramid, shown in blue in the figure below. These are the most mature AST tools that address most common weaknesses.
After you gain proficiency and experience, you can consider adding some of the
second-level approaches shown below in blue. For instance, many testing tools
for mobile platforms provide frameworks for you to write custom scripts for
testing. Having some experience with traditional DAST tools will allow you to
write better test scripts. Likewise, if you have experience with all the classes of
tools at the base of the pyramid, you will be better positioned to negotiate the
terms and features of an ASTaaS contract.

The decision to employ tools in the top three boxes in the pyramid is dictated
as much by management and resource concerns as by technical considerations.
If you are able to implement only one AST tool, here are some guidelines for
which type of tool to choose:
• If the application is written in-house or you have access to the source code, a good starting point is to run a static application security testing (SAST) tool and check for coding issues and adherence to coding standards. In fact, SAST is the most common starting point for initial code analysis.
• If the application is not written in-house or you otherwise don't have access to the source code, dynamic application security testing (DAST) is the best choice.
• Whether you have access to the source code or not, if a lot of third-party and open-source components are known to be used in the application, then origin analysis/software composition analysis (SCA) tools are the best choice. Ideally, SCA tools are run alongside SAST and/or DAST tools, but if resources only allow for implementation of one tool, SCA tools are imperative for applications with third-party components because they will check for vulnerabilities that are already widely known.

Wrapping Up and Looking Ahead


In the long run, incorporating AST tools into the development process should
save time and effort on re-work by catching issues earlier. In practice, however,
implementing AST tools requires some initial investment of time and resources.
Our guidance presented above is intended to help you select an appropriate
starting point. After you begin using AST tools, they can produce lots of results,
and someone must manage and act on them.
As you analyze the results with one tool, it may become desirable to introduce
additional tools into your environment. As a reference example, the graphic
below depicts how many classes of tools could be effectively deployed in
a continuous integration and continuous delivery (CI/CD) development process.
It is not intended that all these tools be introduced into the environment at once. This graphic shows where certain classes of tools fit in, to help you make decisions and to provide a roadmap for where you can eventually get to.

These tools also have many knobs and buttons for calibrating the output, but it
takes time to set them at a desirable level. Both false positives and false
negatives can be troublesome if the tools are not set correctly.
In the next post in this series, I will consider these decision factors in greater
detail and present guidance in the form of lists that can easily be scanned and
used as checklists by those responsible for application security testing.

Additional Resources
Read the second post in this series, Decision-Making Factors for Selecting
Application Security Testing Tools.
Learn about the National Institute of Standards and Technology (NIST) Software
Assurance Metrics and Tool Evaluation (SAMATE) Project.
Learn about the Open Web Application Security Project (OWASP).
Learn about the SANS Institute.
Access and download the software, tools, and methods that the SEI creates,
tests, refines, and disseminates.
Review the Department of Homeland Security (DHS) Build Security In website.

Account management supports organizational and mission or business
functions by:
• Assigning account managers for information system accounts.
• Establishing conditions for group or role membership.
• Specifying authorized users of information systems.
• Requiring approvals for authorizations and for creating, enabling, modifying, disabling, and removing accounts.
• Monitoring the use of information system accounts.
• Notifying account managers when account access is no longer needed.
• Reviewing accounts for compliance with account management requirements.
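
Building on the account management functions listed above, the following Python sketch illustrates a simple account review that flags unapproved, no-longer-needed, or overdue accounts; the record fields and the 90-day threshold are assumptions for illustration only.

```python
# Illustrative sketch of an account review, loosely based on the functions listed above.
# The record fields and the 90-day review threshold are assumptions, not a standard.
from datetime import date, timedelta

accounts = [
    {"user": "j.smith", "approved": True,  "still_needed": True,  "last_review": date(2018, 9, 1)},
    {"user": "temp01",  "approved": True,  "still_needed": False, "last_review": date(2018, 6, 1)},
    {"user": "svc_app", "approved": False, "still_needed": True,  "last_review": date(2018, 10, 1)},
]

REVIEW_INTERVAL = timedelta(days=90)

def review_accounts(accounts, today=date(2018, 12, 1)):
    """Yield (user, reason) pairs for accounts needing account-manager action."""
    for acct in accounts:
        if not acct["approved"]:
            yield acct["user"], "access was never formally approved"
        if not acct["still_needed"]:
            yield acct["user"], "access no longer needed: notify account manager / disable"
        if today - acct["last_review"] > REVIEW_INTERVAL:
            yield acct["user"], "overdue for compliance review"

for user, reason in review_accounts(accounts):
    print(user, "->", reason)
```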

9.3 Management review
Top management shall review the organization’s information security
management system at planned intervals to ensure its continuing suitability,
adequacy and effectiveness.
The management review shall include consideration of:
a) the status of actions from previous management reviews;
b) changes in external and internal issues that are relevant to the information
security management system;
c) feedback on the information security performance, including trends in:
1) nonconformities and corrective actions;
2) monitoring and measurement results;
3) audit results; and
4) fulfilment of information security objectives;
d) feedback from interested parties;
e) results of risk assessment and status of risk treatment plan; and
f) opportunities for continual improvement.

The outputs of the management review shall include decisions related to
continual improvement opportunities and any needs for changes to the
information security management system.
The organization shall retain documented information as evidence of the
results of management reviews.

Download this document and read it end to end to understand KPIs and KRIs.

https://www.coso.org/Documents/COSO-KRI-Paper-Full-FINAL-for-Web-Posting-Dec110-000.pdf

It is important to distinguish key performance indicators (KPIs) from key risk indicators
(KRIs). Both management and boards regularly review summary data that include selected
KPIs designed to provide a high-level overview of the performance of the organization and
its major operating units. These reports often are focused almost exclusively on the
historical performance of the organization and its key units and operations. For example,
reports often highlight monthly, quarterly, and year-to-date sales trends, customer
shipments, delinquencies, and other performance data points relevant to the organization.
It is important to recognize that these measures may not provide an adequate “early
warning indicator” of a developing risk because they mostly focus on results that have
already occurred.

While KPIs are important to the successful management of an organization by identifying
underperforming aspects of the enterprise as well as those aspects of the business that
merit increased resources and energy, senior management and boards also benefit from a
set of KRIs that provide timely leading-indicator information about emerging risks.

Measures of events or trigger points that might signal issues developing internally within the
operations of the organization or potential risks emerging from external events, such as
macroeconomic shifts that affect the demand for the organization’s products or services,
may provide rich information for management and boards to consider as they execute the
strategies of the organization.

Key risk indicators are metrics used by organizations to provide an early signal of increasing
risk exposures in various areas of the enterprise. In some instances, they may represent key
ratios that management throughout the organization track as indicators of evolving risks, and
potential opportunities, which signal the need for actions that need to be taken. Others may
be more elaborate and involve the aggregation of several individual risk indicators into a
multi-dimensional score about emerging events that may lead to new risks or opportunities.
An example related to the oversight of accounts receivable collection helps illustrate the
difference in KPIs and KRIs. A key performance indicator for customer credit is likely to
include data about customer delinquencies and write-offs. This key performance indicator,
while important, provides insights about a risk event that has already occurred (e.g., a
customer failed to pay in accordance with the sales agreement or contract). A KRI could be
developed to help anticipate potential future customer collection issues so that the credit
function could be more proactive in addressing customer payment trends before risk events
occur. A relevant KRI for this example might be analysis of reported financial results of the
company’s 25 largest customers or general collection challenges throughout the industry to
see what trends might be emerging among customers that could potentially signal challenges
related to collection efforts in future periods.
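
To make the KPI/KRI distinction concrete, here is a small Python sketch using invented accounts-receivable figures: the write-off rate is a lagging KPI, while a rising days-sales-outstanding trend among the largest customers serves as a leading KRI.

```python
# Illustrative only: hypothetical accounts-receivable figures showing a lagging KPI
# (write-offs that already happened) versus a leading KRI (a deteriorating payment trend).

# Lagging KPI: write-offs as a percentage of credit sales for the quarter.
credit_sales = 2_000_000.0
write_offs = 38_000.0
kpi_write_off_rate = write_offs / credit_sales
print(f"KPI - write-off rate this quarter: {kpi_write_off_rate:.1%}")

# Leading KRI: average days-sales-outstanding (DSO) of the 25 largest customers,
# tracked month over month; a rising trend signals collection problems ahead.
dso_top_customers = {"Sep": 41, "Oct": 46, "Nov": 53}   # hypothetical monthly averages
months = list(dso_top_customers)
trend = dso_top_customers[months[-1]] - dso_top_customers[months[0]]
KRI_THRESHOLD = 10  # days of deterioration that should trigger proactive credit action

if trend > KRI_THRESHOLD:
    print(f"KRI - DSO up {trend} days since {months[0]}: act before write-offs occur")
```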

Developing Effective Key Risk Indicators
A goal of developing an effective set of KRIs is to
identify relevant metrics that provide useful insights about potential risks that may have an
impact on the achievement of the organization’s objectives. Therefore, the selection and
design of effective KRIs starts with a firm grasp of organizational objectives and risk-related
events that might affect the achievement of those objectives. Linkage of top risks to core
strategies helps pinpoint the most relevant information that might serve as an effective
leading indicator of an emerging risk. In the simple illustration below, management has an
objective to achieve greater profitability by increasing revenues and decreasing costs. They
have identified four strategic initiatives that are critical to accomplishing those objectives.
Several potential risks have been identified that may have an impact on one or more of four
key strategic initiatives. Mapping key risks to core strategic initiatives puts management in a
position to begin identifying the most critical metrics that can serve as leading key risk
indicators to help them oversee the execution of core strategic initiatives. As shown below,
KRIs have been identified for each critical risk. Mapping KRIs to critical risks and core
strategies reduces the likelihood that management becomes distracted by other information
that may be less relevant to the achievement of enterprise objectives.

12.3 Backup
Objective: To protect against loss of data.
12.3.1 Information backup
Control
Backup copies of information, software and system images should be taken
and tested regularly in accordance with an agreed backup policy.
Implementation guidance
A backup policy should be established to define the organization’s
requirements for backup of information, software and systems. The backup
policy should define the retention and protection requirements.

Adequate backup facilities should be provided to ensure that all essential
information and software can be recovered following a disaster or media
failure.

When designing a backup plan, the following items should be taken into
consideration:

a) accurate and complete records of the backup copies and documented
restoration procedures should be produced;
b) the extent (e.g. full or differential backup) and frequency of backups should
reflect the business requirements of the organization, the security
requirements of the information involved and the
criticality of the information to the continued operation of the organization;
c) the backups should be stored in a remote location, at a sufficient distance to
escape any damage from a disaster at the main site;
d) backup information should be given an appropriate level of physical and
environmental protection (see Clause 11) consistent with the standards applied
at the main site;
e) backup media should be regularly tested to ensure that they can be relied
upon for emergency use when necessary; this should be combined with a test
of the restoration procedures and checked against the restoration time
required. Testing the ability to restore backed-up data should be performed
onto dedicated test media, not by overwriting the original media in case the
backup or restoration process fails and causes irreparable data damage or loss;
f) in situations where confidentiality is of importance, backups should be protected by means of encryption.

Operational procedures should monitor the execution of backups and address failures of scheduled backups to ensure completeness of backups according to the backup policy. Backup arrangements for individual systems and services should be regularly tested to ensure that they meet the requirements of business continuity plans. In the case of critical systems and services, backup arrangements should cover all systems information, applications and data necessary to recover the complete system in the event of a disaster. The retention period for essential business information should be determined, taking into account any requirement for archive copies to be permanently retained.
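
One way to operationalize this guidance is to capture the backup policy as data and routinely check backup records against it. The following Python sketch is a minimal illustration; the field names and values are assumptions, not requirements of the standard.

```python
# Illustrative representation of a backup policy reflecting the guidance above
# (extent, frequency, retention, off-site storage, encryption, restore testing).

backup_policy = {
    "extent": "full weekly, differential daily",
    "frequency_hours": 24,
    "retention_days": 365,
    "offsite_copy_required": True,
    "encryption_required": True,          # where confidentiality matters
    "restore_test_interval_days": 90,     # test restores onto dedicated test media
}

backup_record = {
    "last_backup_age_hours": 20,
    "stored_offsite": True,
    "encrypted": True,
    "days_since_restore_test": 120,
}

def check_backup(policy, record):
    """Return a list of deviations from the backup policy."""
    issues = []
    if record["last_backup_age_hours"] > policy["frequency_hours"]:
        issues.append("scheduled backup missed")
    if policy["offsite_copy_required"] and not record["stored_offsite"]:
        issues.append("no copy at a remote location")
    if policy["encryption_required"] and not record["encrypted"]:
        issues.append("backup not encrypted")
    if record["days_since_restore_test"] > policy["restore_test_interval_days"]:
        issues.append("restore test overdue")
    return issues

print(check_backup(backup_policy, backup_record))   # ['restore test overdue']
```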

Training is defined in NIST Special Publication 800-16 as follows: “The
‘Training’ level of the learning continuum strives to produce relevant and
needed security skills and competencies by practitioners of functional
specialties other than IT security (e.g., management, systems design and
development, acquisition, auditing).” The most significant difference between
training and awareness is that training seeks to teach skills, which allow a
person to perform a specific function, while awareness seeks to focus an
individual’s attention on an issue or set of issues. The skills acquired during
training are built upon the awareness foundation, in particular, upon the
security basics and literacy material. A training curriculum does not
necessarily lead to a formal degree from an institution of higher learning;
however, a training course may contain much of the same material found in a
course that a college or university includes in a certificate or degree program.

An example of training is an IT security course for system administrators,
which should address in detail the management controls, operational
controls, and technical controls that should be implemented. Management
controls include policy, IT security program management, risk management,
and life-cycle security. Operational controls include personnel and user issues,
contingency planning, incident handling, awareness and training, computer
support and operations, and physical and environmental security issues.
Technical controls include identification and authentication, logical access
controls, audit trails, and cryptography. (See NIST Special Publication 800-12,
An Introduction to Computer Security: The NIST Handbook, for in-depth
discussion of these controls
(http://csrc.nist.gov/publications/nistpubs/index.html).)

Please download and review


https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-50.pdf

Executive management: Organizational leaders need to fully understand directives and laws that form the basis for the security program. They also need to comprehend their leadership roles in ensuring full compliance by users within their units.

Security personnel (security program managers and security officers): These individuals act as expert consultants for their organization; therefore, they must be well educated on security policy and accepted best practices.

System owners: Owners must have a broad understanding of security policy and a high degree of understanding regarding security controls and requirements applicable to the systems they manage.

System administrators and IT support personnel: Entrusted with a high degree of authority over support operations critical to a successful security program, these individuals need a higher degree of technical knowledge in effective security practices and implementation.

Operational managers and system users: These individuals need a high degree of security awareness and training on security controls and rules of behavior for systems they use to conduct business operations.

We will cover this in detail in later domains, as it was covered to some extent in the earlier domains. We will also cover the ISO 22301 Lead Implementation Course, inshAllah.

14.3.1 Protection of test data

Control
Test data should be selected carefully, protected and controlled.

Implementation guidance
The use of operational data containing personally identifiable information or
any other confidential information for testing purposes should be avoided. If
personally identifiable information or otherwise confidential information is
used for testing purposes, all sensitive details and content should be
protected by removal or modification (see ISO/IEC 29101[26]).
The following guidelines should be applied to protect operational data, when
used for testing purposes:
a) the access control procedures, which apply to operational application
systems, should also apply to test application systems;
b) there should be separate authorization each time operational information
is copied to a test environment;

c) operational information should be erased from a test environment
immediately after the testing is complete;
d) the copying and use of operational information should be logged to provide
an audit trail.

Other information
System and acceptance testing usually requires substantial volumes of test data
that are as close as possible to operational data.
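
As a simple illustration of removing or modifying sensitive details before operational data reaches a test environment, here is a minimal Python sketch; the record layout and masking rules are assumptions for illustration only.

```python
# Minimal sketch of masking PII before copying operational data to a test environment,
# in the spirit of the guidance above. Field names and masking rules are illustrative.
import hashlib

def mask_record(record):
    """Return a copy of the record with sensitive fields removed or modified."""
    masked = dict(record)
    # Replace direct identifiers with an irreversible token so tests stay repeatable.
    masked["name"] = "TEST-" + hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    masked["email"] = masked["name"].lower() + "@example.test"
    # Remove fields the tests do not need at all.
    masked.pop("national_id", None)
    return masked

operational = {"name": "Aisha Khan", "email": "aisha@example.com",
               "national_id": "12345-6789", "balance": 410.25}

# The balance is preserved for realistic testing; identifiers are tokenized
# and the national_id is dropped entirely.
print(mask_record(operational))
```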

Review this link. Joseph Kirkpatrick is an old friend of mine, and he will be helping us achieve ISO 27001 certification along with SOC 1, 2, and 3 down the line once we open our other business, etc.

https://kirkpatrickprice.com/video/soc-1-vs-soc-2-vs-soc-3/

What’s The Difference Between SOC 1, SOC 2, and SOC 3?


August 16, 2017 / by Joseph Kirkpatrick
When it comes to SOC (Service
Organization Control) reports, there are three different report types: SOC 1,
SOC 2, and SOC 3. When considering which report fits your organization’s
needs, you must first understand what your clients require of you and then
consider the areas of internal control over financial reporting (ICFR), the Trust
Services Principles, and restricted use.
SOC 1 vs. SOC 2 vs. SOC 3

What Is a SOC 1 Report?


SOC 1 engagements are based on the SSAE 18 standard and report on the
effectiveness of internal controls at a service organization that may be
relevant to their client’s internal control over financial reporting (ICFR).

What Is a SOC 2 Report?


A SOC 2 audit evaluates internal controls, policies, and procedures that directly
relate to the security of a system at a service organization. The SOC 2 report
was designed to determine if service organizations are compliant with the
principles of security, availability, processing integrity, confidentiality, and
privacy, also known as the Trust Services Principles. These principles address
internal controls unrelated to ICFR.

What Is a SOC 3 Report?


A SOC 3 report, just like a SOC 2, is based on the Trust Services Principles, but
there’s a major difference between these types of reports: restricted use. A
SOC 3 report can be freely distributed, whereas a SOC 1 or SOC 2 can only be
read by the user organizations that rely on your services. A SOC 3 does not give
a description of the service organization’s system, but can provide interested
parties with the auditor’s report on whether an entity maintained effective
controls over its systems as it relates to the Trust Services Principles.
When trying to determine whether your service organization needs a SOC 1,
SOC 2, or SOC 3, keep these requirements in mind:
Could your service organization affect a client’s financial reporting? A SOC 1
would apply to you.
Does your service organization want to be evaluated on the Trust Service
Principles? SOC 2 and SOC 3 reports would work.
Does restricted use affect your decision? SOC 1 and SOC 2 reports can only be
read by the user organizations that rely on your services. A SOC 3 report can be
freely distributed and used in many different applications.
Each of these reports must be issued by a licensed CPA firm, such as
KirkpatrickPrice. We offer SOC 1, SOC 2, and SOC 3 engagements. To learn more
about KirkpatrickPrice’s SOC services, contact us today using the form below.

Domain 7 deals with aspects of security the practitioner encounters while
servicing the organization’s operational environment. The course material
addresses foundational concepts, asset protection, incident management and
response, business continuity and disaster recovery (BCDR), and personnel
security.

The principle of least privilege (POLP), an important concept in computer
security, is the practice of limiting access rights for users to the bare minimum
permissions they need to perform their work. Under POLP, users are granted
permission to read, write or execute only the files or resources they need to
do their jobs: In other words, the least amount of privilege necessary.

Additionally, the principle of least privilege can be applied to restricting access
rights for applications, systems, processes and devices to only those
permissions required to perform authorized activities.

Depending on the system, some privilege assignments may be based on
attributes that are role-based, such as business units like marketing, human
resources or IT, in addition to other parameters such as location, seniority,
special circumstances or time of day. Depending on the operating system in
use, administrators may need to tailor the different default privilege settings
available for different types of user accounts.

Superuser accounts, mainly used for administration by IT staff members, have
unlimited privileges over a system. The privileges granted to superuser
accounts include full read, write and execute privileges as well as the ability to
make changes across a network, e.g., installing or creating software or files,
modifying settings and files, and deleting data and users.

Under current best practices for security, access through superuser accounts
should be limited to only those required to administer systems; ideally,
superuser credentials should never be used to log in to an account, but rather
used with the "sudo" ("superuser do") command in Unix/Linux systems, which
allows the holder of superuser credentials to issue a single command that is
executed with superuser privileges. This reduces the risk of an active superuser
session being hijacked.
Applying the principle of least privilege to standard user accounts means
granting a limited set of privileges -- just enough privileges for users to get their
jobs done, but no more than that. This type of account should be the template
for ordinary employees -- least privileged users (LPUs) -- who do not need to
manage or administer systems or network resources. These are the type of
accounts that most users should be operating the majority of the time.
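
A minimal Python sketch of the idea, assuming a simple role-to-permission mapping: each role is granted only the permissions needed for its job, and anything not explicitly granted is denied by default.

```python
# Illustrative least-privilege check: roles receive only the permissions they need,
# and anything not explicitly granted is denied. Role and resource names are assumptions.

ROLE_PERMISSIONS = {
    "payroll_clerk": {("payroll_db", "read"), ("payroll_db", "write")},
    "auditor":       {("payroll_db", "read"), ("audit_log", "read")},
    "web_app":       {("orders_db", "read")},   # applies to processes as well as people
}

def is_allowed(role, resource, action):
    """Default deny: permit only what the role was explicitly granted."""
    return (resource, action) in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("auditor", "payroll_db", "read"))    # True
print(is_allowed("auditor", "payroll_db", "write"))   # False: not needed for the job
```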

Separation of duties, also known as Segregation of duties, is the concept of
having more than one person required to complete a task. It is a key concept
of internal controls and is the most difficult and sometimes the costliest one
to achieve. The idea is to spread the tasks and privileges for security tasks
among multiple people. No one person should do everything.
Separation of duties is already well-known in financial accounting systems.
Companies of all sizes understand not to combine roles such as receiving
checks and approving write-offs, depositing cash and reconciling bank
statements, approving time cards and have custody of pay checks, and so on.
The concept of Separation of duties became more relevant to the IT
organization when regulatory mandates such as Sarbanes-Oxley (SOX) and the
Gramm-Leach-Bliley Act (GLBA) were enacted. A very high portion of SOX
internal control issues, for example, come from or rely on IT. This forced IT
organizations to place greater emphasis on Separation of duties across all IT
functions, especially security.
What is Separation of Duties?
Separation of duties, as it relates to security, has two primary objectives. The
first is the prevention of conflict of interest (real or apparent), wrongful
acts, fraud, abuse and errors. The second is the detection of control failures
that include security breaches, information theft and circumvention of security
controls. Correct Separation of duties is designed to ensure that individuals
don’t have conflicting responsibilities or are not responsible for reporting on
themselves or their superior.
There is an easy test for Separation of duties.
Can any one person alter or destroy your financial data without being
detected?
Can any one person steal or exfiltrate sensitive information?
Does any one person have influence over controls design, implementation and
reporting of the effectiveness of the controls?
The answers to all these questions should be “no.” If the answer to any of them
is “yes,” then you need to rethink the organization chart to align with proper
Separation of duties. Also, the individual responsible for designing and
implementing security must not be the same person that is responsible for
testing security, conducting security audits or monitoring and reporting on
security. Also, the person responsible for information security should not
report to the CIO. The reason for this is that the CIO has a vested interest in
having the rest of the C-Level staff believe that there are no cybersecurity
issues. Anything that is discovered by the tester has the potential to be swept
under the rug and not addressed as quickly as it should be. Best industry
practice is that the person testing your cybersecurity should not be a member
of your organization. They should be a disinterested third party.
Here are a few possible ways to accomplish proper Separation of duties:
Have the individual responsible for information security report to chairman of
the audit committee.
Use a third party to monitor security, conduct surprise security audits and
security testing.
Have an individual (CISO) responsible for information security report to the
board of directors.
The importance of Separation of Duties for security
The issue of Separation of duties in security continues to be significant. It is
imperative that there be separation between operations, development and
testing of security and all controls to reduce the risk of unauthorized activity or
access to operational systems or data. Responsibilities must be assigned to
individuals in such a way as to mandate checks and balances within the system
and minimize the opportunity for unauthorized access and fraud.
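
The "easy test" described above can be automated as a check for toxic combinations of duties. The following Python sketch flags users who hold conflicting duties; the duty pairs are examples only, not an exhaustive rule set.

```python
# Illustrative separation-of-duties check: flag users holding conflicting duty pairs.
# The conflicting pairs and user names below are examples only.

CONFLICTING_DUTIES = [
    ("develop_security_controls", "test_security_controls"),
    ("approve_payments", "reconcile_bank_statements"),
    ("administer_systems", "audit_systems"),
]

user_duties = {
    "alice": {"develop_security_controls", "test_security_controls"},
    "bob":   {"approve_payments"},
    "carol": {"administer_systems", "audit_systems"},
}

def sod_violations(user_duties):
    """Return (user, duty_pair) tuples where one person holds both conflicting duties."""
    violations = []
    for user, duties in user_duties.items():
        for a, b in CONFLICTING_DUTIES:
            if a in duties and b in duties:
                violations.append((user, (a, b)))
    return violations

for user, pair in sod_violations(user_duties):
    print(f"{user} holds conflicting duties: {pair[0]} and {pair[1]}")
```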

Remember, control techniques surrounding Separation of duties are subject to
review by external auditors. Auditors have in the past listed this concern as
a material deficiency on the audit report when they determine the risks are
great enough. It is just a matter of time before this is done as it relates to IT
security. For this reason, as well as objectivity, why not discuss separation of
duties as it relates to IT security with your external auditors? It can save you a
lot of aggravation, cost and political infighting by getting what they view as
necessary in your case.
DNV GL has helped companies implement Separation of duties policies and
procedures and has also performed audits to assure that procedures are being
followed. Also, we have a team of cybersecurity professionals that can come
into your organization to test your cybersecurity through vulnerability
assessments and penetration testing. If you would like more information about
the services we can offer you, please contact me.

Privileged accounts are those with permissions beyond that of normal users,
such as managers and administrators. Because those permissions lend the
privileged user more capability to cause potential harm to the organization,
privileged accounts require additional protections. Typical measures used for
attenuating elevated risks from privileged accounts include the following:

More extensive and detailed logging than regular user accounts. The record of
privileged actions is vitally important, as both a deterrent (for privileged
account holders that might be tempted to engage in untoward activity) and an
administrative control (the logs can be audited and reviewed to detect and
respond to malicious activity).

More advanced access control than regular user accounts. Password complexity requirements should be higher for privileged accounts than for regular accounts, and refresh rates should be more frequent (if regular users are required, for instance, to change passwords every 90 days, privileged account holders might have to change them every 30). Privileged account access might also entail multifactor authentication, or other measures more stringent than regular log-on tasks.

Temporary access. Privileged accounts should necessarily be limited in duration; privileged users should only have access to systems/data for which they have a clear need-to-know and only for the duration of the project/task for which that access is necessary.

Deeper trust verification than regular users. Privileged account holders should be subject to more detailed background checks, stricter nondisclosure agreements, and acceptable use policies, and be willing to be subject to financial investigation.

Greater audit of privileged accounts. Privileged account activity should be monitored and audited at a greater rate and extent than regular usage.
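
As a rough illustration, the stricter requirements above can be expressed as per-account-class policy settings. The values below simply echo the 90-day/30-day example and are not prescriptive.

```python
# Illustrative per-account-class policy, echoing the example above
# (stricter settings for privileged accounts). Values are not prescriptive.

ACCOUNT_POLICIES = {
    "regular": {
        "password_max_age_days": 90,
        "min_password_length": 12,
        "mfa_required": False,
        "log_detail": "standard",
        "audit_review": "quarterly",
    },
    "privileged": {
        "password_max_age_days": 30,
        "min_password_length": 16,
        "mfa_required": True,
        "log_detail": "detailed logging of privileged actions",
        "audit_review": "weekly",
        "access_expires": "end of approved task",   # temporary, need-to-know access
    },
}

def policy_for(account_class):
    return ACCOUNT_POLICIES[account_class]

print(policy_for("privileged")["password_max_age_days"])   # 30
```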

The organization can implement the practice of job rotation, where all
employees change roles and tasks on a regular basis. This improves the overall
security of the organization in a number of ways:

An employee engaged in wrongdoing in a specific position may be found out
when the replacement takes over that position after rotation.

The organization will have a staff that has no single point of failure; every
person on a team will know how to perform all the functions of that team (to
greater or lesser extent). This can be crucial for business continuity and
disaster recovery actions.

This often improves morale, which fosters trust among employees; employees
like having an increased skillset and marketability even if they don’t plan to
leave the organization, and different tasks are intriguing and interesting and
stave off boredom.

Read this document from end to end
https://www2.deloitte.com/content/dam/Deloitte/us/Documents/finance/us-advisory-information-lifecycle-management.pdf

The data lifecycle stages can be described as the following:

Create: The moment the data is created or acquired by the organization.

Store: Near-time storage for further utilization; this takes place almost simultaneously with creation of the data.

Use: Any processing of the data by the organization.

Share: Dissemination of the data outside the organization (internal “sharing” would most often be considered “Use”); this can include sale of the data, publication, and so forth.

Archive: The data is moved from the operational environment to long-term storage; it is still available for irregular purposes (disaster recovery, for instance, or possibly to replace operational data that was accidentally deleted) but is no longer used on a regular basis.

Destroy: Data is permanently removed from the organization with no way to recover it.

The organization’s security program should be sufficient to protect the data throughout all phases of the lifecycle, with proper security controls for each phase.
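
One way to reason about proper controls for each phase is to map every lifecycle stage to representative controls, as in the Python sketch below; the control choices are illustrative examples, not a complete list.

```python
# Illustrative mapping of data lifecycle stages to representative controls.
# The controls listed are examples only, not a complete or mandated set.
from enum import Enum

class Stage(Enum):
    CREATE = "create"
    STORE = "store"
    USE = "use"
    SHARE = "share"
    ARCHIVE = "archive"
    DESTROY = "destroy"

CONTROLS = {
    Stage.CREATE:  ["classify and label data", "capture ownership"],
    Stage.STORE:   ["encryption at rest", "access control", "backups"],
    Stage.USE:     ["least privilege", "logging and monitoring"],
    Stage.SHARE:   ["encryption in transit", "DLP", "agreements with recipients"],
    Stage.ARCHIVE: ["retention schedule", "restricted retrieval", "media protection"],
    Stage.DESTROY: ["media sanitization", "certificates of destruction"],
}

for stage in Stage:
    print(stage.value, "->", ", ".join(CONTROLS[stage]))
```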

Service Level Agreement (SLA) Definition - What does Service Level
Agreement (SLA) mean?
A Service Level Agreement (SLA) is the service contract component between a
service provider and customer. An SLA provides specific and measurable
aspects related to service offerings. For example, SLAs are often included in
signed agreements between Internet service providers (ISP) and customers.
SLA is also known as an operating level agreement (OLA) when used in an
organization without an established or formal provider-customer relationship.

Adopted in the late 1980s, SLAs are currently used by most industries and
markets. By nature, SLAs define service output but defer methodology to the
service provider's discretion. Specific metrics vary by industry and SLA
purpose.
SLA features include:

• Specific details and scope of provided services, including priorities, responsibilities and guarantees
• Specific, expected and measurable services at minimum or target levels
• Informal or legally binding
• Descriptive tracking and reporting guidelines
• Detailed problem management procedures
• Detailed fees and expenses
• Customer duties and responsibilities
• Disaster recovery procedures
• Agreement termination clauses
• In outsourcing, a customer transfers partial business responsibilities to an
external service provider. The SLA serves as an efficient contracting tool for
current and continuous provider-customer work phases.
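
Because an SLA is built around specific, measurable service levels, measured values can be checked against the agreed targets. The Python sketch below does this for a hypothetical uptime and response-time target; all figures are invented for illustration.

```python
# Illustrative SLA check: compare measured service levels against agreed targets.
# The targets and measurements below are invented for illustration.

sla_targets = {
    "monthly_uptime_pct": 99.9,
    "max_ticket_response_minutes": 60,
}

measured = {
    "monthly_uptime_pct": 99.95,
    "max_ticket_response_minutes": 75,
}

def sla_breaches(targets, measured):
    breaches = []
    if measured["monthly_uptime_pct"] < targets["monthly_uptime_pct"]:
        breaches.append("uptime below agreed level")
    if measured["max_ticket_response_minutes"] > targets["max_ticket_response_minutes"]:
        breaches.append("ticket response slower than agreed")
    return breaches

print(sla_breaches(sla_targets, measured))   # ['ticket response slower than agreed']
```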

Cybercrime is a growing epidemic that affects businesses of all sizes.
Organisations have a responsibility to protect the data of their employees and
customers. So they are investing in expensive hardware and software
solutions. Yet businesses don’t realize that without effective management of
those solutions, every component they add to their IT inventory becomes a
new point of vulnerability. Cybercriminals can exploit unaccounted-for and
out-of-date hardware and software to hack systems. So companies need to put
effective IT asset management solutions in place.

What IT Asset Management (ITAM) Entails


IT managers have to keep track of their IT inventory. They have to deal with
contracts, licenses, updates, and regulatory compliance issues. The use of the
cloud and mobile devices are adding new layers of complexity. In the early
days, managers could get away with using spreadsheets to keep track of their
IT assets. Today most sophisticated operations use some form of IT inventory
management software. These tools are better suited to deal with various
aspects of IT asset management:

Hardware Asset Management: IT departments have been dealing with servers
and workstations for a long time. But that doesn’t mean that it has gotten any
easier. A good ITAM practice requires that hardware is properly tagged and tracked throughout its lifecycle. The firmware of each hardware asset needs to be updated regularly. Good IT inventory management software has provisions to handle the complexity of dealing with the various aspects of hardware management.

Software Asset Management: Software provides a different set of challenges.


IT departments have to prevent unauthorized software installations. They have
to ensure security updates are regularly applied to installed applications and
access management rules are followed properly. Good ITAM tools can keep
track of software updates, license expirations, and compliance requirements.
Regulatory audits are easier with software asset management.
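
As a toy illustration of the kind of tracking described above, the following Python sketch flags inventory entries with overdue patches or expired licenses; the inventory format and thresholds are assumptions, not features of any particular ITAM product.

```python
# Toy ITAM check: flag assets with overdue patches or expired licenses.
# The inventory format and thresholds are assumptions for illustration.
from datetime import date

inventory = [
    {"asset": "web-srv-01", "last_patched": date(2018, 11, 20), "license_expires": None},
    {"asset": "hr-app",     "last_patched": date(2018, 5, 2),   "license_expires": date(2018, 10, 31)},
]

PATCH_MAX_AGE_DAYS = 30

def asset_issues(inventory, today=date(2018, 12, 1)):
    for item in inventory:
        if (today - item["last_patched"]).days > PATCH_MAX_AGE_DAYS:
            yield item["asset"], "patching overdue"
        expiry = item["license_expires"]
        if expiry is not None and expiry < today:
            yield item["asset"], "license expired"

for asset, issue in asset_issues(inventory):
    print(asset, "->", issue)
```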

Cloud Asset Management: Cloud-based services like SaaS, IaaS and PaaS are
relatively new developments. So IT departments are still trying to figure out
how to address various issues. In a pre-cloud environment, teams had total
control over the IT inventory. But cloud environments use the shared
responsibility model. Most ITAM tools are still not highly evolved for cloud
asset management. So IT teams need to pay special attention in this area.

End-User Mobile Device Management: More companies are adopting bring-your-own-device (BYOD) policies. Even though it’s great for productivity, it’s a nightmare for implementing security. Tracking and monitoring BYOD devices through IT inventory management is a high priority for IT departments.

Why ITAM is Crucial for Effective Cybersecurity


For any modern organisation, it’s not possible to create a robust cybersecurity
program without having an efficient ITAM solution. There are just too many
tools and services to keep track of.
For example, a single employee might have a PC, a mobile phone, and a tablet.
In addition, the employee might have access to various servers and cloud
applications. If cybercriminals can obtain even one password to any of these
endpoints, they can often use that password to hack into other systems to gain
more valuable information.
Also, cybercriminals can launch sophisticated phishing attacks, exploit software vulnerabilities or steal employee devices. IT teams need to fight battles on all fronts by keeping software and hardware up to date and having the capability to shut down stolen devices. Recent attacks in the UK show that cybercriminals are taking advantage of all these vulnerabilities.

British Airways Hack: Financial information of around 380,000 British Airways passengers was compromised during a 15-day breach in August 2018. Initially, British Airways didn’t know how the hackers got access to the data, as there wasn’t any internal breach. Later, security experts discovered that the scripts for its baggage claim information page were changed just before the hack started. The cybercriminals exploited the weaknesses of those scripts to intercept customer information. This illustrates an important reason for having an ITAM solution. There is no information available about how BA managed its IT inventory in this case, but a good ITAM solution would make finding vulnerabilities like this easier for security experts. Experts would be able to discover problems faster using ITAM historical data. Without proper ITAM, the same task will take significantly longer or may even make the problem untrackable, which increases the chances of future attacks.

NHS WannaCry Attack: The WannaCry ransomware attack on the UK’s National Health Service (NHS) caused the cancellation of 19,500 medical appointments, locked 600 computers at GP surgeries, and put 5 emergency centers out of service. The damage could have been worse if a security researcher hadn’t accidentally discovered the kill switch to the ransomware. But this attack could have been prevented in the first place through IT asset management. If the NHS had updated its Windows operating systems properly, WannaCry could not have caused this havoc.

Establishing a Cyber Resilient Business Using IT Asset Management


IT asset management will not solve cybersecurity problems automatically.
Businesses need to design and implement their IT inventory management
software with cybersecurity assessment in mind.
However, cybersecurity-aware ITAM solutions will help your business in
multiple ways. Here are some of the benefits:

Visibility and Transparency


ITAM solutions designed with cybersecurity objectives will help you find
security risks faster. If you have a configuration management database (CMDB)
for your IT assets, you can easily pinpoint when a problem happens. With
regulations like GDPR, this becomes more important as you are legally required
to report your security breaches.

Early Security Threat Detection


Hardware asset management and software asset management tools keep
historical records or logs of various information. This information is a great
resource for recognizing irregularities or anomalies. This data can help your
business detect cyber attacks early and take preventive measures.

Data Traceability
Data is the most valuable resource for businesses in the information age. Your
ITAM solution gives you the ability to organize and align the data from your
employees, your customers, and your infrastructure. So you’ll have more
control. It’s an important tool for tracking and securing data.

Cost Optimisation
Cybersecurity is expensive. Most companies stop tracking their hardware or
updating their software due to the associated costs. Initially, an IT inventory
management solution might take resources to set up. But it will save you time
and money in the long-run. It will make tracking and updating hardware and
software assets easier and more efficient.

In Conclusion
No solution can stop all cyber attacks. But an ITAM solution can help your
organisation build the necessary security strategies to improve your chances of
preventing an attack. And a robust ITAM solution can help your business stay
safer.

What Is Configuration Management?

Here’s my definition of configuration management: it’s the discipline of
ensuring that all software and hardware assets which a company owns are
known and tracked at all times—any future changes to these assets are
known and tracked. You can think of configuration management like an always
up-to-date inventory for your technology assets, a single source of truth.

Configuration Management & Planning
With that defined, let’s talk about how it works in practice. Configuration management usually spans a few areas. It
often relates to different ideas, like creating “software pipelines” to build and
test our software artifacts. Or it might relate to writing “infrastructure-as-
code” to capture in code the current state of our infrastructure. And it could
mean incorporating configuration management tools such as Chef, Puppet,
and Ansible to store the current state of our servers.

Where Did Configuration Management Originate?

When I first started learning about configuration management, I found the
concept super confusing. However, it turns out that there are reasons for the
confusion. But to understand why, we need to look at some history.
We (the Software Industry) Stole the Idea of “Configuration Management”
The idea of configuration management comes from other institutions, such as the military. We took those ideas and retrofitted them into a software context.

How We Make Software Has Changed Over Time


Configuration management was traditionally a purely manual task, completed
by a systems administrator. The role was a lot of manual work involving
carefully documenting the state of the system. But the industry has changed
completely. These changes came from the popularity of DevOps, increases in
cloud computing, and new automation tooling. Now that we’ve set the scene,
we can dive into the details of configuration management. So let’s get to it!

What the World Looks Like With Configuration Management


Before we explore different tools for configuration management, we need to
know what end results we’ll receive for our efforts.
What are the outcomes of well-implemented configuration management?
Let’s cover the benefits.

Benefit 1: Disaster Recovery


If the worst does happen, configuration management ensures that our assets
are easily recoverable. The same applies to rollbacks. Configuration
management makes it so that when we’ve put out bad code, we can go back to
the state of our software before the change.

Benefit 2: Uptime and Site Reliability


The term “site reliability” refers to how often your service is up. I’ve worked at
companies where each second of downtime would cost thousands—often tens
or even hundreds of thousands. Eek!

A frequent cause of downtime is bad deployments, which can be caused by
differences between production servers and test servers. With our
configuration managed properly, our test environments can mimic production,
so there’s less chance of a nasty surprise.

Benefit 3: Easier Scaling
Provisioning is the act of adding more resources (usually servers) to our running
application. Configuration management ensures that we know what a good
state of our service is. That way, when we want to increase the number of
servers that we run, it’s simply a case of clicking a button or running a script.
The goal is really to make provisioning a non-event.
These are just some of the benefits of configuration management. But there
are some other ones, too. You’ll experience faster onboarding of new team
members, easier collaboration between teams, and extended software lifecycle
of products/assets, among other benefits.

The World Without Configuration Management


Sometimes it’s easier to grasp a concept by understanding its antithesis. What
does trouble look like for configuration management, and what are we trying to
avoid? Let’s take a look. A developer implementing a feature will commonly
install a few bits of software and deploy code. If things are sloppy, this
developer probably makes the team and manager aware of the intention to
come back later to clean it all up—that it’s simply a demonstration and will be
rewritten soon.

But then the deadline starts pressing, and the task of going back through and
rewriting the installation steps as a script gets pushed lower and lower in
priority. Before we know it, several years have passed, and a new developer
gets put on the project. That developer is now left to pick up the pieces, trying
to understand what happened. It’s quite likely they aren’t even going to touch
the configuration of the server. Who knows what it would do!
The above situation is precisely what configuration management helps you
avoid. We don’t want to be left in the dark as a result of developers setting up
software without proper documentation/traceability. Rather, we want to know
the answers to questions like

• What services are we running?
• What state are those services in?
• How did they get to their current state?
• What was the purpose for the changes?

Configuration management can tell us these answers.


That hopefully paints a clearer picture of the problems that configuration management is trying to solve.

How Configuration Management Fits in With DevOps, Continuous Delivery, and More…

Hopefully by now you’re starting to get the hang of what configuration
management is and what it aims to do. Before we go on to discuss tooling, I’d
like to take a moment to address how configuration management fits in with
other software development concepts like agile, DevOps, continuous
integration, continuous delivery, and Docker so that you can understand how
these concepts fit in with the ideas of configuration management.

Is Configuration Management Compatible With Agile?


Yes. Agile software, by definition, reflects the desire to make changes to our
software faster so that we can respond to market demands. Configuration
management helps us to safely manage our changes and keep velocity high.

How Does Configuration Management Fit With DevOps?


DevOps is the extension of agile practices across both the development and
operations departments. In fact, DevOps seeks to unify the goals of both
departments. At some companies, the development department seeks change
while the operations department seeks stability. But companies that embrace
DevOps want both stability of their deployed assets and frequency of change.
However, achieving this outcome requires cultural change.

Like agile, configuration management gives teams the confidence to move
quickly with their changes. Under agile practices, the company gives
configuration management responsibilities to the development teams,
empowering them to provision, configure, and manage their own
infrastructure. You build it, you run it.

Where Do Pipelines Fit Into Configuration Management?


Software pipelines are the steps (or “value stream,” which we can create with
tools like Plutora) that we usually automate, taking code from commit to
production. Pipelines usually involve steps such as linting code, unit
testing code, integration testing code, and creating artifacts. A software
pipeline therefore is a form of configuration management. When we build
software with tools like Docker, we codify our build instructions into
our Dockerfile. This allows us to better understand the dependencies of our
artifacts.

Is Infrastructure-as-Code Configuration Management?
Infrastructure-as-code (or IaC for short) is the practice of ensuring all
provisioned infrastructure is done so through code. The purpose of IaC is to
have a written record of which services exist, where they are located, and
under what circumstance. Configuration management might choose to leverage
aspects of IaC in order to achieve the full understanding of all the technology
assets a company owns.

Is Continuous Integration/Delivery Configuration Management?


Continuous delivery is the process of ensuring that software is always in a
releasable state. You can achieve this through heavy automation and
testing. Continuous integration is the process of bringing separate software
artifacts together into a single location on a frequent basis, for the purposes of
verifying that the code integrates properly. Continuous integration tools, which
are typically servers that run automation-testing suites, act as a form of
configuration management by providing visibility into the steps required to set
up and configure a given software artifact.

That should clear up some of your lingering questions about how configuration
management fits with some practices or ideas that you might be using or are
familiar with. Any discussion of configuration management would be
incomplete, however, without a discussion about tooling. So, let’s take a peek
at the different tools we have at our disposal for implementing configuration
management.

The Importance of Declarative Style in Configuration Management Tools

Next up, we’re going to discuss configuration management tools. But before we
get to that, I need to quickly discuss a concept to consider when comparing
tools. And the concept is declarative style. You’ll hear about this terminology a
lot if you go out and start looking into different configuration management
tools. So, it makes sense to have a firm grasp of what declarative style is, why
it’s important, and why so many people are talking about it.

So, what do we mean by declarative style?


And why is declarative style so important for configuration management?
What Do We Mean by Declarative Style?

When it comes to software, having a declarative style means telling your
software the end result you want and then letting the software do the work in
figuring out the way to get there. The opposite of the declarative style would
be a procedural style, where instead of giving an end state, you give
instructions on how to get there. The problem with instructions is that they’re
dependent on the starting state.

You can think of it like this: declarative versus procedural is the difference
between giving a friend your home address and giving them step-by-step
instructions to get to your house from where they are. The problem with giving
step-by-step instructions is that it assumes you know where the friend is
starting, and it doesn’t allow for things to go wrong. It’s hard to replay steps
when you’re in a bad state (i.e., lost!).
Why Is Declarative Style Important for Configuration Management?

By now, you’re probably thinking that declarative style sounds interesting. But
why is it important?

Declarative style is important because configuration management is all
about knowing the current state of your applications. So when we use
configuration management tools, it’s desirable to use a declarative style and
specify the end result that we want, not the steps to get there. This means we
always know what end state we’re trying to achieve and how that’s changed
over time. That’s instead of trying to work out when instructions were run and
dealing with the complexities that may arise if certain instructions have failed.
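
To illustrate the distinction in code rather than prose, here is a small, purely hypothetical Python sketch that contrasts a procedural list of steps with a declarative desired state that a tool converges toward.

```python
# Hypothetical contrast between a procedural recipe (ordered steps) and a
# declarative desired state (the end result, with the tool working out the steps).

# Procedural: the outcome depends on the starting state and on every step succeeding.
procedural_steps = [
    "install the nginx package",
    "copy nginx.conf into place",
    "start the nginx service",
]

# Declarative: only the desired end state is described.
desired_state = {"package:nginx": "installed", "service:nginx": "running"}

def converge(current_state, desired_state):
    """Return the actions needed to move the current state to the desired state."""
    actions = []
    for item, wanted in desired_state.items():
        if current_state.get(item) != wanted:
            actions.append(f"ensure {item} is {wanted}")
    return actions

# Re-running against a partially configured host only does what is still missing.
print(converge({"package:nginx": "installed"}, desired_state))
# ['ensure service:nginx is running']
```

The design point is that the declarative form can be re-applied safely from any starting state, which is exactly the property the surrounding text values for configuration management.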

What Are Configuration Management Tools?

There are many different tools for configuration management. In fact, it can get
confusing, as there are tools that support configuration management without
explicitly being configuration management tools.
For instance, Docker neatly packages up steps needed to set up and run an
application (in a Dockerfile). However, people don’t often consider Docker a
configuration management tool.

To make things clearer, let’s divide up the common tools that might fall under
or relate to configuration management:

Configuration Management Tools
These are the tools you see typically associated with configuration
management. Tools like Chef, Ansible, and Puppet provide ways to codify steps
that we require in order to bring an asset in line with a current definition of
how that asset should look. For instance, you might create an Ansible
playbook that ensures that all of our X servers have Y installed within them.

Infrastructure-as-Code Tools
Often also called provisioning tools, IaC tools
include CloudFormation and Terraform. If our configuration management tools
include the setup we need on our assets, our provisioning tools are how we get
those assets. It’s this blurred line that explains why we need to bring these
tools into our discussion of configuration management. And many consider it
an anti-pattern to use configuration management tools for provisioning.

Pipeline Tools
We talked briefly about software delivery pipelines, but implementing them
requires tooling. Popular technologies include Jenkins, CircleCI, and GitLab CI.
By using tools to codify our build process, we make it easy for other developers
to understand how our artifacts are modified and created, which is a form of
configuration management.

Source Control Tools


Source control tools include GitHub, SVN, GitLab, and Bitbucket. While we need
to codify our automation in scripts, if we don’t appropriately track the history
of our changes, then we aren’t really achieving configuration management.

We’re now nearing the end of our introduction to configuration management.


We’ve covered what configuration management is, we know the benefits, and
we’re now up to date on the latest tools. However, all of this information can be
a little overwhelming if you’re asking the simple question of “Where should I
start?”
Let’s break it all down so that you can start your journey into configuration
management.

How Can I Get Started With Configuration Management?

Where to start? Do you begin by researching tools? Implementing some
automation? Auditing your existing servers? Talking to others in your company?
Where you start with anything always depends on where you currently are.
That said, only you are aware of your current situation and the limitations and
resources available. Below are three different places you can begin your
journey to effective configuration management:

Audit your software/hardware—What software do you currently have? What’s the state of it? Is it well documented? Are the setup and run instructions known for the software?

Perform a tools assessment—Do an assessment of what tools exist on the market for configuration management. The ones I listed above are a good start. Identify which tools could help you solve some of your configuration management problems.

Learn about best practices—Successfully implementing configuration management isn’t a one-and-done task. It takes time and work to continually ensure that all new software is appropriately audited and tracked. So you might want to look into some different key concepts, such as IaC and build and release pipelines.

It’s Time For Everything-as-Code!


And that’s all! Hopefully that helps to clear things up for you about
configuration management. It’s all about keeping track of the current state of
your software and infrastructure.

There are many ways to implement configuration management, and there are
lots of different tools and processes. So when it comes to strategy, be sure to
take your time assessing options and understanding how you want your
configuration management processes to work.

It will all be worth it in the end, though. Get your configuration management
right and your teams will be safer, more productive, and faster to make
changes!
Good luck—and from now on, audit, track, and write everything-as-code!

Change management is a systematic approach to dealing with the transition
or transformation of an organization's goals, processes or technologies. The
purpose of change management is to implement strategies for effecting
change, controlling change and helping people to adapt to change. Such
strategies include having a structured procedure for requesting a change, as
well as mechanisms for responding to requests and following them up.
To be effective, the change management process must take into consideration
how an adjustment or replacement will impact processes, systems, and
employees within the organization. There must be a process for planning and
testing change, a process for communicating change, a process for scheduling
and implementing change, a process for documenting change and a process
for evaluating its effects. Documentation is a critical component of change
management, not only to maintain an audit trail should a rollback become
necessary but also to ensure compliance with internal and external controls,
including regulatory compliance.

This checklist can be used to create a simple change management plan.
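
As a minimal illustration of the process elements just described (planning, testing, communication, scheduling, documentation, and evaluation), here is a hypothetical Python sketch of a change request record and a readiness check; it is not a formal change management tool.

```python
# Hypothetical change-request record reflecting the process elements described above.
# Field names and the readiness rule are illustrative only.

change_request = {
    "id": "CHG-0042",
    "description": "Upgrade database server to a new minor version",
    "impact_assessed": True,        # processes, systems, and employees considered
    "test_plan": "restore snapshot in staging and run regression suite",
    "communication_plan": "notify application owners 48 hours in advance",
    "scheduled_window": "Sat 02:00-04:00",
    "rollback_plan": "revert to pre-change snapshot",
    "approved_by": None,
}

def ready_to_implement(cr):
    """A change is ready only when it is assessed, planned, communicated, and approved."""
    required = ["impact_assessed", "test_plan", "communication_plan",
                "scheduled_window", "rollback_plan", "approved_by"]
    return all(bool(cr.get(field)) for field in required)

print(ready_to_implement(change_request))   # False: still awaiting approval
```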

Types of organizational change
Change management can be used to manage many types of organizational
change. The three most common types are:
Developmental change - Any organizational change that improves on previously
established processes and procedures.
Transitional change - Change that moves an organization away from its current
state to a new state in order to solve a problem, such as mergers and
acquisitions and automation.
Transformational change - Change that radically and fundamentally alters the
culture and operation of an organization. In transformational change, the end
result may not be known. For example, a company may pursue entirely
different products or markets.
Importance and effects of change management
As a conceptual business framework for people, processes and the
organization, change management increases the success of critical projects and
initiatives and improves a company’s ability to adapt quickly.
Business change is constant and inevitable, and when poorly managed it has the potential to cause organizational stress as well as unnecessary and costly rework.
By standardizing the consistency and efficiency of assigned work, change
management assures that the people asset of an organization is not
overlooked. As changes to work occur, change management helps employees to
understand their new roles and build a more process-driven culture.
Change management also encourages future company growth by allowing it to
remain dynamic in the marketplace.
Popular models for managing change
Best practice models can provide guiding principles and help managers align
the scope of proposed changes with available digital and nondigital tools.
Popular models include:
ADKAR: The ADKAR model, created by Prosci founder Jeff Hiatt, consists of five
sequential steps:
Awareness of the need for change;
Desire to participate in and support the change;
Knowledge about how to change;
Ability to implement change and behaviors; and
Reinforcement to sustain the change.
Bridges' Transition Model: Change consultant William Bridges' model focuses
on how people adjust to change. The model features three stages: a stage
for letting go, a stage of uncertainty and confusion and a stage for acceptance.
Bridges' model is sometimes compared to the Kübler-Ross five stages of grief
(denial, anger, bargaining, depression and acceptance).
IT Infrastructure Library (ITIL): The U.K. Cabinet Office and Capita plc oversee a
framework that includes detailed guidance for managing change in IT
operations and infrastructure.
Kotter's 8-Step Process for Leading Change: Harvard University professor John
Kotter's model has eight steps:
increasing the urgency for change;
creating a powerful coalition for change;
creating a vision for change;
communicating the vision;
removing obstacles;
creating short-term wins;
building on them; and
anchoring the change in corporate culture.
Lewin's Change Management Model: Psychologist Kurt Lewin created a three-
step framework that is also referred to as the Unfreeze-Change-Freeze (or
Refreeze) model.
McKinsey 7S: Business consultants Robert H. Waterman Jr. and Tom Peters
designed this model to holistically look at seven factors that affect change:
shared values;
strategy;
structure;
systems;
style;
staff; and
skills.
Popular change management tools
Digital and nondigital change management tools can help change management
officers research, analyze, organize and implement changes. In a small
company, the tools may simply consist of spreadsheets, Gantt
charts and flowcharts. Larger organizations typically use software suites to
maintain change logs digitally and provide stakeholders with an integrated,
holistic view of change and its effects.
Popular change management software applications include:
ChangeGear Change Manager (SunView Software): change management
support for DevOps and ITIL automation, as well as business roles.
ChangeScout (Deloitte): cloud-based organizational change management
application for evaluating sea changes, as well as incremental changes.

eChangeManager (Giva): a cloud-based, stand-alone IT change management
application.
Freshservice (Freshworks): an online ITIL change management solution
featuring workflow customization capabilities and gamification features.
Remedy Change Management 9 (BMC Software): assistance for managers with
planning, tracking and delivering successful changes that are compliant with
ITIL and COBIT.
Change management certifications
Change management practitioners can earn certifications that recognize their
ability to manage projects, manage people and guide an organization through a
period of transition or transformation. Popular certifications for change
management are issued by:
Change Management Institute (CMI): CMI offers Foundation, Specialist and
Master certifications.
Prosci: The Change Management Certification validates the recipient is able to
apply holistic change management methodologies and the ADKAR model to a
project.
Association of Change Management Professionals (ACMP): ACMP offers a
Certified Change Management Professional (CCMP) certification for best
practices in change management.
Management and Strategy Institute (MSI): The Change Management Specialist
(CMS) certification attests to the recipient's ability to design and manage
change programs.
Cornell University's SC Johnson College of Business: The Change Leadership
certification program was developed to authenticate a change agent’s ability to
carry out a change initiative. The certification requires four core courses and
two leadership electives.
How change management works
To understand how change management works, it’s best to apply the concepts
and tools to a specific area of business. Below are examples of how change
management works for project management, software development and IT
infrastructure.
Change management for project management
Change management is an important part of project management. The project
manager must examine change requests and determine the effect a change will
have on the project as a whole. The person or team in charge of change
control must evaluate the effect a change in one area of the project can have
on other areas, including:

Scope: Change requests must be evaluated to determine how they will affect
the project scope.
Schedule: Change requests must be assessed to determine how they will alter
the project schedule.
Costs: Change requests must be evaluated to determine how they will affect
project costs. Labor is typically the largest expense on a project, so overages on
completing project tasks can quickly drive changes to the project costs.
Quality: Change requests must be evaluated to determine how they will affect
the quality of the completed project. Changes to the project schedule, in
particular, can affect quality as the workforce may generate defects in work
that is rushed.
Human resources: Change requests must be evaluated to determine if
additional or specialized labor is required. When the project schedule changes,
the project manager may lose key resources to other assignments.
Communications: Approved change requests must be communicated to the
appropriate stakeholders at the appropriate time.
Risk: Change requests must be evaluated to determine what risks they pose.
Even minor changes can have a domino effect on the project and introduce
logistical, financial or security risks.
Procurement: Changes to the project may affect procurement efforts for
materials and contract labor.
Stakeholders: Changes to the project can affect who is a stakeholder, in
addition to the stakeholders' synergy, excitement and support of the project.
When an incremental change has been approved, the project manager will
document the change in one of four standard change control systems to ensure
all thoughts and insight have been captured with the change request. (Changes
that are not entered through a control system are labeled defects.) When a
change request is declined, this is also documented and kept as part of the
project archives.
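The per-area evaluation described above can be rolled up into a simple recommendation, as in the rough sketch below. The 1-5 rating scale and the threshold of 4 are illustrative assumptions, not part of any formal project management method.

# A rough sketch of rolling up per-area impact ratings for a change request.
AREAS = ["scope", "schedule", "costs", "quality", "human resources",
         "communications", "risk", "procurement", "stakeholders"]

def assess_change(ratings, escalation_threshold=4):
    """Recommend a handling path based on the worst per-area impact rating (1-5)."""
    missing = [area for area in AREAS if area not in ratings]
    if missing:
        raise ValueError("no rating supplied for: " + ", ".join(missing))
    worst = max(ratings.values())
    if worst >= escalation_threshold:
        return "escalate to the change control board"
    return "approve and document at project level"

ratings = {area: 2 for area in AREAS}
ratings["schedule"] = 4     # a schedule slip pushes the request up for review
print(assess_change(ratings))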
Change management for software development
In software project management, change management strategies and tools
help developers manage changes to code and its associated
documentation. Agile software development environments actually encourage
changes to requirements and/or the user interface (UI). Changes are not addressed in the middle of an iteration, however; they are scheduled as stories
or features for future iterations.

Version control software tools assist with documentation and prevent more
than one person from making changes to code at the same time. Such tools
have capabilities to track changes and back out changes when necessary.
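The following is a minimal, toy illustration of those two capabilities, tracking changes and backing them out. It is not modeled on any particular version control product; the class and revision structure are assumptions for the example.

class ToyVersionStore:
    """Keeps every revision of a document so changes can be tracked and backed out."""

    def __init__(self, initial_text=""):
        self._revisions = [{"author": "initial", "text": initial_text}]

    def commit(self, author, new_text):
        # Record who changed what; the full history is the audit trail.
        self._revisions.append({"author": author, "text": new_text})
        return len(self._revisions) - 1  # revision number

    def current(self):
        return self._revisions[-1]["text"]

    def back_out(self):
        # Revert the most recent change by re-committing the previous content.
        if len(self._revisions) < 2:
            raise RuntimeError("nothing to back out")
        previous = dict(self._revisions[-2])
        previous["author"] = "rollback"
        self._revisions.append(previous)

store = ToyVersionStore("v1 of the deployment guide")
store.commit("alice", "v2 of the deployment guide")
store.back_out()
print(store.current())  # -> "v1 of the deployment guide"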

Change management for IT infrastructure


Change management tools are also used to track changes made to an IT
department's hardware infrastructure. As with other types of change
management, standardized methods and procedures ensure every change
made to the infrastructure is assessed, approved, documented, implemented
and reviewed in a systematic manner.
When changes are made to hardware settings, it may also be referred to as
configuration management (CM). Technicians use configuration management
tools to review the entire collection of related systems and verify the effects a
change made to one system has on other systems.
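A small sketch of that "review related systems" idea follows: given a CMDB-style dependency map (the system names are purely hypothetical), list everything that could be affected by a change to one configuration item.

DEPENDS_ON = {
    "payroll-app":   ["db-server-01", "auth-service"],
    "auth-service":  ["db-server-01"],
    "intranet-site": ["web-server-02", "auth-service"],
}

def affected_by(changed_item):
    """Return configuration items that directly or indirectly depend on changed_item."""
    impacted = set()
    for item, deps in DEPENDS_ON.items():
        if changed_item in deps:
            impacted.add(item)
            impacted |= affected_by(item)  # follow the dependency chain upward
    return impacted

print(sorted(affected_by("db-server-01")))  # ['auth-service', 'intranet-site', 'payroll-app']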

Change management challenges
Companies developing a change management program from the ground up
often face daunting challenges. In addition to a thorough understanding of
company culture, the change management process requires an accurate
accounting of the systems, applications and employees to be affected by a
change. Additional change management challenges include:

Resource management - Managing the physical, financial, human, informational and intangible assets/resources that contribute to an organization’s strategic plan becomes increasingly difficult when implementing change.

Resistance - The executives and employees who are most affected by a change may resist it. Since change may result in unwanted extra work, ongoing resistance is common. Transparency, training, planning and patience can help quell resistance and improve overall morale.

Communication - Companies often fail to consistently communicate change initiatives or include their employees in the process. Change-related communication requires an adequate number of messages, the involvement of enough stakeholders to get the message out and multiple communication channels.

New technology - The application of new technologies can disrupt an employee’s entire workflow. Failure to plan ahead will stall change. Companies may avoid this by creating a network of early learners who can champion the new technology.

Multiple points of view - In change management, success factors differ for everyone based on their role in the organization. This creates a challenge in terms of managing multiple priorities simultaneously.

Scheduling issues - Deciding whether a change program will be long or short-term, and clearly defining milestone deadlines, is complicated. Some organizations believe that shorter change programs are most effective. Others prefer a more gradual approach, as it may reduce resistance and errors.

Reference https://searchcio.techtarget.com/definition/change-management

Vulnerability management is a proactive approach to managing network security. It includes processes for:

Checking for vulnerabilities: This process should include regular network scanning, firewall logging, penetration testing or use of an automated tool like a vulnerability scanner.

Identifying vulnerabilities: This involves analyzing network scans and pen test results, firewall logs or vulnerability scan results to find anomalies that suggest a malware attack or other malicious event has taken advantage of a security vulnerability, or could possibly do so.

Verifying vulnerabilities: This process includes ascertaining whether the identified vulnerabilities could actually be exploited on servers, applications, networks or other systems. This also includes classifying the severity of a vulnerability and the level of risk it presents to the organization.

Mitigating vulnerabilities: This is the process of figuring out how to prevent vulnerabilities from being exploited before a patch is available, or in the event that there is no patch. It can involve taking the affected part of the system offline (if it is non-critical), or various other workarounds.

Patching vulnerabilities: This is the process of getting patches -- usually from the vendors of the affected software or hardware -- and applying them to all the affected areas in a timely way. This is sometimes an automated process, done with patch management tools. This step also includes patch testing.
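As a rough sketch of how these stages fit together in practice, the example below walks a set of toy findings through verification, prioritization and a patch-or-mitigate decision. The record fields and severity numbers are assumptions for illustration, not any scanner's actual output format.

# Toy vulnerability records, loosely modeled on scanner output (fields are assumed).
findings = [
    {"id": "VULN-101", "asset": "web-server-02", "severity": 9.1, "patch_available": True,  "verified": True},
    {"id": "VULN-102", "asset": "db-server-01",  "severity": 5.4, "patch_available": False, "verified": True},
    {"id": "VULN-103", "asset": "intranet-site", "severity": 3.2, "patch_available": True,  "verified": False},
]

def plan_action(finding):
    """Map a finding to the next stage: re-verify, patch, or mitigate."""
    if not finding["verified"]:
        return "re-verify (possible false positive)"
    if finding["patch_available"]:
        return "test and apply patch"
    # No patch yet: mitigate, e.g. take the component offline or apply a workaround.
    return "mitigate until a patch ships"

# Work the highest-severity issues first.
for f in sorted(findings, key=lambda f: f["severity"], reverse=True):
    print(f["id"], f["asset"], "->", plan_action(f))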

Reference https://whatis.techtarget.com/definition/vulnerability-and-patch-management

Everyone knows that you need to make backups and test them, right? But
have you considered the security issues of backup media after you've
performed your nightly duty?
Backup media requires specialized and focused security controls. Just think about it: a single piece of backup media can easily contain over 100 GB of confidential, secret, sensitive, proprietary and/or private data, yet it can be concealed in a jacket pocket or a briefcase. While it may be difficult to near impossible for someone to swipe one of your network servers, it is merely a matter of shoplifting and concealment to walk out of your facilities with a piece of backup media.

Backup media should first and foremost be clearly and distinctly labeled, not just with labels defining the content stored on them but also with the classification level of the data. Once labeled, it should retain that label for the lifetime of
the media. Never ever re-use media from a higher classification level to store
data at a lower classification level. Remember that it is nearly always possible
to recover data even after it has been deleted and overwritten on magnetic storage devices and media. Media should be treated with the same -- or greater --
greater -- security precaution warranted by the classification of data it holds.
Once media is classified, it must remain under the proper security controls for
its classification for the lifetime of that media. That means from the moment
the media is written until it is securely destroyed. The activities and events of
media should be logged: its travels/movements, storage locations and chain of
possession should be written down and verified. Media should be transported
securely from the onsite backup devices to the offsite secure storage location.
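As a rough illustration of the labeling and logging guidance above, the sketch below models a piece of media with a fixed classification label and a movement log. The classification levels and field names are assumptions, not taken from any standard.

from datetime import datetime, timezone

# Illustrative classification labels (assumed, not a standard).
LEVELS = ["secret", "confidential", "internal", "public"]

class BackupMedium:
    def __init__(self, media_id, classification):
        if classification not in LEVELS:
            raise ValueError("unknown classification label")
        self.media_id = media_id
        self.classification = classification  # fixed for the lifetime of the media
        self.movement_log = []

    def can_store(self, data_classification):
        # Never reuse media labeled at one level to hold data at another level;
        # here we only allow data at the medium's own classification.
        return data_classification == self.classification

    def record_movement(self, from_location, to_location, courier):
        self.movement_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "from": from_location,
            "to": to_location,
            "courier": courier,
        })

tape = BackupMedium("TAPE-0042", "confidential")
print(tape.can_store("internal"))   # False: would repurpose the media for a different level
tape.record_movement("onsite backup library", "offsite vault", courier="secure transport co.")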

If you can adopt the mindset that backup media are pocket-sized portable
versions of your organization's data assets, you'll be able to adequately plan
and implement security controls, precautions and deterrents. If you fail to place
importance on backup media management and handling, then you are
effectively handing your IT infrastructure over to anyone who wants access.
Secure media management should be addressed in your security policy and the
exact procedures to perform should be defined in your standards, guidelines
and procedures documentation.

What is IT Asset Management?
In the previous chapter we explored the concept of ‘An Asset’ – something of
value that the company owns which has associated benefits and risks.
‘IT Assets’ are things of value owned or managed by IT.
These IT Assets may be software, hardware, systems or services. It follows
that IT Asset Management (ITAM) is the practice of managing these IT Assets
throughout their life in the business for maximum value and minimal risk.
Definitions and Overlap
The source material illustrates how the following three disciplines intersect:
IT Asset Management (ITAM)
Software Asset Management (SAM)
Hardware Asset Management (HAM)
SAM and HAM are subcomponent parts of the broader discipline of ITAM.
The terms SAM and ITAM tend to be used interchangeably in the industry – they are not discrete disciplines but have significant overlap and dependencies.

The key dependency between managing hardware and software is the
platform. The amount of money paid for software can vary dramatically based
on the hardware used to run it. It could be firmware on a switch, the number of
processors on a server or the operating system of a desktop.
Managing hardware and software are inextricably linked. For example if you
want to manage your Windows operating system estate you need to know how
many hardware devices you own, or if you want to manage processor based
server licensing metrics you need to know the hardware configuration of the
server, and so on.
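To show why the hardware configuration matters for software spend, here is a small sketch of a processor-based licensing calculation. The "licenses per core" metric and the inventory data are purely hypothetical and do not reflect any vendor's real terms.

import math

# Hypothetical server inventory (hardware asset data).
servers = [
    {"hostname": "db-server-01", "sockets": 2, "cores_per_socket": 8},
    {"hostname": "db-server-02", "sockets": 4, "cores_per_socket": 10},
]

# Hypothetical metric: one license covers two physical cores.
CORES_PER_LICENSE = 2

def licenses_required(server):
    total_cores = server["sockets"] * server["cores_per_socket"]
    return math.ceil(total_cores / CORES_PER_LICENSE)

total = sum(licenses_required(s) for s in servers)
for s in servers:
    print(s["hostname"], "->", licenses_required(s), "licenses")
print("total licenses to purchase:", total)

Without an accurate hardware inventory, the core counts feeding this kind of calculation are guesses, which is exactly the dependency between HAM and SAM described above.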
Similarly job titles are usually split between hardware and software. A Software
Asset Manager and Hardware Asset Manager might report to the IT Asset
Manager.
Implementing a good SAM practice will undoubtedly have positive
repercussions on the management of other assets but will fall short of full
ITAM. For example the retirement and disposal of hardware is not typically a
SAM function.
Why SAM gets the limelight
SAM tends to get more exposure and emphasis within the field of ITAM
because:
Software typically constitutes more IT budget than hardware and therefore
represents more value to manage.
Software is riskier (it is often complicated and intangible, and it typically contains more contractual booby traps than hardware).
The most compelling and costly business drivers in ITAM relate to software.
Managing Intangible Things
Software can be complicated, expensive and ephemeral, unlike hardware that
collects dust, takes up space and can serve as a doorstop or paperweight when
not in use.
Managing intangible things is not an easy business – try explaining to a non-IT person in your business how your budget was spent on laptops compared to something intangible like Client Access Licenses (a licensing concept with no physical evidence: the right of a user to access technology on a server) – one is significantly easier to explain than the other.
Similarly, when a member of staff leaves the business and visits the IT
department to return their IT equipment, their laptop and associated cables
and gadgets are physical inventory left on the desk whereas the software they
return is often digital, virtual, sometimes invisible and commonly more
expensive than the hardware. On the basis that you can’t control what you don’t know exists – software represents significant risk and has potential for
enormous waste if not managed correctly.

Management of hardware should not be overlooked. IT hardware in a business is usually inextricably linked to corporate data. So there is often a compelling
driver to manage hardware effectively if only so that corporate data does not
leak off the premises.
At the time of writing it has just been reported that the Information Commissioner’s Office (ICO) in the UK has fined the NHS £200K for selling old
hardware containing patient records on eBay. A third party offered to dispose
of the equipment for free resulting in patient records showing up in the hands
of eBay buyers.

Reference https://www.itassetmanagement.net/2013/08/23/managing-software-asset/

Managing third-party cyber risk is critical for businesses, but a lack of
continuous monitoring, consistent reporting, and other blind spots are
creating challenges that could leave organizations vulnerable to data breaches
and other consequences.

Most organizations work with hundreds, if not thousands, of third parties, creating new risks that must be actively managed.

The financial industry, in particular, has a massive business ecosystem made up of legal organizations, accounting and human resources firms,
management consulting and outsourcing firms, and information technology
and software providers.
Each of these vendors poses a potential weak spot for cyber defenses if risk is
not actively managed to protect the exchange of data and other sensitive
information.
A BitSight and Center for Financial Professionals (CeFPro) joint study “Third-
Party Cyber Risk for Financial Services: Blind Spots, Emerging Issues & Best Practices” sheds light on how financial institutions are addressing challenges
associated with third-party cyber risk.
“Managing third-party cyber risk has rapidly become the #1 concern for
businesses,” said Jake Olcott, Vice President of Communications and
Government Affairs at BitSight. “Many in the financial sector are taking action
to manage that risk, but as our survey shows, there is vast room for
improvement in key areas like continuous monitoring and effective board
reporting.”

Key findings from the Third-Party Cyber Risk for Financial Services report
Third-party cyber risk is driving key business decisions. Nearly 97 percent of
respondents said that cyber risk affecting third parties is a major issue.
Meanwhile, nearly 80 percent of respondents said they have terminated or
would decline a business relationship due to a vendor’s cybersecurity
performance. 1 in 10 organizations has a role specifically dedicated to vendor,
third-party or supplier risk.

There is a lack of consistent third-party risk measurement and reporting. Only 44 percent of respondents are reporting on this risk to their executives and
boards on a regular basis. This lack of regular reporting could be the reason
why nearly 1 in 5 respondents think boards and executives are not confident or
do not understand their approaches to third-party risk management (TPRM).

A majority of organizations aren’t using critical tools. Respondents reported that they still rely on tools like annual on-site assessments, questionnaires and
facility tours to assess third-party security posture, giving them limited visibility
into their third-party cyber risk. Meanwhile, only 22 percent of organizations
are currently using a security ratings service to continuously monitor the
cybersecurity performance of third parties, though 30 percent are currently
evaluating security ratings providers.

TPRM challenges and concerns for the future continue to grow. Companies are
concerned with the accuracy and actionability of risk assessment data, as well
as an unclear responsibility for this type of risk management within their
organizations. Looking toward the future, respondents are focused on making
their security programs more effective while staying up-to-date on new
regulations and prioritizing continuous monitoring and visibility.

“This report raises a number of interesting questions and challenges for the
industry; with C-suite professionals taking responsibility, it is clear that the vast
majority of respondents’ organizations understand the critical importance of
third-party cyber risk; it is also apparent that there needs to be clarity going
forward, with increased communication up to the Board level,” said Andreas
Simou, Managing Director at CeFPro.
“Although there has been a significant increase in effectiveness, attention, and
resources focused toward third-party cyber risk over the last few years, there is
still much to be done; utilizing more effective tools and techniques to overcome
the ever-increasing challenges being faced within the industry, with third- (and
fourth-) party cyber risk as just one key area to be addressed. The report
highlights a number of potential solutions and ways forward.”

New tools and best practices are becoming readily available to help
organizations address some of the key challenges and concerns uncovered by
the survey.
In order to effectively manage this growing risk and stay ahead of future
challenges, organizations must utilize best practices and trust continuous
monitoring solutions like security ratings to help measure and manage their
cyber risk with third-party risk data that is accurate and actionable.

Reference https://www.helpnetsecurity.com/2019/04/03/third-party-cyber-risk-
management-approaches/

In cybersecurity, a sandbox is an isolated environment on a network that
mimics end-user operating environments. Sandboxes are used to safely
execute suspicious code without risking harm to the host device or network.

Using a sandbox for advanced malware detection provides another layer of protection against new security threats—zero-day (previously unseen)
malware and stealthy attacks, in particular. And what happens in the sandbox,
stays in the sandbox—avoiding system failures and keeping software
vulnerabilities from spreading.

Threats Sandbox Testing Protects Against


Sandbox environments provide a proactive layer of network security defense
against new and Advanced Persistent Threats (APT). APTs are custom-
developed, targeted attacks often aimed at compromising organizations and
stealing data. They are designed to evade detection and often fly under the
radar of more straightforward detection methods.

How Does Sandbox Technology Work?
Sandbox testing proactively detects malware by executing, or detonating, code
in a safe and isolated environment to observe that code’s behavior and output
activity. Traditional security measures are reactive and based on signature
detection—which works by looking for patterns identified in known instances
of malware. Because that detects only previously identified threats, sandboxes
add another important layer of security. Moreover, even if an initial security defense utilizes artificial intelligence or machine learning (signatureless detection), these defenses are only as good as the models powering them – there is still a need to complement these solutions with advanced malware detection.
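The sketch below is only a rough, process-level analogy of "detonate and observe": it runs a hypothetical sample in a separate process with a time limit and records its observable output. Real sandboxes rely on full system emulation or virtual machines, not a simple subprocess, and the sample path here is invented.

import subprocess
import sys

def detonate(sample_path, time_limit=30):
    """Run a suspicious sample in a child process and capture its observable behavior."""
    report = {"sample": sample_path, "timed_out": False}
    try:
        result = subprocess.run(
            [sys.executable, sample_path],   # hypothetical: the sample is a Python script
            capture_output=True, text=True, timeout=time_limit,
        )
        report["exit_code"] = result.returncode
        report["stdout"] = result.stdout[:500]   # keep the report small
        report["stderr"] = result.stderr[:500]
    except subprocess.TimeoutExpired:
        report["timed_out"] = True               # long-running behavior is itself a signal
    return report

# Example use (path is hypothetical):
# print(detonate("suspicious_sample.py"))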

Sandbox Security Implementations

There are several options for sandbox implementation that may be more or less
appropriate depending on your organization’s needs. Three varieties of
sandbox implementation include:

Full System Emulation: The sandbox simulates the host machine’s physical
hardware, including CPU and memory, providing deep visibility into program
behavior and impact.

Emulation of Operating Systems: The sandbox emulates the end user’s operating system but not the machine hardware.

Virtualization: This approach uses a virtual machine (VM) based sandbox to contain and examine suspicious programs.

Reference https://www.forcepoint.com/cyber-edu/sandbox-security

We will cover this topic in detail in the Incident Response: A Hands-On Approach course, along with the ISO 27001 and PCI DSS courses, so do not worry about any references mentioned here; they will be discussed at a later stage. For now, you should understand what information security incident management is and what stages are involved in the process.

Information security incident management policy


An organization’s information security incident management policy should provide the
formally documented principles and intentions used to direct decision-making and ensure
consistent and appropriate implementation of processes, procedures, etc. with regard to this
policy.

Any information security incident management policy should be part of the information
security strategy for an organization. It should also support the existing mission of its parent
organization and be in line with already existing policies and procedures.

An organization should implement an information security incident management policy that outlines the processes, responsible persons, authority and reporting lines (specifically the primary point of contact for reporting suspected incidents) when an information security incident occurs. The policy should be reviewed regularly to ensure it reflects the latest organizational structure, processes, and technology that can affect incident response. The policy should also outline any awareness and training initiatives within the organization that are related to incident response.

Involved parties
A successful information security incident management policy should be created and
implemented as an enterprise-wide process. To that end, all stakeholders or their
representatives should be involved in the development of the policy from the initial planning
stages through the implementation of any process or response team. This may include legal
advisors, public relations and marketing staff, departmental managers, security staff, system
and network administrators, ICT staff, helpdesk staff, upper-level management, and, in some
cases, even facilities staff.

An organization should ensure that its information security incident management policy is
approved by a member of top management, with commitment from all of top management.

Ensuring continued management commitment is vital for the acceptance of a structured approach to information security incident management. Personnel need to recognize an
incident, know what to do and understand the benefits of the approach by the organization.
Management needs to be supportive of the information security incident policy to ensure
that the organization commits to resourcing and maintaining an incident response capability.
The information security incident management policy should be made available to every
employee and contractor and should also be addressed in information security awareness
briefings and training.

Information security incident management plan

The aim of an information security incident management plan is to document the activities
and procedures for dealing with information security events, incidents and vulnerabilities,
and communication of them. The plan stems from and is based on the information security
incident management policy.

Overall, the plan documentation should encompass multiple documents including the forms,
procedures, organizational elements and support tools for the detection and reporting of,
assessment and decision making related to, responses to and learning lessons from
information security incidents.
The plan may include a high level outline of the basic flow of incident management activities
to provide structure and pointers to the various detailed components of the plan. These
components will provide the step-by-step instructions for incident handlers to follow using
specific tools, following specific workflows or handling specific types of incidents based on
the situation.
The information security incident management plan comes into effect whenever an
information security event is detected or information security vulnerability is reported.
An organization should use the plan as a guide for the following:
a) responding to information security events;
b) determining whether information security events become information security incidents;
c) managing information security incidents to conclusion;
d) responding to information security vulnerabilities;

The key stages of Information Security Incident Management are:

• Detection
• Response
• Mitigation
• Reporting
• Recovery
• Remediation
• Lessons learned

Detection
This is the first step in incident management. You must be able to determine whether an incident has actually taken place. You may have to deal with various false positives or other issues along the way.

You can use the following solutions as part of the detection phase:

• Intrusion detection systems (IDSs)/intrusion prevention systems (IPSs)


• Anti-malware solutions
• Log analysis
• Firewalls
• Vulnerability scan results
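As a small illustration of the log-analysis item above, the sketch below counts failed logins per source address and flags possible brute-force activity. The log format, the field layout and the threshold of five failures are assumptions made for the example, not output from any real product.

from collections import Counter

# Hypothetical log lines in the form: "<timestamp> <result> user=<name> src=<ip>"
log_lines = [
    "2019-04-03T10:01:12Z LOGIN_FAILED user=admin src=203.0.113.7",
    "2019-04-03T10:01:15Z LOGIN_FAILED user=admin src=203.0.113.7",
    "2019-04-03T10:01:19Z LOGIN_FAILED user=admin src=203.0.113.7",
    "2019-04-03T10:01:22Z LOGIN_FAILED user=admin src=203.0.113.7",
    "2019-04-03T10:01:25Z LOGIN_FAILED user=admin src=203.0.113.7",
    "2019-04-03T10:02:30Z LOGIN_OK user=jdoe src=198.51.100.4",
]

FAILURE_THRESHOLD = 5  # assumed tuning value; set too low it creates false positives

def detect_bruteforce(lines, threshold=FAILURE_THRESHOLD):
    failures = Counter()
    for line in lines:
        if "LOGIN_FAILED" in line:
            src = line.split("src=")[-1]
            failures[src] += 1
    return [src for src, count in failures.items() if count >= threshold]

print(detect_bruteforce(log_lines))  # ['203.0.113.7']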

Administrative investigations are usually conducted when the incident is the result of insider activity by a malicious or disgruntled employee. In such cases, the organization involves HR, Legal and other relevant departments to take appropriate action in line with the applicable legal, statutory and regulatory requirements.

Depending on the kind of crime that took place (for example, whether the crime affected only the organization internally or also affected its customers externally, where customer data was compromised), an appropriate criminal incident classification is made and the corresponding actions are taken. That classification determines whether outside authorities, i.e. police, law enforcement and/or relevant security services, need to be involved.

When law enforcement conducts the investigation, the organization may or may not be
involved in the process; this is the option of the law enforcement body. In many jurisdictions,
law enforcement may request the organization to voluntarily collect or disclose information
about the situation to further the investigation and build a case. Typically, the organization
may opt to participate or not participate in an investigation when informally requested to do
so. However, if the law enforcement entity acquires a warrant or subpoena, which are
governmental/judicial orders to disclose information, then the organization must comply
with
the request to the fullest extent required. Any interference or negligence on the part of the
organization in fulfilling mandated requests may actually constitute additional crimes:
obstruction of justice, contempt of court, interfering with an investigation, and so forth.

Unlike criminal proceedings, a civil dispute involves a court but not a prosecutor. An
investigation with the intended purpose of a lawsuit should involve the same degree of
documentation and adherence to detail as a criminal investigation, because the organization
will not be deciding the outcome but will be trusting the court to determine if either party
owes restitution to the other.

Some incidents may involve components of both criminal and civil actions; for instance, if the
organization is hacked by a malicious attacker, the hack itself might be a criminal act
(violating the law), and it might also cause damages for which the victim organization can sue
the attacker. In these situations, the parties to the civil suit can often use the evidence
collected during the criminal proceedings to support their claims. However, civil courts
usually
also allow a greater breadth of evidence that may be presented in a more liberal fashion than
in a criminal case—some of the restrictions placed on law enforcement when collecting
evidence do not apply to victims in civil cases. (For instance, a law enforcement agency might
need to get a court order to conduct network monitoring on a target environment, while the
owner of that environment—the victim organization—is allowed to monitor activity within
the environment and present resulting data without permission from the courts.)

If an organization decides to become involved in a civil suit, it must be understood that the
organization will be bearing the financial burden: attorneys’ and court fees and so forth
(sometimes, depending on the case and the jurisdiction, the winning side of a civil case may
transfer this burden to the loser, but this is not always true and that cost is only recovered

Some investigations will be done by or on the behalf of regulatory bodies. When an
organization is involved in regulated activity, that activity necessarily is subject to
investigation by the pertinent regulator(s).

Regulators may conduct their own investigations, require the target organization to acquire
and present information to the regulator, or engage a third party to perform the
investigation. In many jurisdictions, regulatory investigation has the force of law, so it will
have similar processes to criminal investigations but require a much lower threshold of
access (regulators typically do not need warrants, court orders, or subpoenas to gather
evidence) and a much lower burden of evidence to make findings (in some jurisdictions, such as North America, the EU, and various other applicable countries).

There are many industry standards for investigations of all sorts, including IT security and
data investigations; applicable standards for a given organization depend on a host of
variables, such as geographic region/ jurisdiction, the nature of the data in question, the
business of the organization, and so forth. The following is a sample list of standards from
around the world; this list is in no way comprehensive or definitive, and the candidate will
not be required to memorize these standards for certification purposes. However, many of
these standards include common principles and methods of execution, so the candidate is
encouraged to review them for insight into professional investigation approaches and
expectations.

We will cover in detail the complete process of ISO 27043 standard and the incident
investigation process it outlines.

ISO 27037 Information technology — Security techniques — Guidelines for identification,
collection, acquisition, and preservation of digital evidence we will cover this standard in
detail during stage 2 and 3 of the digital forensics and threat hunting.

Evidence Collection and Handling


All material associated with an incident could be pertinent to an investigation and used as
evidence. This includes the following:
• Data that may have been compromised.
• Systems (hardware, software, and media) that may have been compromised.
• Data about the incident (all monitoring data from assets reviewing the data/systems that
may have been compromised).
• Information from people with knowledge of the incident.
• Information about the incident scene. With an IT-based incident, the incident scene can
actually involve many geophysical locations and jurisdictions, including the site where the
compromised systems/data resides, the location of the intruder (if unauthorized intrusion
was an element of the incident), and any locations between the compromised systems
and the intruder where resources were used to aid the intruder.
• There are many sources and forms of evidence, and it all needs to be collected, tracked,
and maintained carefully. These are some common practices for handling evidence the
security professional should be aware of:
• Maintain a chain of custody. Evidence needs to be handled and maintained in a secure fashion, from the time it is collected until it is presented (usually, to a court). The chain of custody records who collected the evidence and who has had possession of it, where it was stored, and when and why it changed hands, with no unexplained gaps from collection to presentation.
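A minimal sketch of that idea follows: each evidence item gets a cryptographic hash at collection time so later tampering can be detected, and every hand-off is logged. The field names are illustrative and not taken from any forensic tool.

import hashlib
from datetime import datetime, timezone

def sha256_of(data):
    return hashlib.sha256(data).hexdigest()

class EvidenceItem:
    def __init__(self, item_id, data, collected_by):
        self.item_id = item_id
        self.original_hash = sha256_of(data)   # fingerprint taken at collection time
        self.custody_log = [{
            "when": datetime.now(timezone.utc).isoformat(),
            "holder": collected_by,
            "action": "collected",
        }]

    def transfer(self, new_holder, reason):
        self.custody_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "holder": new_holder,
            "action": "transferred: " + reason,
        })

    def verify(self, data):
        # True only if the evidence is bit-for-bit identical to what was collected.
        return sha256_of(data) == self.original_hash

disk_image = b"...raw bytes of an acquired disk image..."
item = EvidenceItem("CASE42-IMG-01", disk_image, collected_by="DEFR J. Khan")
item.transfer("evidence locker", reason="secure storage pending analysis")
print(item.verify(disk_image))  # True while the image is unmodified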
Application of this International Standard requires compliance with national laws, rules and
regulations. It should not replace specific legal requirements of any jurisdiction. Instead, it
may serve as a practical guideline for any DEFR(Digital Evidence First Responder) or DES
(Digital Evidence Specialist) in investigations involving potential digital evidence. It does not
extend to the analysis of digital evidence and it does not replace jurisdiction-specific
requirements that pertain to matters such as admissibility, evidential weighting, relevance
and other judicially controlled limitations on the use of potential digital evidence in courts of
law. In order to maintain the integrity of the digital evidence, users of this International Standard are required to adapt and amend the procedures it describes in accordance with the specific jurisdiction’s legal requirements for evidence.

Although this International Standard does not include forensic readiness, adequate forensic
readiness can largely support the identification, collection, acquisition, and preservation
process of digital evidence. Forensic readiness is the achievement of an appropriate level of
capability by an organization in order for it to be able to identify, collect, acquire, preserve,
protect and analyze digital evidence. Whereas the processes and activities described in this
International Standard are essentially reactive measures used to investigate an incident after
it occurred, forensic readiness is a proactive process of attempting to plan for such events.

• Admissibility: Only evidence that is acceptable to the court may be presented. The court
will inform the practitioner if some evidence is unacceptable.
• Accuracy: The evidence should be true and clear.

There are many ways to conduct an investigation and gather evidence. The following is a
basic, non-comprehensive list of common evidence gathering techniques and some of the
benefits and challenges associated with them.

• Automated capture: The organization’s monitoring activity can be used for collecting and
analyzing incident data in addition to the goals of detection and performance
optimization; this is especially true if the organization has a continuous monitoring
program in place. Normal logging can be copied and harvested for evidentiary purposes.

• Interviews: You can solicit information from the people involved with or who have insight
into an incident. However, for all organizations other than law enforcement entities, this
can pose some legal challenges in many jurisdictions. Some aspects that should be
considered when conducting interviews of personnel:

• Record when possible. In some jurisdictions, recording interviews can be problematic; check your local applicable laws. Be sure to notify the
interview subject that the conversation is being recorded (record the
notification).
• Conduct multiparty interviews. Never have a sole interviewer talk to the
subject.
• Ensure preservation of the subject’s rights. Comply with all applicable laws
regarding interviews. Make sure the subject is aware that they do not have
to partake in the interview (even when the choice to refuse an interview

Documentation is critical when handling digital devices that may contain potential digital
evidence. The DEFR (Digital Evidence First Responder) should adhere to the following points
during documentation:

Every activity taken should be documented. This is to ensure that no details have been left
out during the identification, collection, acquisition and preservation processes. It may also
be helpful in a cross-border investigation whereby the potential digital evidence gathered
from another part of the globe can be traced accordingly.

The DEFR should pay attention to the time and date settings if the digital devices are powered on, comparing them with a reliable and traceable time source. These time settings should be documented, and any differences should be noted. Some systems require significant user interaction in order to get the time and date settings, so the DEFR should be cautious not to modify the system; only properly trained personnel should retrieve these settings.

The DEFR should document anything visible on the digital device screen: active programs and
processes, as well the names of open documents. This documentation should include a
description of what is visible as some malicious programs may masquerade as well-known
software.

Any movement of the digital devices should be documented in accordance with local
requirement.

This topic was covered in detail in the previous domains:

Here definitions are covered at a high level.

Intrusion detection system (IDS): A solution that monitors the environment and
automatically recognizes malicious attempts to gain unauthorized access. The IDS will alert
someone within the organization (usually someone in the security office or the IT
department) for analysis and follow-up action.

Intrusion prevention system (IPS): A solution that monitors the environment and
automatically takes action when it recognizes malicious attempts to gain unauthorized
access. It will also typically notify someone within the organization that action has been
taken.

This will be covered in detail in the ISO 27001, PCI DSS and stage 2 courses of CODS.

The current trend in security management involves the use of tools that collect information
about the IT environment from many disparate sources to better examine the overall
security of the organization and streamline security efforts. These tools are generally known
as SIEM solutions.

NOTE: There is no formal industry standard defining SIEM solutions, their function, and their
implementation. “SIEM” is a marketing term used by vendors to describe tools that offer
some common functions (described in this section of the module). Practitioners should be
aware that similar tools offering
the same functionality may be termed “SEIM,” and many tools that were previously called
“SEM” or “SIM” may offer the same types of services.

Aggregation: The SIEM tool gathers information from across the environment. This offers a
centralized repository of security data and allows analysts to have a single interface with
which to perform their duties. The SIEM might gather
log data from:
• Firewalls
• IDS/IPS systems
• IT performance monitoring tools
• Network devices (routers/switches/gateways)
• Individual hosts/endpoints
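The sketch below illustrates the aggregation idea: events arriving from different sources in different shapes are normalized into one common record format so an analyst can query them in a single place. The source formats and field names are invented for the example and do not correspond to any particular SIEM product.

# Hypothetical raw events from two different sources.
firewall_event = {"ts": "2019-04-03T10:05:00Z", "action": "DENY", "src_ip": "203.0.113.7", "dst_port": 22}
ids_alert      = {"time": "2019-04-03T10:05:02Z", "sig": "SSH brute force", "attacker": "203.0.113.7", "sev": 4}

def normalize_firewall(e):
    return {"timestamp": e["ts"], "source": "firewall", "severity": 2,
            "summary": f"{e['action']} to port {e['dst_port']} from {e['src_ip']}"}

def normalize_ids(e):
    return {"timestamp": e["time"], "source": "ids", "severity": e["sev"],
            "summary": f"{e['sig']} from {e['attacker']}"}

# Central repository: one schema, many sources.
siem_store = [normalize_firewall(firewall_event), normalize_ids(ids_alert)]

# Analysts can now work from a single interface, e.g. highest severity first.
for event in sorted(siem_store, key=lambda e: e["severity"], reverse=True):
    print(event["timestamp"], event["source"], event["summary"])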

Ingress Monitoring
Ingress monitoring refers to surveillance and assessment of all inbound
communications traffic and access attempts. Devices and tools that offer logging and alerting
opportunities for ingress monitoring include the following:

• Firewalls
• Gateways
• Remote authentication servers
• IDS/IPS tools
• SIEM solutions
• Anti-malware solutions

Egress Monitoring
Egress monitoring is used to regulate data leaving the organization’s IT
environment. The term currently used in conjunction with this effort is “DLP”; a marketing
descriptor without standard definition, it is often referred to as “data leak protection” or
“data loss protection,” or some combination of those words. For purposes of addressing this
topic, the term DLP will be used synonymously with egress monitoring.

DLP tools function by comparing data leaving the control of the organization against a rule
set to determine whether that action is allowed. The DLP rule set can be defined by the
following:

Signature: Particular types of data might conform to certain strings that are readily
identifiable and can, therefore, be recognized by the tool. For instance, a DLP set to prevent the egress of credit card information might be taught to search for and sequester any string of 15 to 20 numeric characters.

Pattern matching: The DLP might be conditioned to look for two-word strings where each
word starts with one uppercase character and the rest are lowercase; in this way, the DLP
might restrict the export of individual
names. This might also be done for the frequency of a given word/words in the context of a
page or throughout a document to prevent the egress of proprietary or confidential
information.

Labeling: Sensitive assets within the environment might be tagged with specific labels that
will be recognized by the DLP tool. For instance, an organization trying to protect proprietary
information might embed labels such as “copyright,” “proprietary,” or “confidential” in data
assets that should not be shared outside the organization.
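Here is a minimal sketch of the three rule types above applied to outbound text. The regular expressions, the label list and the block/allow decision are simplified assumptions; real DLP products combine many more signals and enforcement points.

import re

# Signature: 15 to 20 consecutive digits (a crude stand-in for card numbers).
CARD_LIKE = re.compile(r"\b\d{15,20}\b")
# Pattern matching: two capitalized words in a row (a crude stand-in for personal names).
NAME_LIKE = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")
# Labeling: markers embedded in sensitive documents.
LABELS = ("confidential", "proprietary", "copyright")

def egress_decision(outbound_text):
    reasons = []
    if CARD_LIKE.search(outbound_text):
        reasons.append("signature: possible card number")
    if NAME_LIKE.search(outbound_text):
        reasons.append("pattern: possible personal name")
    if any(label in outbound_text.lower() for label in LABELS):
        reasons.append("label: sensitive marking present")
    return ("block", reasons) if reasons else ("allow", reasons)

print(egress_decision("Quarterly update attached."))
print(egress_decision("CONFIDENTIAL: invoice for Jane Smith, card 4111111111111111"))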

For DLP solutions to function properly, they usually need to be deployed in conjunction with other controls, such as network gateways, mail servers and endpoint agents that can enforce the rule set at the points where data leaves the environment.
Backup Storage Strategies

Accurate and comprehensive backups are instrumental to facilitating BCDR efforts; this is an
essential aspect of the availability facet of the CIA Triad. Some backup concepts the
candidate should be familiar with:

Onsite/offsite: There is a risk/benefit tradeoff to deciding the location of the organization’s backups.

o Onsite: The organization has full control (and responsibility) of the stored data. Cost may
be proportionally higher for the organization, depending on the organization’s core
competencies and type of business. (Example: a small or midsize organization might not have
the data center capacity and skillset internally to support thorough secure backups.)

o Offsite: The data is exposed to additional risk while it is moved from the organization’s
environment to the external environment (in transit). The organization loses some control of
the security governance and controls used to store the data. Cost may be lower or higher
than the onsite option, depending on the nature of the organization and the options offered
by the provider. A provider with the sole focus on secure data storage may be able to scale
services such that secure storage is much more affordable for its clientele, where the same
service would be cost-prohibitive for each individual client.

Full/differential/incremental: The amount of data backed up at any given time can vary. A full backup copies the entire data set; a differential backup copies everything that has changed since the last full backup; an incremental backup copies only what has changed since the most recent backup of any type. The choice trades backup time and storage space against the complexity of a restore.
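The restore-time consequence of that choice can be shown with a short sketch: under a differential scheme you restore the last full backup plus only the latest differential, while under an incremental scheme you restore the last full backup plus every incremental taken since. The backup history below is invented for the example.

# Invented backup history, oldest first.
history = [
    {"day": "Sun", "type": "full"},
    {"day": "Mon", "type": "incremental"},
    {"day": "Tue", "type": "incremental"},
    {"day": "Wed", "type": "incremental"},
]

def restore_chain(backups):
    """Return the backup sets needed to restore to the most recent point."""
    last_full = max(i for i, b in enumerate(backups) if b["type"] == "full")
    chain = [backups[last_full]]
    after_full = backups[last_full + 1:]
    differentials = [b for b in after_full if b["type"] == "differential"]
    if differentials:
        chain.append(differentials[-1])          # only the newest differential is needed
    else:
        chain.extend(b for b in after_full if b["type"] == "incremental")  # every incremental
    return [f'{b["day"]} ({b["type"]})' for b in chain]

print(restore_chain(history))
# ['Sun (full)', 'Mon (incremental)', 'Tue (incremental)', 'Wed (incremental)']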

Some organizations that seek to minimize downtime and enhance BCDR capabilities utilize
multiple processing sites to obviate the effects of an impact to any single site.

Organizations with extreme sensitivity to downtime—medical providers, military/intelligence agencies, high-volume online retailers, utilities—
have a greater need to ensure BCDR capabilities are comprehensive and
effective. Here are some techniques for facilitating this practice:
Sufficient spare components: An organization seeking high availability needs to have
sufficient components in inventory to replace/repair any affected elements of the
environment (or at least those components supporting the critical path). This is a security
concern addressed by logistics and budgeting; having too many spares on hand is an
expensive proposition and can negatively impact the organization financially just as much as
an outage might.

Clustering: Systems can be combined to provide constant full capacity to the organization
when one system/element goes down; this is referred to as “clustering” (storage, processing,
and network systems can all be clustered). This can be viewed as duplication/replication of
the systems in the cluster, and the cluster can enhance normal production through load
balancing or merely serve as additional capacity for contingency operations or when
widespread temporary scaling is necessary. Common modes of clustering include “active-active” (where all systems in the cluster operate in normal production, each handling a portion of the operational load) and “active-passive” (where at least one system in the cluster is kept on standby, not handling production load, ready to take over if an active system fails).

19
RAID: An organization can use a RAID (sometimes repetitively referred to as a “RAID array,”
or defined as a “redundant array of independent disks”) to enhance availability and diminish
the risk of downtime due to failure of a single storage component. A RAID setup entails
virtualizing a storage volume across several physical disks so that an entire data set is not lost
if a single drive fails. The technique of writing a data set across multiple drives is known as
striping, and some RAID configurations also use a mechanism known as parity bits to allow
recovery of the full data set if one drive fails (the striped data from adjacent drives,
combined with the parity bits, can fill in the missing data; a brief sketch of this XOR parity mechanism appears after the list below). There are many RAID
configurations, and the candidate should be familiar with each:

- RAID 0: Not actually a redundancy configuration, as the array has no parity bits; this
configuration stripes data purely for optimizing speed and performance.
- RAID 1: Another method that does not typically use parity bits (and RAID 1 does not even
use striping); instead, the data is fully duplicated across multiple
drives so that any part of the data set can be recovered if a single drive fails. This can be
costly but also serves as a backup for the production data.
- RAID 2: A legacy technique not currently in wide use.
- RAID 3 and 4: Data is striped across multiple drives, and a distinct drive is used to store
parity information. RAID 3 stripes data at the byte level; RAID 4 at the block level. These RAID
configurations may not be optimum for organizations seeking high availability environments,
as the parity drive in each represents a potential single point of failure.
- RAID 5: Both the data and the parity bits are striped across multiple disks; provides high
availability.
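
The following Python sketch (referenced above) shows the XOR parity idea in miniature: a parity block computed across equally sized data stripes lets any single missing stripe be rebuilt from the survivors. It is a conceptual illustration only, not how a real RAID controller is implemented:

from functools import reduce

def parity_block(stripes: list[bytes]) -> bytes:
    """Compute a parity block as the byte-wise XOR of equally sized data stripes."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*stripes))

def rebuild_missing(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover a lost stripe by XOR-ing the surviving stripes with the parity block."""
    return parity_block(surviving + [parity])

# Three data stripes written across three drives, parity on a fourth (RAID 3/4 style layout).
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
p = parity_block([d1, d2, d3])

# Simulate losing the drive holding d2 and rebuilding it from the remaining drives.
assert rebuild_missing([d1, d3], p) == d2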

20
21
We will cover ISO 22301 in more detail in a new course on ISO 22301 that is on the
horizon, delivered by our Chief Trainer Muhammad Faisal.

22
23
24
25
26
27
28
29
30
Reference
https://www.drj.com/pre-2006/winter-2000/business-continuity-planning-tabletop-exercise-white-paper.html

BUSINESS CONTINUITY PLANNING TABLETOP EXERCISE WHITE PAPER


PURPOSE

The purpose of business continuity planning (BCP) tabletop exercising is to demonstrate to management the ability of one or more critical business processes to continue functionality, within the required time frame, following an interruption.

OBJECTIVES

The approach to completing the BCP Tabletop Exercise is to first agree with
business process owners and managers on the scope and objectives of the exercise.
Facilitated sessions are then planned for the execution of the tabletop exercise. At a high
level, the planning and execution of these sessions should include:
• Selection of relevant scenarios for the tabletop exercise
• Identification, notification and scheduling of appropriate personnel
• A facilitated walk-through of the scenario, along with discussions on Business Continuity Plan actions and responsibilities
• Capture of tabletop exercise notes, including issues and areas for changes/additions to the BCP documents

31
Reference
https://www.onsolve.com/blog/how-to-test-your-bc-plan-with-a-structured-walk-through-test/

How to test your BC plan with a structured walk-through test


The structured walk-through test is usually performed with a single team. A version
of the test can be performed with team leaders to exercise coordination between
teams.
Participants
• Business recovery coordinator
• Team leaders, customers and alternates
• Facilitator
• Other personnel as determined necessary
Procedures
Facilitator
• Be sure that the plan, including exhibits, has been distributed to each participant
and has been updated before the exercise
• Meet with all participants to explain the purpose and scope of the test and explain
the sections of the plan to be exercised
• Describe the disaster scenario (situation) to be used for the exercise backdrop,
including:

32
Reference
https://smallbusiness.chron.com/conduct-testing-business-continuity-plan-4526.html

It is important for your firm to have a business continuity plan because in the event of a
disaster that causes a business shutdown--a fire or flood for example--you’ll be able to
minimize losses, down times and the impact on your customers. Once you have developed
your business continuity plan, or BCP, it is just as important to test your plan. Testing verifies
the effectiveness of your plan, trains plan participants on what to do in a real scenario and
identifies areas where the plan needs to be strengthened.
1. Conduct a plan review at least quarterly. Gather your team of key business continuity plan
participants--division leaders or department heads--regularly to review the business
continuity plan. Discuss the elements of the plan with a focus on the discovery of any areas
where the plan can be strengthened. Train new managers regarding the plan and incorporate
any new feedback.
2. Conduct disaster role-playing (“table-top”) sessions that allow plan participants to “walk
through” the facets of the BCP, gaining familiarity with their responsibilities given a specific
emergency scenario(s). Conduct the dry-run training to document errors and identify
inconsistencies for correction and improvements. Schedule at least two to three of these
sessions each year.
3. Perform a simulation of a possible disaster scenario. Include business leaders, partners,
vendors, management and staff in the BCP test simulation. Test data recovery, staff safety,
asset management, leadership response, relocation protocols and loss recovery procedures.

33
Reference https://www.mha-it.com/2016/11/02/disaster-recovery-testing/

Functional Exercise, Drill or Parallel Test: Involves the actual mobilization of personnel to
other sites in an attempt to establish communications and perform processing as set forth in
the plans. For Business Continuity, this typically involves determining if employees can
actually deploy the procedures defined in the Business Continuity plan.
For Disaster Recovery plans, the goal is to determine whether critical systems can be
recovered at the alternate processing site.
Once the individual Business Continuity and Disaster Recovery functional tests are successful,
a parallel test for both may be conducted where the business functions relocate to an
alternate site and use the recovered systems and applications.

34
Full interruption test: The most comprehensive type of testing. In a full-scale test, a real-life
situation is simulated as closely as possible. Comprehensive planning is a prerequisite to this
type of test to ensure that business operations are not negatively affected. The exercise
provides proof of the entire integration of the business continuity plans, relocation plans,
and technical recovery plans. This type of exercise should only be attempted once the
exercises described above have been successfully completed and the organization is
comfortable with the functional capability of the recovery strategies.

35
Security concerns and risks differ depending on location; the organization should take this
into account when personnel are required to work outside the organization’s control (that is,
everywhere but inside the organization’s facilities/campus). Some security aspects to
consider when personnel are traveling/working remotely:

Encryption: Devices and data that are physically moved to any location outside the
organization’s control can benefit from the additional protection of encryption; this can
protect the organization from loss of data due to interception in transit or physical theft/loss
of a device. However, if personnel are traveling internationally, encryption options may be
limited by law in some jurisdictions.
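
As a simple illustration of protecting data carried on a traveling device, the sketch below uses the third-party Python cryptography library (Fernet symmetric encryption) to encrypt stand-in data before travel; real deployments would typically rely on full-disk encryption and managed key storage:

# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key would be stored separately from the traveling device
# (for example, held by the home office), so a stolen laptop yields only ciphertext.
key = Fernet.generate_key()
cipher = Fernet(key)

sensitive = b"Customer meeting notes and pricing data"   # stand-in for a real file's contents
ciphertext = cipher.encrypt(sensitive)

with open("notes.enc", "wb") as f:   # hypothetical encrypted file carried on the laptop
    f.write(ciphertext)

# Back inside the organization's environment, the data is recovered with the key.
assert cipher.decrypt(ciphertext) == sensitive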

Secure remote access: If personnel are going to connect to the organization’s environment
from off-site facilities, the organization needs to create a secure mechanism for doing so (for example, an encrypted VPN connection).

Additional jurisdictional concerns: Data moved across borders may be subject to different
statutory/contractual regulation.

Personnel protection: Personnel need to be protected according to the specific security conditions of the geographical areas where they may be traveling. The organization should provide location-specific orientation material for travelers, additional personal training, medical/life insurance, and physical protection elements as needed.

Condition monitoring: When personnel are traveling, someone remaining at the home office should monitor conditions (weather, civil and political developments, health advisories, etc.) at the travel destination and stay in regular contact with the traveler.

36
Health and human safety is the paramount concern of all security efforts; ensuring personnel
are properly trained and aware of safety and security threats and risks is essential.

This effort should include the following:


• Location-specific orientation, training, and awareness for travelers (see previous topic in
this module).
• Emergency procedures (see next topic in this module).
• Incident reporting procedures (see Module 5 in this domain).
• Users’ role(s) in incident detection and response.
• How to recognize attack attempts that directly target individual users (phishing, social
engineering, etc.).

37
All emergency/BCDR planning should take into account personnel safety as the highest
priority. Elements of the security program specific to personnel safety should include the
following:

• Fire detection/suppression systems designed to protect human health and safety first and
foremost. All egress paths from the facility should be equipped with deluge systems. Fire
marshals (and alternates) should be assigned per workspace and fully trained and
practiced.

• Evacuation of personnel should be practiced on a regular basis; all personnel should be aware of emergency exits and procedures.

• Coordination with all applicable external entities (law enforcement, fire department,
medical response, etc.) should be performed prior to any actual event so that ready
communication and familiarity is established.

• The organization’s BCDR team needs to consider all localized threats (natural disaster/
weather applicable to the particular location, etc.) when making the response plan and in
designing the thresholds for initiating the response.

• Asset protection activities must not put personnel in jeopardy.

• If the organization’s BCDR response includes relocating critical personnel to a

38
Personnel should have a means to report to the organization if they are ever put under
duress (threatened or hindered in movement). This is especially true for travelers, senior
management, and critical personnel, all of whom may be subject to crimes that target those roles
(kidnapping, terror attacks, etc.).

Personnel should be able to convey duress situations in a subtle manner (that is, with code
words other than, “I’m under duress”) that can be worked into normal communications and
can be remembered while the subject is under extreme stress. Duress codes should be able to
be conveyed by several methods of communication (verbal and otherwise). Personnel
receiving duress codes should have training and practice in the actions to undertake in those
circumstances.
Duress codes should change on a regular basis, but if personnel convey expired codes, a
response process should still be initiated.

39
40
1
2
3
4
5
6
Reference https://www.dataversity.net/how-you-should-approach-the-secure-development-lifecycle/#

How You Should Approach the Secure Development Lifecycle


Is your development process producing secure software? Ensuring that their
software is secure is one of the main challenges developers face daily. It is not
enough to test the software only at the required stages, which can result in
overlooking minor vulnerabilities. The attackers are always ready to exploit even the
slightest flaw.
One of the key strategies you can use to secure your software is a Secure Software
Development Lifecycle (Secure SDLC or SDL). Read on to learn about the SDL, why it
is important, and how you can implement it.
What Is Secure SDLC and Why Is It Important for You?
Secure Development Lifecycle (SDL) is the process of including security artifacts in
the Software Development Lifecycle (SDLC). SDLC, in turn, consists of a detailed plan
that defines the process organizations use to build an application from inception
until decommission.

7
Development teams use different models such as Waterfall, Iterative or Agile.
However, all models usually follow these phases:
Planning and requirements
Architecture and design
Test planning
Coding
Testing the code and results
Release and maintenance
Traditionally, developers performed security-related tasks only at the testing stage,
resulting in issues being discovered too late or not at all. Over time, teams started to
integrate security activities to catch vulnerabilities early in the development cycle.
With this in mind, the concept of secure SDLC started. Secure SDLC integrates
activities such as penetration testing, code review, and architecture analysis into all
steps of the development process.
The main benefits of adopting a secure SDLC include:
Makes security a continuous concern—including all stakeholders in the security
considerations
Helps detect flaws early in the development process—reducing business risks for the
organization
Reduces costs—by detecting and resolving issues early in the lifecycle.

7
Reference https://www.dataversity.net/how-you-should-approach-the-secure-development-lifecycle/#

How Does Secure SDLC work?


Most companies will implement a secure SDLC simply by adding security-related
activities to their development process already in place. For example, they can
perform an architecture risk analysis during the design phase.
There are seven phases in most SDLCs although they may vary according to the
methodology used, such as Agile or Waterfall:
Concept
Planning
Design and Development
Testing
Release
Sustain
Disposal
For example, a development team implementing the waterfall methodology may

8
follow a scheme (illustrated with a diagram in the referenced article) in which each
security activity corresponds with a phase in the SDLC.

8
Reference https://www.dataversity.net/how-you-should-approach-the-secure-development-lifecycle/#

Some considerations to take into account when implementing a Secure SDLC are:
SDL Discovery
The goal should be to determine the security objectives required by the software,
what the possible threats are, and what regulations the organization needs to
follow.
When working on the scope of the SDL, a development team should focus on
deliverables such as security milestones, required certifications, risk assessments,
key security resources, and required third-party resources.
Security Baselines
Here you should list the requirements your product needs to comply with.
For example, only using approved cryptography and libraries or using multifactor
authentication. A Gap Analysis, contrasting the product’s features against the
baseline, is useful to identify the areas not complying with the security baseline.
When a gap is found, it needs to be addressed as early as possible in the lifecycle.
Since companies release a product based on the percentage of compliance with the
9
baseline, it is important to work on gaps throughout the development process.
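
A gap analysis of this kind can be as simple as comparing the product’s current state against the baseline and computing a compliance percentage. The following Python sketch uses hypothetical, hard-coded baseline items purely for illustration:

# Hypothetical baseline requirements and product state; real baselines would come
# from the organization's security standards, not from hard-coded values.
security_baseline = {
    "uses_approved_cryptography": True,
    "multifactor_authentication": True,
    "security_event_logging": True,
    "input_validation": True,
}

product_features = {
    "uses_approved_cryptography": True,
    "multifactor_authentication": False,   # gap
    "security_event_logging": True,
    "input_validation": False,             # gap
}

gaps = [req for req, required in security_baseline.items()
        if required and not product_features.get(req, False)]
compliance_pct = 100 * (len(security_baseline) - len(gaps)) / len(security_baseline)

print(f"Compliance: {compliance_pct:.0f}%")
print("Gaps to address early in the lifecycle:", gaps)
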
Security Training and Awareness
The company should provide security training sessions for developers, designers,
architects, and QA. They can focus on secure design principles, security issues, web
security or encryption.
Security awareness sessions are not geared specifically toward the development team;
they involve everyone connected to the project within the organization. Sessions
should be kept at an accessible technical level and can include topics such as the various
cybersecurity threats or risk impact and management.
Threat Modeling
Modeling the software components to identify and manage threats early in the
development lifecycle. This helps the team develop an incident response plan from
the beginning, planning the appropriate mitigations early, before the damage becomes
more complicated to manage. Threat modeling typically follows four steps: preparation, analysis,
determining mitigations, and validation. The activity can take different approaches,
such as protecting specific critical processes, exploiting weaknesses, or focusing on the
system design.
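
As a minimal illustration, the sketch below records threats per component using STRIDE-style categories and flags any threat that still lacks a planned mitigation; the components, threats, and mitigations are invented examples:

from dataclasses import dataclass, field

@dataclass
class Threat:
    component: str
    category: str          # e.g., a STRIDE category such as "Spoofing" or "Tampering"
    description: str
    mitigations: list[str] = field(default_factory=list)

# Hypothetical model entries for a login component; a real exercise would
# enumerate threats for every component identified in the design.
model = [
    Threat("login service", "Spoofing", "Attacker replays stolen session tokens",
           mitigations=["short token lifetime", "bind tokens to client attributes"]),
    Threat("audit log", "Tampering", "Attacker alters log entries to hide activity",
           mitigations=["append-only storage", "log signing"]),
]

# Validation step: flag threats that still lack a planned mitigation.
unmitigated = [t for t in model if not t.mitigations]
print(f"{len(unmitigated)} threats still need mitigations")
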
Third-Party Software Tracking
Since third-party software can be open source or commercial, the team needs to
list all third-party tools used in the project. This inventory should be done in the early
stages of the development cycle. There are software tools that track and list the
third-party components, sending you an alert if any component needs upgrading or
has licensing issues.
Security Design and Peer Review
The development team should ensure the software is built with the most secure
features. When reviewing the functional feature design, the developer should include
a security design review, thinking like an attacker to discover the feature
vulnerabilities. When reviewing the code, developers need to be aware of the most
common coding security pitfalls. They can follow a secure coding checklist that, for
example, ensures important security events are logged, checks the strength of the
authentication process, and validates user input.
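
The sketch below illustrates two of the checklist items mentioned above, allow-list input validation and security event logging, in Python; the username pattern and the check_credentials helper are assumptions made for the example:

import logging
import re

# Minimal security-event logger; a real deployment would ship these events to a SIEM.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
security_log = logging.getLogger("security")

USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")   # assumed allow-list pattern

def check_credentials(username: str, password: str) -> bool:
    # Placeholder: a real implementation would verify a salted password hash.
    return False

def authenticate(username: str, password: str) -> bool:
    # Validate user input against an allow-list before it reaches any backend.
    if not USERNAME_RE.fullmatch(username):
        security_log.warning("Rejected malformed username from client input")
        return False
    ok = check_credentials(username, password)   # hypothetical credential check
    if not ok:
        # Log the security event without recording the password itself.
        security_log.info("Failed login attempt for user %s", username)
    return ok

if __name__ == "__main__":
    authenticate("alice;DROP TABLE users", "x")   # rejected by input validation
    authenticate("alice", "wrong-password")       # logged as a failed login
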
Security Testing
While code review focuses on functionality, security testing checks how vulnerable
the new product is to attacks. Some of the testing activities include:
Static Analysis—identifies the exact location of weaknesses by analyzing the software
without executing it.
Dynamic Analysis—identifies weaknesses by running the software, helping find
infrastructure flaws and patch errors.
Vulnerability Scanning—injects malicious inputs against running software to check
how the program reacts. Mostly used to scan applications with a web interface.
Fuzzing—involves feeding invalid, random data to a program to exercise its access

9
protocols and file formats. The test helps find bugs that humans often miss by
generating random input and trying many possible variations (see the sketch after this list).
Third-party penetration testing—the tester simulates an attack to discover coding or
system configuration flaws, and discover vulnerabilities a real attacker can exploit. It
is required that the tester is an external party not connected to the team.
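
Here is the fuzzing sketch referenced above: a naive Python fuzzer that feeds random byte strings to a toy parser and records any input that raises an unhandled exception. Real fuzzers (coverage-guided tools, protocol fuzzers) are far more sophisticated; this only conveys the idea:

import random

def naive_fuzz(parse_function, rounds: int = 1000) -> list[bytes]:
    """Feed random byte strings to a parser and collect inputs that crash it."""
    crashing_inputs = []
    for _ in range(rounds):
        length = random.randint(0, 256)
        data = bytes(random.getrandbits(8) for _ in range(length))
        try:
            parse_function(data)
        except Exception:
            # Any unhandled exception is a bug worth investigating.
            crashing_inputs.append(data)
    return crashing_inputs

# Hypothetical target: a toy "file format" parser with a latent bug.
def parse_record(data: bytes) -> str:
    header, _, body = data.partition(b":")
    return body[: int(header)].decode()   # crashes on non-numeric or empty headers

failures = naive_fuzz(parse_record)
print(f"{len(failures)} inputs caused unhandled exceptions")
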
Data Disposal and Retention
Usually at the end of a product’s life, companies dispose of old products or data that
they no longer need. Many companies delete or overwrite encryption
keys, in a process called “crypto-shredding,” which renders the encrypted data unrecoverable. While getting rid of outdated data is a
necessity, there are concerns when it comes to keeping the confidentiality of the
information. Some regulations such as GDPR have specific requirements for data
disposal and retention.
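
A minimal sketch of the crypto-shredding idea, using the third-party Python cryptography library: data is retained only in encrypted form, and destroying every copy of the key effectively disposes of the data:

# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()
record = Fernet(key).encrypt(b"customer record scheduled for disposal")

# Store only the ciphertext during the retention period.
# At the end of the retention period, "shred" the data by destroying the key material.
key = None   # in practice: wipe the key from the key-management system and all backups

# With no surviving copy of the key, the ciphertext can no longer be decrypted.
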
What’s Next?
Implementing secure SDL helps you follow security best practices, integrating security
activities and checkups across the development cycle. This will help to increase your
product and company security posture.

9
To ensure software development success, organizations should choose appropriate
development lifecycle methodologies to guide them in properly completing the
phases involved in software development. As software development projects have
become more complex, a number of methodologies have been created to manage
this complexity, such as waterfall, agile, spiral, and a number of others. As a
systems development effort goes through a lifecycle, the methodology can guide
the engineers and developers in completing the phases properly. The theme of our
presentation is that security needs to be involved in the phases, and therefore, the
methodologies.
The software development lifecycle (SDLC) is a framework that can guide the phases
of a software development project from inception to defining the functional
requirements to implementation. As the word “development” implies, this lifecycle
ends at the implementation phase. Regardless of the methodology used, the SDLC
outlines the phases a software development project needs to go through.
Organizations need to choose methodologies carefully, as the model chosen should
be based on the requirements of the organization. As with any other project,
understanding the requirements ahead of time is paramount for the success of the
project itself. For example, some models work better with long-term, complex

10
projects, while others are more suited for short-term projects. However, the key point
being made here is that a formalized SDLC needs to be utilized, but the entire process
needs to involve security. The best security is always what is designed into the
system, not what is added later.

10
11
12
Reference : https://airbrake.io/blog/sdlc/waterfall-model

First introduced by Dr. Winston W. Royce in a paper published in 1970, the waterfall
model is a software development process. The waterfall model emphasizes that a
logical progression of steps be taken throughout the software development life
cycle (SDLC), much like the cascading steps down an incremental waterfall. While
the popularity of the waterfall model has waned over recent years in favor of
more agile methodologies, the logical nature of the sequential process used in the
waterfall method cannot be denied, and it remains a common design process in the
industry.
Throughout this article we’ll examine what specific stages make up the core of the
waterfall model, when and where it is best implemented, and scenarios where it
might be avoided in favor of other design philosophies.
Some more specific takes on SDLC include:
Rapid Application Development, Test-Driven Development, Software Development Life Cycle, Iterative Model, Extreme Programming, Scaled Agile Framework, Agile Model, Scrum, Rational Unified Process, Big Bang Model, V-Model, Conceptual Model, Kaizen Model, Kanban Model, and Spiral Model.

The Six Stages of Falling Water

13
Actually implementing a waterfall model within a new software project is a rather
straightforward process, thanks in large part to the step-by-step nature of the
method itself. There are minor differences in the numbers and descriptions of the
steps involved in a waterfall method, depending on the developer you ask (and even
the year during which you ask him or her). Regardless, the concepts are all the same
and encompass the broad scope of what it takes to start with an idea and develop a
full-scale, live application.
Requirements: During this initial phase, the potential requirements of the application
are methodically analyzed and written down in a specification document that serves
as the basis for all future development. The result is typically a requirements
document that defines what the application should do, but not how it should do it.
Analysis: During this second stage, the system is analyzed in order to properly
generate the models and business logic that will be used in the application.
Design: This stage largely covers technical design requirements, such as programming
language, data layers, services, etc. A design specification will typically be created
that outlines how exactly the business logic covered in analysis will be technically
implemented.
Coding: The actual source code is finally written in this fourth stage, implementing all
models, business logic, and service integrations that were specified in the prior
stages.
Testing: During this stage, QA, beta testers, and all other testers systematically
discover and report issues within the application that need to be resolved. It is not
uncommon for this phase to cause a “necessary repeat” of the
previous coding phase, in order for revealed bugs to be properly squashed.
Operations: Finally, the application is ready for deployment to a live environment.
The operations stage entails not just the deployment of the application, but also
subsequent support and maintenance that may be required to keep it functional and
up-to-date.
The Advantages of the Waterfall Model
While the waterfall model has seen a slow phasing out in recent years in favor of
more agile methods, it can still provide a number of benefits, particularly for larger
projects and organizations that require the stringent stages and deadlines available
within these cool, cascading waters.
Adapts to Shifting Teams: While not necessarily specific to the waterfall model only,
using a waterfall method does allow the project as a whole to maintain a more
detailed, robust scope and design structure due to all the upfront planning and
documentation stages. This is particularly well suited to large teams that may see
members come and go throughout the life cycle of the project, allowing the burden
of design to be placed on the core documentation and less on any individual team
member.
Forces Structured Organization: While some may argue this is a burden rather than a

13
benefit, the fact remains that the waterfall model forces the project, and even the
organization building said project, to be extraordinarily disciplined in its design and
structure. Most sizable projects will, by necessity, include detailed procedures to
manage every aspect of the project, from design and development to testing and
implementation.
Allows for Early Design Changes: While it can be difficult to make design changes
later in the process, the waterfall approach lends itself well to alterations early in the
life cycle. This is great when fleshing out the specification documents in the first
couple stages with the development team and clients, as alterations can be made
immediately and with minimal effort, since no coding or implementation has actually
taken place up to that point.
Suited for Milestone-Focused Development: Due to the inherent linear structure of a
waterfall project, such applications are always well-suited for organizations or teams
that work well under a milestone- and date-focused paradigm. With clear, concrete,
and well understood stages that everyone on the team can understand and prepare
for, it is relatively simple to develop a time line for the entire process and assign
particular markers and milestones for each stage and even completion. This isn’t to
suggest software development isn’t often rife with delays (since it is), but waterfall is
befitting the kind of project that needs deadlines.
The Disadvantages of the Waterfall Model
While some things in software development never really change, many others often
fall by the wayside. While Dr. Royce’s initial proposal of what is now known as the
waterfall model was groundbreaking when first published back in 1970, over four
decades later, a number of cracks are showing in the armor of this once heralded
model.
Nonadaptive Design Constraints: While arguably a whole book could be written on
this topic alone, the most damning aspect of the waterfall model is its inherent lack
of adaptability across all stages of the development life cycle. When a test in stage
five reveals a fundamental flaw in the design of the system, it not only requires a
dramatic leap backward in stages of the process but, in some cases, can often lead
to a devastating realization regarding the legitimacy of the entire system. While most
experienced teams and developers would (rightfully) argue that such revelations
shouldn’t occur if the system was properly designed in the first place, not every
possibility can be accounted for, especially when stages are so often delayed until the
end of the process.
Ignores Mid-Process User/Client Feedback: Due to the strict step-by-step process
that the waterfall model enforces, another particularly difficult issue to get around is
that user or client feedback that is provided late into the development cycle can
often be too little, too late. While project managers can obviously enforce a process
to step back to a previous stage due to an unforeseen requirement or change coming
from a client, it will be both costly and time-consuming, for both the development

13
team and the client.
Delayed Testing Period: While most of the more modern SDLC models attempt to
integrate testing as a fundamental and always-present process throughout
development, the waterfall model largely shies away from testing until quite late into
the life cycle. This not only means that most bugs or even design issues won’t be
discovered until very late into the process, but it also encourages lackadaisical coding
practices since testing is only an afterthought.
In spite of going through an explicit testing phase during implementation of
a waterfall model project, as discussed above, this testing is often too little, too
late. In addition to the normal testing phase, you and your team should strongly
consider introducing an effective error management tool into the development life
cycle of your project.

13
A method that programmers use to write programs allowing considerable influence
on the quality of the finished products in terms of coherence, comprehensibility,
freedom from faults, and security. This methodology makes extensive use of
subroutines and block structures that can be heavily reused. Structured
programming also promotes discipline, allows introspection, and provides
controlled flexibility. It requires defined processes and develops code into modules
that are reused, and each phase is subject to reviews and approvals. It also provides
a structured approach for security to be added as a formalized, involved approach.
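
As an illustration of the idea (not taken from the source), the short Python sketch below decomposes a small reporting job into single-purpose subroutines that can be reviewed, approved, and reused independently:

def read_records(path: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def validate(records: list[str]) -> list[str]:
    # Single, well-defined responsibility: keep only well-formed records.
    return [r for r in records if r.count(",") == 2]

def summarize(records: list[str]) -> dict[str, int]:
    summary: dict[str, int] = {}
    for record in records:
        category = record.split(",")[0]
        summary[category] = summary.get(category, 0) + 1
    return summary

def run_report(path: str) -> dict[str, int]:
    # The top-level routine is just a sequence of calls to reviewed building blocks.
    return summarize(validate(read_records(path)))

# Usage (hypothetical input file): run_report("sales.csv")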

14
A nested version of the original waterfall method, the development of each phase is
carefully designed using the waterfall model, but the distinguishing feature of the
spiral model is that in each phase we add four sub-stages, based on what is known
as the Deming Plan-Do-Check-Act (PDCA) model. Specifically, a risk assessment
review (Check) is done at each phase. The estimated costs to complete and the
schedules are revised each time the risk assessment is performed. We can consider
this model to be an improvement of the waterfall methodology based on being able
to address, at each phase, the results of the risk sub-phase assessment. At this
point, a decision is made to continue or cancel the project.

15
This methodology is focused on controlling and, at best, avoiding defects and bugs
in the software. The emphasis is to write the code correctly the first time rather
than trying to find the problems once they are already there and trying to address
them later. Essentially, cleanroom software development focuses on defect
prevention rather than defect removal. To allow this to happen, more time is spent
in the early phases, focusing heavily on the assumption that the time spent in other
phases, such as testing, is theoretically reduced. The basic premise, therefore, is
that quality is achieved through proper design rather than testing and remediation
later. In terms of security, the same pattern applies: if risk considerations are
addressed up front, security becomes an integral part of the system’s design
rather than being added later. This is always preferred as far as security is concerned.
Security should always be designed into the system based on requirements rather
than being retrofitted later.

16
Reference https://searchsoftwarequality.techtarget.com/definition/iterative-development

Iterative development is a way of breaking down the software development of a large application into smaller chunks. In iterative development, feature code is
designed, developed and tested in repeated cycles. With each iteration, additional
features can be designed, developed and tested until there is a fully functional
software application ready to be deployed to customers.
Typically iterative development is used in conjunction with incremental
development in which a longer software development cycle is split into smaller
segments that build upon each other.
Iterative and incremental development are key practices in Agile
development methodologies. In Agile methodologies, the shorter development
cycle, referred to as an iteration or sprint, is time-boxed (limited to a certain
increment of time, such as two weeks). At the end of the iteration, working code is
expected that can be demonstrated for a customer.
Iterative development contrasts with a traditional waterfall method in which each
phase of the software development life cycle is “gated.” Coding doesn’t begin until
design of the entire software application is complete and has gone through a phase

17
gate review. Likewise, testing doesn’t begin until coding is complete and has passed
necessary phase gate reviews.
The purpose of working iteratively is to allow more flexibility for changes. When
requirements and design of a major application are done in the traditional method
(sometimes referred to as BDUF or Big Design Up Front), there can be unforeseen
problems that don’t surface until development begins. By working iteratively, the
project team goes through a cycle where they evaluate with each iteration, and
determine what changes are needed to produce a satisfactory end product.

17
In prototyping, the objective is to build a simplified version of the entire application,
release it for review, and use the feedback from the stakeholders to review to build
a second, much better version. This is repeated until the owner and stakeholders
are satisfied with the final product. Prototyping is broken down into a step-by-step
process that includes the initial concept, design and implementation of an initial
prototype, refining the prototype until it is acceptable to the owner, and completion and
release of the final version.

18
A refined form of the above prototyping methodology that is ideal for web
application development, it allows for the basic functionality of a desired system or
component to be formally deployed in a quick time frame. The maintenance phase
is set to begin after the deployment. The goal is to have the process be flexible
enough so the application is not based on the state of the organization at any given
time. As the organization grows and the environment changes, the application
evolves with it rather than being frozen in time.

19
Also, a refined form of prototyping, rapid application development (RAD) requires
strict time limits on each phase and relies on efficient tools that enable quick
development. The goal is to produce quality code quickly. While this sounds
attractive, it must be handled properly because the quick development process may
be a disadvantage if decisions are made so rapidly that it leads to poor design.

20
Originally invented to enhance the development of large mainframe systems, joint
analysis development (JAD) has become very useful in today’s environments. The
premise is to have facilitation techniques that become an integral part of the
management process that helps developers to work directly with owners and
stakeholders to develop a working application. This is a novel idea that involves all
stakeholders in the entire process. The success of this methodology is based on
having key players communicating at all critical phases of the project. The focus is
on having the people who actually perform the job work together with those who
have the best understanding of the technologies available to design the best
solution. In other words, facilitation techniques bring together a team of
stakeholders, including owner, expert systems developers, technical experts, and
security professionals, throughout the development lifecycle. As we have
mentioned, this needs to involve security as well. While input from the owner may
result in a more functional program, the involvement of large numbers of
stakeholders may help in addressing the security requirements, or at least that is the
goal.

21
The exploratory model uses a set of requirements built with what is currently
available. A big part of this model requires assumptions to be made as to how the
system might work, and further insights and suggestions by interested parties,
including security, are combined to create a usable system. Because this model is
inherently unstructured, security requirements need to take priority
and be addressed deliberately. The security professionals need to ensure the
requirements are addressed appropriately.

22
23
Reference https://searcherp.techtarget.com/definition/CASE-computer-aided-software-engineering

Computer-aided software engineering (CASE) describes a broad set of labor-saving tools used in software development. They create a framework for managing
projects and are intended to help users stay organized and improve productivity.
There was more interest in the concept of CASE tools years ago, but less so today, as
the tools have morphed into different functions, often in reaction to software
developer needs. The concept of CASE also received a heavy dose of criticism after
its heyday.

Features of CASE tools


CASE tools, which are sometimes called integrated CASE or I-CASE tools, cover all
aspects of the software development lifecycle, which includes writing the code,
implementation and maintenance. The tools help in every aspect of development
work: managing, modeling, error-checking, version control, designing, diagraming
tools, prototyping and other aspects associated with software engineering.

24
Compilers and testing tools are also considered part of the CASE tool set.

Everything is centralized in a CASE repository, which provides an integrated system for project management information, code specifications, test cases and results,
design specifications, diagrams and reports. This setup provides a place for teams
and managers to keep track of what has been accomplished and what still needs to
be done. This information is often displayed graphically so users can quickly find what
they need, as well as get a quick overview of the project.

History of CASE tools and criticism


CASE software tools emerged as a significant product category in the 1980s. They
were developed in response to a need to bring order to large software
development projects, and vendors claimed they would improve IT productivity and
reduce errors.

A short explanation of CASE tools.


The U.S. government, a major builder of custom development projects, spent millions
on CASE tools. But the government later became a critic of vendor claims about their
capabilities. "Little evidence yet exists that CASE tools can improve software quality
or productivity," wrote the Government Accountability Office in a 1993 report on the
use of CASE tools by the U.S. Defense Department.
About a decade later, in 2002, a research paper also noted problems with CASE
deployments. It found evidence of "a conceptual gap between the software
engineers who develop CASE tools and the software engineers who use them,"
according to the paper, "Empirical Study of Software Developers' Experiences," by
Ahmed Seffah and Rex B. Kline, computer science researchers at Concordia
University.

Uses of CASE tools evolved


CASE tools have evolved to accommodate visual programming, object-oriented
programming and Agile software development processes.

Tools that fit in the CASE category are widely available, but this umbrella term or
approach doesn't have the relevance it once did in describing software engineering
tools. Developers may be more likely to think in terms of specific tool categories, such
as visual modeling and simulation software, system architecture tools and
diagramming tools such as Microsoft Visio.

24
This model is based on a process of using standardized, reusable building blocks to
assemble, rather than develop, the application. The components are made up of
sets of standardized data and standardized methods of processing that data. These
sets, when used together, offer scheduling and cost-effective benefits to the
development process and the team members involved. From a security perspective,
the advantage might be that components have previously been tested for security
functionality and assurance effectiveness. This is very similar to object-oriented
programming (OOP) where objects and classes may be designed with security
methods initially and then reused as required.
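
To illustrate the point about designing security methods into objects and classes up front and then reusing them, here is a small Python sketch in which a base class provides input validation and audit logging that every derived component inherits; the class and method names are invented for the example:

# Illustrative only: a base class carries security behavior (input validation and
# audit logging) that every reused component inherits rather than re-implementing.
import logging

logging.basicConfig(level=logging.INFO)

class SecureComponent:
    """Base building block with security methods designed in from the start."""

    def validate(self, value: str) -> str:
        if not value or len(value) > 256:
            raise ValueError("input failed validation")
        return value

    def audit(self, action: str) -> None:
        logging.getLogger("audit").info("%s performed %s", type(self).__name__, action)

class CustomerLookup(SecureComponent):
    def find(self, customer_id: str) -> str:
        customer_id = self.validate(customer_id)   # reused security behavior
        self.audit("customer lookup")
        return f"record for {customer_id}"         # placeholder result

class OrderLookup(SecureComponent):
    def find(self, order_id: str) -> str:
        order_id = self.validate(order_id)
        self.audit("order lookup")
        return f"record for {order_id}"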

25
In this model, an application is built from already existing and tested components.
The reuse model is best suited for projects using object oriented development
because objects can be created, exported, reused, or modified as required. From a
security perspective, the components would then be chosen based on the known
effectiveness
of the security characteristics.

26
This discipline of software development is based on several core values:
simplicity, communication,
and feedback, all combined into the process. Despite the
name, extreme programming is an attempt to use a structured approach to
software development, relying on subprojects of limited and defined scope and
developers always working in pairs. The team produces the software in a series of
small, fully integrated releases that are supposed to fulfill the owner-defined needs.
This implies that the owners need to be involved in defining the needs in the first
place. It makes sense, as well, to involve security in defining those needs ahead of
the developers programming the requirements. As we have mentioned earlier, this
model relies on simplicity of the process, communication between all involved
stakeholders, including security, and feedback to ensure requirements are
addressed properly.

27
The trends in software development combined with specific organizational needs
have shown that companies tend to combine different software development
methodologies to fit the specific design and development requirements. For
example, an application may need a certain set of activities to take place to achieve
success, or the organization may require certain standards or processes to meet
industry or government requirements. In these cases, it would make sense to
combine several models to allow that organization to develop the proper
requirements in the most cost-effective and efficient way. However, as we have
seen in all models, security needs to be included as part of the process from the
start, to the end of not only the software development process, but also to the end
of the SLC. Security, therefore, must be included, regardless of methodologies used.
As security professionals know, the best type of security is what is designed into
the system, not what is added later. Regardless of methodology used, security
needs to be included right at the beginning as requirements for security
functionality needs to be understood at that point. All stakeholders, including
owners, must also be involved in determining those requirements. Historically,
development has focused on functionality rather than security; therefore, it is
critically important to educate those individuals responsible for the development,

28
the managers who oversee the projects, and the owners that are accountable for the
protection of valuable assets. Development today is much more focused on security,
and it is important for an organization to streamline the process of development of
systems and applications and involve security in the early phases and throughout the
SDLC.

28
We will cover ISO/IEC 21827:2008 later in CODS so that our students understand and
can relate to CMM in further detail inshAllah.

29
We will cover this inshAllah within CODS later in 2020-2021.

The Software Engineering Institute (SEI) released the Capability Maturity Model for
Software (CMM or SW-CMM) back in 1991. Even though software development has
evolved in many ways since then, this model is very useful in allowing an
organization to measure their current capability in software development and also
to formulate a plan by which they can get better. The CMM focuses on quality
management processes and contains five maturity levels that contain required
measurement parameters within each maturity level. The five levels describe an
evolutionary path from chaotic and unstructured processes to mature, disciplined,
and optimized software processes. The whole purpose of using CMM is to allow
organizations to mature to a higher level of quality in software development. So, to
summarize, the CMM framework as shown in Figure 8.6 establishes a basis for
evaluation of the reliability and improvement of the software development
environment.

Initial: At the initial level, it typically means that good practices can be repeated, but
they may be unorganized and chaotic. If an activity is not repeated, there is no

30
reason to improve it. Therefore, organizations would
be able to show that they have policies, procedures, and practices and commit to
using them so that the organization can perform software development in a
consistent manner.

Repeatable: In this level, best practices for software development are repeatable and
can be rapidly transferred across various groups in the organization without
problems. Practices need to be defined in such way so that the organizations allows
for transfer of processes across project boundaries. This can provide for
standardization and repeatable processes across the entire organization.

Defined: At the defined level, standard processes are formalized and all new
developments happen with new, stricter, and standardized processes. The processes
are well-understood and are very proactive.

Managed: At this level, quantitative and
measurable objectives are established for tasks. Quantitative measures are
established, calculated, and maintained to form a baseline from which an assessment
is possible. This can ensure that the best practices are followed and deviations from
those measured objectives are reduced.

Optimizing: At the final level of the CMM, practices are continuously improved to
enhance the organization’s capability, and they are also optimized. This level also
focuses on continuous improvement, and feedback from one phase will reach and
positively impact development in other phases, all ensuring positive future results.

30
Reference https://www.cio.com/article/2439314/change-management-change-management-definition-and-solutions.html

We will do hands-on implementation of ISO 20000 later inshAllah, as it will be covered during the ISO
20000 Lead Implementer course.

What is change management?


In modern IT, change management has many different guises. Project managers
view change management as the process used to obtain approval for changes to the
scope, timeline, or budget of a project. Infrastructure professionals consider change
management to be the process for approving, testing, and installing a new piece of
equipment, a cloud instance, or a new release of an application. ITIL,
ISO20000, PMP, Prince2, as well as other methodologies and standards, prescribe
the process to gain approval and make changes to a project or operating
environment.

The Association of Change Management Professionals (ACMP), PROSCI, the Innovation and Organizational Change Management Institute (IOCMI), and
others view change management from an organizational perspective. While each

31
group has its own approaches, frameworks and language, these groups all address
the human side of change in organizational contexts.
The following article focuses on change management from an organizational
perspective, to distinguish it from the process-based changes of ITIL, Prince2, and so
on. Here, “change” refers to any event or program the enterprise undertakes that
causes major disruption to daily operations — for example, a new ERP
installation or digital transformation. The clearest definition of this type of
organizational change management (OCM) is provided by Sheila Cox of Performance
Horizons who states: "Organizational change management ensures that the new
processes resulting from a project are actually adopted by the people who are
affected.”

What are the benefits of change management?


Change management reduces the risk that a new system or other change will be
rejected by the enterprise. By itself OCM does not reduce costs or increase sales.
Instead, it increases the teamwork required for the enterprise to accept the change and
operate more efficiently.

When is organizational change management needed?


OCM is needed whenever the enterprise undertakes a program or event that
interrupts day-to-day operations. Such an undertaking will impact:
The work content of individual jobs. Many jobs require individuals or groups to
perform tasks repeatedly. An accounting department has daily, weekly, monthly, and
annual activities. Over time, most people become comfortable with the tools
provided and the rhythm of the work calendar. Even simple changes may disrupt the
workflow and be disconcerting for the staff.
The roles of individual employees. Many people view their value to the organization
as being a good technical architect, programmer, or security specialist. When asked
to take on a different role, they may become very uncomfortable. People with
excellent technical skills often struggle when asked to become managers. Rather than
performing all of the tasks, they have to learn to work through other people. Once
they are no longer rewarded for the skills that made them successful, employees may
question their purpose.
The organization itself. Executive teams debate major changes for months before
making final decisions, enabling each member to gain a deeper understanding of the
effects the change will have on the enterprise. Even if they don’t agree with the final
decision, they have time to determine whether to accept the new direction or to
depart gracefully. Individuals lower in the hierarchy rarely have time to process major
changes. Executives do not want employees to worry about events that may never
happen until it is clear the change will take place. In addition, tighter insider trading
enforcement prohibits executives from sharing information about upcoming mergers,

31
acquisitions, or divestitures. As such, individuals who are not part of the executive
team have much less time to prepare for the planned change and may decide to
leave while the change is undertaken, making change management more difficult.
What are the requirements for change management success?
Organization change management programs require several things to be successful:
The right executive sponsor. Sponsorship is critical. The OCM sponsor is responsible
for developing the case for change and obtaining the necessary OCM resources. For
this, the sponsor needs the support of the CEO to make it clear that the effort is
important.
The sponsor must understand the case for change clearly enough to have a detailed
discussion about the challenges that created the need for a different way of
operating. She should be confident enough to confront skeptics and close enough to
the details to justify the approach selected and the reasons the alternatives were
rejected.
The sponsor needs to understand the impact on the staff. Good sponsors are
concerned about the people who will be affected by the change. These sponsors
communicate honestly while treating everyone fairly and respectfully. Rather than
merely relating the facts, they take the time to listen to people and to empathize with
the individuals who dislike the new way of operating. If people are to be terminated
or reassigned, sponsors should know when it will happen and how everyone will be
treated. They explain why the change was necessary, and do what they can to
smooth the transition for individuals whose jobs are transformed. The best sponsors
help everyone losing a job find the next opportunity.
Cultural willingness to adapt and change. All organizations resist change to some
degree, but ones that follow the dictum “if it ain’t broke don’t fix it” often need a
major wake-up call to behave differently. The public revelation of sexual misconduct
allegations against Harvey Weinstein provides a dramatic example of a call to address
a long standing problem. A number of enterprises that had done little to stop sexual
harassment suddenly took action.
Skilled change management teams embrace the organization’s emotional energy.
They use company stories, language, and behavior to emphasize those parts of the
current culture that are aligned with the planned change. These teams celebrate
behaviors they wish to encourage by publicly recognizing individuals exhibiting these
behaviors. Change management teams use every opportunity to reinforce the way
the change helps the enterprise.
Individual willingness to change. Individuals must be willing to examine new
information and adopt new behaviors and approaches. Since most people prefer the
status quo, this can be difficult. Typically, most people only accept changes that make
sense and improve their job content or their work environment.
Rewards and consequences. Major changes need to be reinforced by rewards and
consequences. Individual performance plans with specific, measurable results need

31
to reinforce the desired future state. Individuals who meet their objectives need to
be rewarded appropriately and those that do not need to face consequences.
A consulting firm, wanting broader market recognition, encouraged all partners to
speak at industry conferences and to write for industry publications. Several partners
became very successful at both. While their articles and talks generated new
business, the client revenue each partner managed actually decreased. When the
compensation plan did not reward them sufficiently for the additional firm revenue
to offset their decreased client revenue, they were very unhappy. The firm’s
leadership team had to adjust the compensation plan quickly in order to prevent the
partners from leaving.
Why is change management difficult?
It takes a great deal of time to change attitudes and behaviors. Application
implementations, even large ones, are easier to plan and manage; project managers
know when a module is tested or server installed. OCM managers have a much
harder time measuring progress; gauging support can be tricky. Just when it appears
that a key individual supports the change, the person raises another objection and
returns to old behaviors.
Executives often assume that everyone impacted will find the business case so
compelling they will automatically accept the new way of operating. But most people
resist change or are unpredictable. This creates several difficulties for the OCM team:
Change management is not deterministic. Unlike computer programs, people can be
unpredictable and illogical. OCM activities that are effective with one group may be
ineffective with another. Messages may resonate with some people but not with
others.
Change management is a contact sport. The OCM team needs to interact one on one
with individuals who will need to change. Emails, videos, and other mass
communication can reinforce a message, but these don’t make people feel the
enterprise cares about their difficulties. Change is personal; sometimes people whose
jobs have been transformed need someone else to listen to their frustrations before
they will accept the new reality.
Midlevel and frontline staff must be engaged. Midlevel and frontline staff can make
or break a major program. Since they understand the operational details of the
current processes, they can anticipate potential problems and likely customer
reactions. Individuals who are not sensitive to the disruption that major change can
create often believe it is more efficient to involve fewer people early in the process.
While involving more people in the change process creates additional work for the
OCM team, it also builds commitment. Midlevel and frontline staff who see their
suggestions accepted are more likely to support the final result.
Cultural differences can make OCM difficult. Cultural norms are different around the
globe. The OCM effort needs to be aware of local customs even with a global system
intended to standardize enterprise operations. Care needs to be taken to be sensitive

31
to these and other cultural norms:
Communications style. Denmark, Germany, Israel, Netherlands, and U.S. are very
direct. India, Japan, Pakistan, and the Philippines tend to be indirect and believe it is
very important for both parties to save face. In these cultures, individuals avoid
saying no, and frequently mean, “I understand” rather than “I agree” when they say
“yes.”
Time orientation. Meetings in Germany, Switzerland, and the U.S. start and end
when scheduled. Little time is dedicated to introductions, even when some attendees
are meeting each other for the first time. Spain, Thailand, Brazil, and the Caribbean
are less concerned about time. Things can wait until later in the day or even
tomorrow. In such countries it is impolite to rush into a business discussion; only
after the host and the visitor have shared refreshments and pleasantries can business
begin.
Egalitarianism. Australia, Canada, Israel, New Zealand, U.S. have little hierarchy with
almost everyone on a first-name basis. Conversely, hierarchy is very important in
India, Iran, Japan, Saudi Arabia and other countries. Junior staff in these countries
invariably defer to the senior person.
Violating cultural norms can cause great resentment. The best OCM teams are very
sensitive to local cultural norms even when the people at headquarters demand a
standard project rollout and standard OCM program globally.
Change management may be an afterthought. With major IT efforts, the project
team is often consumed by business process changes, interfaces to other systems,
data cleanup, etc. If the OCM effort is not started concurrently with the rest of the
program, it may only be started when the program team experiences resistance from
end users. Even enterprises that assert that OCM is critical sometimes reduce or
eliminate the OCM budget if the overall program gets too expensive.
Change management can be started too early. The OCM effort needs to be tightly
coupled to the rest of the change program. This is particularly difficult with major IT
programs when the OCM efforts begin before new system details have been finalized.
In the absence of tangible information about the new system, the OCM team either
sounds vague or describes what they hope the new system will do. When the new
system fails to materialize quickly or has less functionality than anticipated,
supporters often become disillusioned.
OCM and the change program may be disconnected. The rational and emotional
cases for change need to be integrated tightly. Frequently, executives communicate a
rational, logical case for change that lacks emotional appeal. People respond to calls
to action that make them feel they are part of something that is more important than
any single person and are energized by visions that capture their hearts as well as
their minds.
Meg Whitman, former CEO of Hewlett-Packard, integrated appeals to the heart and
mind. As she discussed in her post, “The Power of Transparent Communication,” she
and her team attempted to build a strong connection to HP’s history and traditions.
They reinforced the “HP Way” cultural value that the quality of the work is as
important as position in the hierarchy.

How should a change management team be structured?


The OCM team should be integrated with the team responsible for implementing the
change. The OCM sponsor should be a senior executive, often the CEO. The sponsor
is the cheerleader who describes why the change is important and how it will help
the enterprise. This person acquires necessary resources and establishes OCM goals, along with consequences for failure to support the change.
The OCM sponsor is supported by an OCM project manager who directs the day-to-
day activities of the OCM team. The OCM project manager works closely with the
overall program manager responsible for implementing the change. Together the
OCM project manager and the overall program manager coordinate training,
communications, and supporter recognition.
OCM staff, known as OCM Champions, are supporters of the change who “sell” the
benefits to specific departments, business units, and individuals. They start working
with their target group shortly after the program team begins planning. As part of
change training, these Champions explain how the change will help the individuals
affected.
After implementation, Champions continue to make sure the change is supported
and used by the individuals whose jobs have changed. They continue to espouse the
benefits of the change and pay particular attention to anyone having difficulty with
the change. Sometimes they merely listen; in other cases they obtain additional
training or other help for the struggling individual.
The best Champions are well respected even though they may not be very high in the
organization chart. They wield informal power as opinion leaders, performing their
duties competently and with grace. Many have been with the enterprise for a long
time. Frequently, they serve as informal coaches to new employees who may be
more senior in the hierarchy. They motivate others, inspiring them to do a good job.
Other employees seek them out to determine if the people leading a major initiative
will be persistent enough to make the change stick.
Change Targets are the groups and individuals who need to change their behaviors
and their attitudes. They are the recipients of training necessary to implement the
change. As they become supporters of the change, they are usually recognized for
their support.
For more on leading change, see "8 secrets of effective change leaders."
What are the major steps in a change management program?
Organization change management programs typically have fewer tasks and greater
complexity than the program they are supporting. The OCM program has to adapt
and change on the fly to accommodate the vagaries of human nature as supporters
backslide and skeptics become supporters.
While there are different approaches to OCM, most can be summarized in the four
major steps below:
Engage. The program begins when the sponsor creates a vision describing how the
enterprise will operate after the change has been implemented. This vision should
include the benefits that will accrue to the enterprise and should describe how the
change will affect the staff. Ideally, improvements to the work environment will be
obvious to the majority of the staff.
As part of engagement, the OCM team discusses the coming change with potential
supporters to determine their willingness to support the change and to create a
sense of urgency to implement the change. The OCM team also identifies likely
skeptics and attempts to determine their concerns. In many cases, the team will
commission a formal change readiness assessment to gain a more precise
understanding of the enterprise’s willingness to change.
Plan. The OCM team identifies all departments, business units and groups that will
need to change along with key stakeholders in each. In parallel, the OCM team
analyzes how the various parts of the change will impact the way that people
perform their jobs. This analysis enables the OCM team to answer the most common
question posed during a major change, “What’s in it for me?”
As it becomes more obvious which stakeholders support the change, which are
undecided, and which don’t support the change, the OCM team creates a change
plan with specific actions for each individual and group. Individual OCM members are
assigned to work with individual stakeholders based in part on the strength of the
relationship between the OCM team member and the specific stakeholder.
During this phase, the OCM team begins to assess the degree to which stakeholders
accept the change. At this point, acceptance measures are informal and based on
impressions from meeting behavior, one-on-one discussions and other interactions.
Rollout. During implementation, the OCM team communicates with individuals at all
levels in the enterprise to gain their support for the change. Communications
typically begin with a formal announcement from the CEO, supported by videos,
emails, work station log-on announcements, town hall meetings, etc. The OCM team
hopes to empower supporters and help individuals or groups become successful
quickly. The OCM group identifies and celebrates successes publicly and rewards
individuals responsible for each success.
As the rollout continues, attitudinal surveys are frequently employed to better gauge
employee acceptance and commitment to the change. Special interventions are
created and used for individuals and groups that appear reluctant to accept the
change.
Reinforce. Because people rarely behave as others would like them to behave, the
OCM team regularly revisits and updates change goals, rewards, communications and
consequences. Experience is the best teacher. Repeated interactions with individual
stakeholders usually reveal their degree of acceptance, enabling the OCM team to adjust its approach as necessary.
Tasks, projects and behaviors that support the change should be part of individual
performance plans. Items in the performance plan need to be clear, measurable, and
achievable. In addition, these items need to be weighted appropriately against the
other goals in the performance plan.
Change management is rarely straightforward. The OCM plan may be depicted as a
Gantt chart using the same tools as the IT project plan. However, in practice, OCM
activities rarely have clear tasks, precedents and durations. Most OCM teams cycle
through the four steps above multiple times during any OCM effort. Lessons learned
at any point are incorporated into the OCM vision and communications. OCM work is
not complete until the change is fully implemented and adopted by the people
affected.
Who offers organizational change management certification?
A wide variety of universities and associations offer change management certificates
and certifications. These include:
The University of Virginia’s Darden School offers a “Managing Individual and
Organizational Change” certificate designed to create resilient leaders and adaptable
teams that can guide enterprise change.
Prosci’s Change Management Certification Program is built around a change
management methodology with supporting tools that participants apply to a current
project.
Cornell University’s SC Johnson College of Business offers a series of online courses
leading to a certificate as well as professional development credits with the Society
for Human Resource Management (SHRM).
The Society for Human Resource Management (SHRM) offers two certifications: the
SHRM Certified Professional (SHRM-CP) and the SHRM Senior Certified Professional
(SHRM-SCP).
MIT Sloan School offers “Leading Change in Complex Organizations” as part of
a Management and Leadership certification.
Michigan State University’s Eli Broad College of Business offers a
professional Certificate in Change Management focusing on helping organizations
alter existing processes, grow, introduce new products, reorganize or undertake other
actions to be more competitive.
Stanford University’s Organizational Renewal program focuses on design thinking and
innovation to implement change within an enterprise.
The Association for Talent Development focuses on improving efficiency and service
quality through a six-step change model taught through case studies.
Northwestern University’s School of Professional Studies focuses on structured change
approaches used to introduce new products, improved quality, IT systems, etc.
For more information on additional change management cert opportunities, see "7
change management certifications to boost your IT career."
Why do individuals resist change?
Resistance is a natural part of the change process. When expectations are disrupted,
individuals often feel uncomfortable. Even positive changes such as a marriage or the
birth of a child can cause discomfort. Here are some of the reasons why employees
resist change and how it affects the change management process:
Inability. Individuals may lack the necessary skills or knowledge to operate in the new
environment. Fear of the unknown can keep people from fully participating in
training. Some worry they will not be able to understand how to operate the new
system and will be overshadowed by smarter colleagues. Other groups may lack the
resources to operate in the changed environment. This can become a problem during
an acquisition if the acquiring company folds a department from the acquired
company into its department without appropriately increasing staffing. When
acquisitions are justified by claiming the merged companies will eliminate redundant
jobs, management is sometimes tempted to eliminate staff before the merger is fully
complete. Mergers that occur on paper but not in reality disappoint customers,
fragment staff loyalty, and erode IT service levels. For more on this see “Half-baked
mergers.”
Unwillingness. People who don’t believe in the change usually resist the change.
Reasons vary but can include: They see no value to the new way of operating; they
believe the change is too difficult; they perceive the change as too risky. Other
people may believe the wrong option was selected. Still others worry their job will be
less important and they will no longer be experts.
Change fatigue. Change requires a great deal of mental effort. People who switch
languages as they travel from country to country find themselves drained at the end
of each day even if everyone they visit attempts to speak the traveler’s native
language. The mental effort to understand the words spoken by individuals who do
not speak a language well requires intense concentration. Too many new systems,
reorganizations, mergers, or other changes can also create change fatigue. After a
time, most people crave stability; at some point few people will make the extra effort
required to undertake one more change.
Personal issues. Few people lead perfect lives and most worry about something.
Individuals close to retirement, facing divorce, serious illness, or other personal
issues frequently resist all changes in order to feel they retain some control over their
life. Intellectually, these individuals may understand the reasons for the change but
emotionally they often find it difficult or impossible to embrace the change. Handling
each special case with compassion builds support for the change while insensitive
handling can turn the rest of the enterprise against the change.
Resistance is not necessarily a sign of disloyalty or incompetence. Usually, it shows
that the resisting individuals either don’t agree with the vision or lack the ability to
implement the change. The best change management programs encourage people to
discuss their concerns and never suppress dissent. After all, issues cannot be
addressed if the OCM team does not know they exist.

Reference https://en.wikipedia.org/wiki/Integrated_product_team

An integrated product team (IPT) is a multidisciplinary group of people who are


collectively responsible for delivering a defined product or process.[1]

IPTs are used in complex development programs/projects for review and decision
making. The emphasis of the IPT is on involvement of all stakeholders (users,
customers, management, developers, contractors) in a collaborative forum. IPTs
may be addressed at the program level, but there may also be Oversight IPTs
(OIPTs), or Working-level IPTs (WIPTs).[2] IPTs are created most often as part of
structured systems engineering methodologies, focusing attention on
understanding the needs and desires of each stakeholder.

IPTs were introduced to the U.S. DoD in 1995 as part of the major acquisition
reforms to the way goods and services were acquired.[3]

Reference https://www.managementstudyguide.com/integrated-product-and-process-
development.htm

Integrated Product and Process Development - Meaning, Advantages and Key


Factors
Introduction
The objective of any organization is to provide customer satisfaction by building products and services which not only satisfy needs and wants but also create value for the customer. This requires product design based on customer feedback and a production process which not only minimizes cost but also provides a competitive advantage. However, most organizations tend to follow conventional production methods and processes.
In the global age of new technology and competition, however, organizations have to re-invent the way they cater to the needs of customers, as the focus on specialization and customization is ever increasing. Given this scenario, it is imperative for organizations to integrate technology and innovation within the framework of integrated product and process development.
Integrated Product and Process Development (IPPD)
Integrated product and process development combines the product design process with the process design process to create a new standard for producing competitive and high-quality products.
The integration of new technologies and methods provides a completely new dimension to the product design process. This process starts with defining the requirements of the product based on customer feedback, while considering the design layout and other constraints. Once the finer details are finalized, they are fed into CAD models where extensive testing and modeling are done to arrive at the best product.
With the integration of production methods and technology into product design, the integration of product design and process design follows naturally. Therefore, integrated product and process development can be defined as a process that starts from the product idea and runs through to the development of the final product, using modern technology and process management practices while minimizing cost and maximizing efficiency.
Advantages of Integrated Product and Process Development (IPPD)
Organizations stand to benefit greatly from the implementation of IPPD. Some of the advantages are as follows:
By using modern technologies and implementing logical steps in production design, actual production time is likely to come down, thereby reducing product delivery time.
Through optimum usage of resources and efficient processes, organizations are able to minimize the cost of production, thus improving the profitability of the organization.
Since CAD models are used extensively, the chances of product or design failure are greatly reduced, thus reducing risk for the organization.
As the focus is solely on delivering value to the customer, quality is of paramount importance and is achieved through technology and methods.
Key Factors for IPPD
There are certain factors which can vastly improve IPPD. These factors are as follows:
IPPD success is greatly dependent on agreement on the end objective, which is the successful addressing of customer requirements. All the stakeholders and management should be aligned to this single objective.
Since this is a scientific approach, its success depends on building a plan, implementing the plan, and constantly reviewing the implemented plan.
With the implementation of modern methods and technology comes the usage of modern tools and systems. These tools and systems need to be integrated within the organizational framework.
Skilled manpower is another essential; therefore, organizations need to invest in human capital.
The customer is the focal point of IPPD. Therefore, constant feedback from customers is essential for IPPD to be a success.
In summary, IPPD is an approach designed to address the concerns of the modern organization in a globalized world.

Reference https://www.atlassian.com/devops

What is DevOps?
DevOps is a set of practices that automates the processes between software
development and IT teams, in order that they can build, test, and release software
faster and more reliably. The concept of DevOps is founded on building a culture of
collaboration between teams that historically functioned in relative siloes. The
promised benefits include increased trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.

DevOps gave us an edge


“DevOps has helped us do very frequent releases, giving us an edge on time to
market. We are now able to make daily product releases as opposed to 6-month
releases, and push fixes to our customers in a span of a few hours.”
— Hamesh Chawla, VP of Engineering at Zephyr

At Atlassian, DevOps is the next most famous portmanteau (combining of two
words) next to Brangelina (Brad Pitt and Angelina Jolie), bringing together the best of
software development and IT operations. And like our jokes, it requires some
explaining.
At its essence, DevOps is a culture, a movement, a philosophy.
It's a firm handshake between development and operations that emphasizes a shift in
mindset, better collaboration, and tighter integration. It unites agile, continuous
delivery, automation, and much more, to help development and operations
teams be more efficient, innovate faster, and deliver higher value to businesses and
customers.
Who's doing DevOps?
Chef is the company behind the Chef Automate platform for DevOps workflows. Tens
of thousands of developers use Chef to test, automate, and manage infrastructure. At
the forefront of the DevOps evolution, the Seattle-based company has been releasing
products like Chef, InSpec, Habitat, and Chef Automate to advance new ways of
developing and shipping software and applications. To experiment with and refine its
own DevOps practices, Chef relies on the Atlassian platform.
History of DevOps
The DevOps movement started to coalesce some time between 2007 and 2008, when
IT operations and software development communities got vocal about what they felt
was a fatal level of dysfunction in the industry.
They railed against the traditional software development model, which called for
those who write the code to be organizationally and functionally apart from those
who deploy and support that code.
Developers and IT/Ops professionals had separate (and often competing) objectives,
separate department leadership, separate key performance indicators by which they
were judged, and often worked on separate floors or even separate buildings. The
result was siloed teams concerned only with their own fiefdoms, long hours, botched
releases, and unhappy customers.
Surely there’s a better way, they said. So the two communities got together and
started talking – with people like Patrick Debois, Gene Kim, and John Willis driving
the conversation.
What began in online forums and local meet-ups is now a major theme in the
software zeitgeist, which is probably what brought you here! You and your team are
feeling the pain caused by siloed teams and broken lines of communication within
your company.
You’re using agile methodologies for planning and development, but still struggling to
get that code out the door without a bunch of drama. You’ve heard a few things
about DevOps and the seemingly magical effect it can have on teams and think “I
want some of that magic.”
The bad news is that DevOps isn’t magic, and transformations don’t happen
overnight. The good news is that you don’t have to wait for upper management to
roll out a large-scale initiative. By understanding the value of DevOps and making
small, incremental changes, your team can embark on the DevOps journey right
away. Let’s look at each of these benefits in detail.
Infrastructure as code allowed us to perform 10x more builds without adding a single person
to our team.— Michael Knight, Build Engineer at Atlassian
What's in it for you?
Collaboration and trust
Culture is the #1 success factor in DevOps. Building a culture of shared responsibility,
transparency and faster feedback is the foundation of every high performing DevOps
team.
Teams that work in siloes often don't adhere to the 'systems thinking' of DevOps.
'Systems thinking' is being aware of how your actions not only affect your team, but
all the other teams involved in the release process. Lack of visibility and shared goals
means lack of dependency planning, misaligned priorities, finger pointing, and 'not
our problem' mentality, resulting in slower velocity and substandard quality. DevOps
is that change in mindset of looking at the development process holistically and
breaking down the barrier between Dev and Ops.
Release faster and work smarter
Speed is everything. Teams that practice DevOps release more frequently, with higher
quality and stability.
Lack of automated test and review cycles block the release to production and poor
incident response time kills velocity and team confidence. Disparate tools and
processes increase OPEX, lead to context switching, and slow down momentum.
Through automation and standardized tools and processes, teams can increase
productivity and release more frequently with fewer hiccups.
Accelerate time to resolution
The team with the fastest feedback loop is the team that thrives. Full transparency
and seamless communication enable DevOps teams to minimize downtime and
resolve issues faster than ever before.
If critical issues aren't resolved quickly, customer satisfaction tanks. Key issues slip
through the cracks in the absence of open communication, resulting in increased
tension and frustration among teams. Open communication helps Dev and Ops
teams swarm on issues, fix incidents, and unblock the release pipeline faster.
Better manage unplanned work
Unplanned work is a reality that every team faces–a reality that most often impacts
team productivity. With established processes and clear prioritization, the Dev and
Ops teams can better manage unplanned work while continuing to focus on planned
work.
Transitioning and prioritizing unplanned work across different teams and systems is
inefficient and distracts from work at hand. However, through raised visibility and
proactive retrospection, teams can better anticipate and share unplanned work.
The CALMS Framework for DevOps
Culture
Automation
Lean
Measurement
Sharing
Culture
If we could sum up DevOps culture in one word, it’d be “collaboration” – and if we
were allowed two words, they’d be “cross-functional collaboration.” (Ok, that’s more
like three words.)
All the tooling and automation in the world are useless if they aren’t accompanied by
a genuine desire on the part of development and IT/Ops professionals to work
together. Because DevOps doesn’t solve tooling problems. It solves human problems.
Therefore, it’s unlikely you’ll poke your head out of the cubicle one day, look around,
and discover that teams at your company embody DevOps culture. But there are
simple things you can do to nurture it.
Think of DevOps much like agile, but with the operations included. Forming project-
or product-oriented teams to replace function-based teams is a step in the right
direction. Include development, QA, product management, design,
operations, project management, and any other skill set the project requires. At
Atlassian, we even embed marketing with our product teams.
Few things foster collaboration like sharing a common goal and having a plan to reach
it together. At some companies, switching suddenly to project-based teams is too
much, too soon. So take smaller steps. Development teams can – and should – invite
appropriate members of the operations team to join sprint planning sessions, daily
stand-ups, and sprint demos. Operations teams can invite key developers. It’s an agile
and organic way to keep on the pulse of each other’s projects, ideas, and struggles.
The time spent listening and cross-pollinating subject-area knowledge pays for itself
by making release management and emergency troubleshooting far more efficient.
And speaking of emergencies, they’re an effective test of DevOps culture. Do
developers, operations, and customer support swarm on the problem and resolve it
as a team? Does everyone start with the assumption that their teammates made the
best decisions possible with the information and resources they had at the time? Is
the incident post-mortem about fixing processes instead of pointing fingers? If the
answer is “yes,” that’s a good indication that your team functions with DevOps
culture at its core.
Note that the most successful companies are on board with DevOps culture across
every department, and at all levels of the org chart. They have open channels of
communication, and talk regularly. They make sure everyone’s goals are aligned, and
adjust as needed. They assume that keeping customers happy is just as much product
management’s responsibility as it is the development team’s responsibility. They
understand that DevOps isn’t one team’s job. It’s everyone’s job.
Teams that practice DevOps deploy 30x more frequently, have 60x fewer failures, and
recover 160x faster.— Puppet Labs 2016 State of DevOps Report
Automation
Investing in automation eliminates repetitive manual work, yields repeatable
processes, and creates reliable systems.
Build, test, deploy, and provisioning automation are typical starting points for teams
who don’t have them in place already. And hey: what better reason for developers,
testers, and operators to work together than building systems to benefit everyone?
Teams new to automation usually start with continuous delivery: the practice of
running each code change through a gauntlet of automated tests, often facilitated by
cloud-based infrastructure, then packaging up successful builds and promoting them
up toward production using automated deploys. As you might guess, continuous
delivery is not a quick and easy thing to set up, but the return on investment is well
worth it.
Why? Computers execute tests more rigorously and faithfully than humans. These
tests catch bugs and security flaws sooner, allowing developers to fix them more
easily. And the automated deploys alert IT/Ops to server “drift” between
environments, which reduces or eliminates surprises when it’s time to release.
Another of DevOps’ major contributions is the idea of “configuration as code.”
Developers strive to create modular, composable applications because they are more
reliable and maintainable. That same thinking can be extended to the infrastructure
that hosts them, whether it lives in the cloud or on the company's own network.
True, systems are always changing. But we can create a facade of immutability by
using code for provisioning so that re-provisioning a compromised server becomes
faster than repairing it – not to mention more reliable. It reduces risk, too. Both
development and operations can incorporate new languages or technologies via the
provisioning code, and share the updates with each other. Compatibility issues
become immediately apparent, instead of manifesting in the middle of a release.
“Configuration as code” and “continuous delivery” aren’t the only types of
automation seen in the DevOps world, but they’re worth special mention because
they help break down the wall between development and operations. And when
DevOps uses automated deploys to send thoroughly tested code to identically
provisioned environments, “Works on my machine!” becomes irrelevant.
Lean
When we hear “lean” in the context of software, we usually think about eliminating
low-value activities and moving quickly – being scrappy, being agile. Even more
relevant for DevOps are the concepts of continuous improvement and embracing
failure.
A DevOps mindset sees opportunities for continuous improvement everywhere.
Some are obvious, like holding regular retrospectives so your team’s processes can
improve. Others are subtle, like A/B testing different on-boarding approaches for new
users of your product.
We have agile development to thank for making continuous improvement a
mainstream idea. Early adopters of the agile methodology proved that a simple
product in the hands of customers today is more valuable than a perfect product in
the hands of customers six months from now. If the product is improved
continuously, customers will stick around.
And guess what: failure is inevitable. So you might as well set up your team to absorb
it, recover, and learn from it (some call this “being anti-fragile”). At Atlassian, we
believe that if you’re not failing once in a while, you’re not trying hard enough.
We challenge our teams with big, hairy, audacious goals and make sure they have the
autonomy and the resources to meet them. We hire smart, ambitious people and
expect them to fail sometimes.
In the context of DevOps, failure is not a punishable offense. Teams assume that
things are bound to go pear-shaped at some point, so they build for fast detection
and rapid recovery. (Read up on Netflix’s Chaos Monkey for an excellent example.)
Postmortems focus on where processes fell down and how to strengthen them – not
on which team member f'ed up the code. Why? Because continuous improvement
and failure go hand in hand.
DevOps has evolved so that development owns more operations – and that’s how Chef
works. We can’t just throw it over the wall anymore. Our engineers are responsible for QA,
writing, and running their own tests to get the software out to customers.— Julian Dunn,
Product Manager at Chef
Measurement
It’s hard to prove your continuous improvement efforts are actually improving
anything without data. Fortunately, there are loads of tools and technologies for
measuring performance like how much time users spend in your product, whether
that blog post generated any sales, or how often critical alerts pop up in your logs.
Although you can measure just about anything, that doesn’t mean you have to (or
should) measure everything. Take a page from agile development and start with the
basics:
How long did it take to go from development to deployment?
How often do recurring bugs or failures happen?
How long does it take to recover after a system failure?
How many people are using your product right now?
How many users did you gain / lose this week?
With a solid foundation in place, it’s easier to capture more sophisticated metrics
around feature usage, customer journeys, and service level agreements (SLAs). The
information you get comes in handy when it’s time for road mapping and spec’ing
out your next big move.
All this juicy data will help your team make decisions, but it’s even more powerful
when shared with other teams – especially teams in other departments. For example,
your marketing team wants shiny new features they can sell. But meanwhile, you’re
seeing high customer churn because the product is awash in technical debt. Providing
user data that supports your roadmap – even if it’s light on features and heavy on
fixes - makes it easier to build consensus and get buy in from stakeholders.
Devops isn't any single person's job. It's everyone's job.— Christophe Capel, Principal
Product Manager, Jira Service Desk
Sharing
The long-standing friction between development and operations teams is largely due
to a lack of common ground. We believe that sharing responsibility and success will
go a long way toward bridging that divide. Developers can win instant goodwill by
helping to carry one of operations’ biggest burdens: the pager. DevOps is big on the
idea that the same people who build an application should be involved in shipping
and running it.
This doesn’t mean that you hire developers and simply expect them to be excellent
operators as well. It means that developers and operators pair with each other in
each phase of the application’s lifecycle.
Teams that embrace DevOps often have a rotating role whereby developers address
issues caught by end users while, at the same time, troubleshooting production problems.
This person responds to urgent customer-reported issues, creating patches when
necessary, and works through the backlog of customer-reported defects. The
“developer on support” learns a lot about how the application is used in the wild.
And by being highly available to the operations team, the development team builds
trust and mutual respect.
Slogging through the rough patches together makes celebrating successes all the
more sweet. You’ll know DevOps culture has taken hold at your company when you
see the development team bring bagels for the operations team on release day.
Positive feedback from peers motivates us as much as our paychecks and career
ambitions. Publicly recognizing a teammate who detected a nasty bug before it went
live means a lot. If your department has discretionary budget for employee kudos,
don’t let it go unused!

Reference https://en.wikipedia.org/wiki/Secure_coding

Secure coding is the practice of developing computer software in a way that guards
against the accidental introduction of security vulnerabilities. Defects, bugs and
logic flaws are consistently the primary cause of commonly exploited software
vulnerabilities.[1] Through the analysis of thousands of reported vulnerabilities,
security professionals have discovered that most vulnerabilities stem from a
relatively small number of common software programming errors. By identifying
the insecure coding practices that lead to these errors and educating developers on
secure alternatives, organizations can take proactive steps to help significantly
reduce or eliminate vulnerabilities in software before deployment.

Today’s architectures rely heavily on software and applications. The architecture
itself includes hardware resources, including the typical components such as the
central processing unit (CPU), memory, input/output processing, and storage. The
operating system, which is fundamental to any technology architecture, is responsible not only for controlling the hardware resources, but also for providing security mechanisms to protect them, as well as for providing resource access permissions and safeguards against misuse. Applications are used by the architecture to allow interaction with, and provide an interface to, the users. Applications today provide much more functionality than ever, which also makes them easier for attackers to exploit through vulnerabilities that may exist in the functionality provided. Security controls, therefore, need to be designed and built into the
software to allow the users more control over the functionality, but at the same
time, protect against exploits and vulnerabilities, and ultimately to protect the value
of the information being processed through the application. There are many
vulnerabilities and exploits that can be introduced in the application, such as when
a buffer overflow attack takes advantage of improper parameter checking within
the application.
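As an illustration of the parameter checking idea above, the minimal Java sketch below validates the length and character set of a user-supplied value before it is used; the class name, method name, and pattern are purely hypothetical examples, not part of any standard.

import java.util.regex.Pattern;

public class InputValidator {

    // Accept only letters, digits, dot, dash and underscore, 1 to 64 characters
    private static final Pattern SAFE_USERNAME = Pattern.compile("^[A-Za-z0-9._-]{1,64}$");

    // Reject anything that does not match the expected length and character set
    public static String validateUsername(String input) {
        if (input == null || !SAFE_USERNAME.matcher(input).matches()) {
            throw new IllegalArgumentException("Invalid username supplied");
        }
        return input;
    }

    public static void main(String[] args) {
        System.out.println(validateUsername("alice_01"));   // accepted
        System.out.println(validateUsername("alice;drop")); // rejected with an exception
    }
}

Checking every parameter against an explicit allow-list like this, before it reaches lower-level code, is one simple way to reduce the exposure that improper parameter checking creates.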

Another such example might be inadequate data validation, which can lead to all kinds of escalation of privileges and other exploits. Today’s software environments are also distributed, meaning they are connected to many other environments, architectures, networks, etc. Distributed applications present a particular challenge in terms of adequate security because of the complexity of the information being passed between components in the distributed architectures. The architectures that software is part of today are complex and ever changing. The functionality that software provides today is much more complex as well, so protecting it from a security perspective is also very challenging. Protecting the application itself and the environment that it runs in begins with designing security into the functionality of the application, which is written in some sort of programming language.

During development phases, developers need to write code in some sort of
programming language. There are many programming languages that have been
developed over the years. A programming
language is a set of instructions that tell the computer what operations to perform.
Programming languages have evolved in generations, and each language can be classified into one of the generations described below. Those in the earlier generations are closer in form to the binary language of the computer. Both machine and assembly languages are considered low-level languages. As programming languages have evolved, they have become easier to use and more similar to the language people use to communicate; in other words, they have become higher level languages. High-level languages are easier for developers to use than low-level languages and, in some cases, can be used to produce programs more quickly and more efficiently. In addition, high-level languages are considered more beneficial because they enforce coding standards and development methods that can provide a better level of security. On the other hand, higher-level languages can also work against proper security: they can automate certain functions and provide complicated functionality for the application, implemented by the programming environment or tool, the internal details of which may be poorly understood by the designers and developers. As a result, high-level languages may introduce security vulnerabilities in ways that are not apparent to designers, developers, and security professionals.

Reference https://www.includehelp.com/basics/generations-of-programming-
language.aspx

Generations of programming language


Programming languages have been developed over the years in a phased manner. Each phase of development has made programming languages more user-friendly, easier to use, and more powerful. Each phase of improvement made in the development of programming languages can be referred to as a generation. In terms of their performance, reliability, and robustness, programming languages can be grouped into five different generations:
First generation languages (1GL)
Second generation languages (2GL)
Third generation languages (3GL)
Fourth generation languages (4GL)
Fifth generation languages (5GL)
1. First Generation Language (Machine language)
The first generation programming languages are also called low-level programming languages because they were used to program the computer system at a very low level of abstraction, i.e. at the machine level. Machine language, also referred to as the native language of the computer system, is the first generation programming language. In machine language, a programmer deals only with binary numbers.
Advantages of first generation language
They are translation free and can be directly executed by the computers.
The programs written in these languages are executed very speedily and efficiently by
the CPU of the computer system.
The programs written in these languages utilize the memory in an efficient manner
because it is possible to keep track of each bit of data.
2. Second Generation language (Assembly Language)
The second generation programming languages also belong to the category of low-level programming languages. The second generation comprises assembly languages, which use the concept of mnemonics for writing programs. In assembly language, symbolic names are used to represent the opcode and the operand parts of an instruction.
Advantages of second generation language
It is easier to develop, understand, and modify programs written in these languages compared to those written in the first generation programming language.
The programs written in these languages are less prone to errors and therefore can be maintained with great ease.
3. Third Generation languages (High-Level Languages)
The third generation programming languages were designed to overcome the various
limitations of the first and second generation programming languages. The languages
of the third and later generations are considered high-level languages because
they enable the programmer to concentrate only on the logic of the programs
without considering the internal architecture of the computer system.
Advantages of third generation programming language
It is easy to develop, learn, and understand programs in these languages.
As programs written in these languages are less prone to errors, they are easy to maintain.
Programs written in these languages can be developed in much less time compared to the first and second generation languages.
Examples: FORTRAN, ALGOL, COBOL, C++, C
4. Fourth generation language (Very High-level Languages)
The languages of this generation are considered very high-level programming languages. Developing software in the third generation languages required a lot of time and effort, which affected the productivity of the programmer. The fourth generation programming languages were designed and developed to reduce the time, cost, and effort needed to develop different types of software applications.
Advantages of fourth generation languages
These programming languages allow the efficient use of data by implementing various databases.
They require less time, cost, and effort to develop different types of software applications.
Programs developed in these languages are highly portable compared to programs developed in the languages of other generations.
Examples: SQL, CSS, ColdFusion
5. Fifth generation language (Artificial Intelligence Language)
The programming languages of this generation mainly focus on constraint programming. The major fields in which the fifth generation programming languages are employed are Artificial Intelligence and Artificial Neural Networks.
Advantages of fifth generation languages
These languages can be used to query databases in a fast and efficient manner.
In this generation of languages, the user can communicate with the computer system in a simple and easy manner.
Examples: Mercury, Prolog, OPS5

Reference

Procedural programming is a programming paradigm, derived from structured


programming, based on the concept of the procedure call. Procedures,
also known as routines, subroutines, or functions, simply contain a series of
computational steps to be carried out. Any given procedure might be called at any
point during a program's execution, including by other procedures or itself. The first
major procedural programming languages appeared circa 1957–1964,
including Fortran, ALGOL, COBOL, PL/I and BASIC.[1] Pascal and C were published
circa 1970–1972.
Computer processors provide hardware support for procedural programming
through a stack register and instructions for calling procedures and returning from
them. Hardware support for other types of programming is possible, but no attempt
was commercially successful (for example Lisp machines or Java
processors).

Procedures and modularity


Main article: Modular programming
Modularity is generally desirable, especially in large, complicated programs. Inputs
are usually specified syntactically in the form of arguments and the outputs delivered
as return values.
Scoping is another technique that helps keep procedures modular. It prevents the
procedure from accessing the variables of other procedures (and vice versa),
including previous instances of itself, without explicit authorization.
Less modular procedures, often used in small or quickly written programs, tend to
interact with a large number of variables in the execution environment, which other
procedures might also modify.
Because of the ability to specify a simple interface, to be self-contained, and to be
reused, procedures are a convenient vehicle for making pieces of code written by
different people or different groups, including through programming libraries.
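To make the idea of procedures, arguments, and return values concrete, here is a small sketch written in Java (the language used elsewhere in these notes); the static methods below behave as self-contained procedures, and the names are illustrative only.

public class PayrollProcedures {

    // A procedure: inputs arrive as arguments, the output is the return value
    static double grossPay(double hourlyRate, double hoursWorked) {
        return hourlyRate * hoursWorked;
    }

    // Another procedure that reuses the first one through a procedure call
    static double netPay(double hourlyRate, double hoursWorked, double taxRate) {
        double gross = grossPay(hourlyRate, hoursWorked); // local variable, invisible to other procedures (scoping)
        return gross - (gross * taxRate);
    }

    public static void main(String[] args) {
        System.out.println(netPay(20.0, 40.0, 0.25)); // prints 600.0
    }
}

Because each procedure depends only on its parameters and returns a value, it can be reused from many points in a program or packaged into a library, as described above.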
Comparison with other programming paradigms
Imperative programming
Procedural programming languages are also imperative languages, because they
make explicit references to the state of the execution environment. This could be
anything from variables (which may correspond to processor registers) to something
like the position of the "turtle" in the Logo programming language.
Often, the terms "procedural programming" and "imperative programming" are used
synonymously. However, procedural programming relies heavily on blocks and scope,
whereas imperative programming as a whole may or may not have such features. As
such, procedural languages generally use reserved words that act on blocks, such
as if, while, and for, to implement control flow, whereas non-structured imperative
languages use goto statements and branch tables for the same purpose.
Object-oriented programming
The focus of procedural programming is to break down a programming task into a
collection of variables, data structures, and subroutines, whereas in object-oriented
programming it is to break down a programming task into objects that expose
behavior (methods) and data (members or attributes) using interfaces. The most
important distinction is that while procedural programming uses procedures to
operate on data structures, object-oriented programming bundles the two together,
so an "object", which is an instance of a class, operates on its "own" data structure.[2]
Nomenclature varies between the two, although they have similar semantics: roughly speaking, a procedural module corresponds to an object-oriented class, a procedure to a method, a record to an object, and a procedure call to a message (method invocation).
Functional programming
The principles of modularity and code reuse in practical functional languages are
fundamentally the same as in procedural languages, since they both stem
from structured programming. So for example:
Procedures correspond to functions. Both allow the reuse of the same code in
various parts of the programs, and at various points of its execution.
By the same token, procedure calls correspond to function application.
Functions and their invocations are modularly separated from each other in the same
manner, by the use of function arguments, return values and variable scopes.
The main difference between the styles is that functional programming languages
remove or at least deemphasize the imperative elements of procedural programming.
The feature set of functional languages is therefore designed to support writing
programs as much as possible in terms of pure functions:
Whereas procedural languages model execution of the program as a sequence of
imperative commands that may implicitly alter shared state, functional programming
languages model execution as the evaluation of complex expressions that only
depend on each other in terms of arguments and return values. For this reason,
functional programs can have a free order of code execution, and the languages may
offer little control over the order in which various parts of the program are executed.
(For example, the arguments to a procedure invocation in Scheme are executed in an
arbitrary order.)
Functional programming languages support (and heavily use) first-class
functions, anonymous functions and closures, although these concepts are being
included in newer procedural languages.
Functional programming languages tend to rely on tail call optimization and higher-
order functions instead of imperative looping constructs.
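As a rough Java illustration of some of these ideas (first-class and anonymous functions, and a higher-order operation replacing an explicit loop), the sketch below uses the standard java.util.function and stream APIs; it only demonstrates the concepts, since Java itself remains an imperative, object-oriented language.

import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class FunctionalStyle {
    public static void main(String[] args) {
        // A first-class, anonymous function (lambda) assigned to a variable
        Function<Integer, Integer> square = x -> x * x;

        // A higher-order operation (map) replaces an explicit imperative loop
        List<Integer> squares = List.of(1, 2, 3, 4).stream()
                .map(square)
                .collect(Collectors.toList());

        System.out.println(squares); // prints [1, 4, 9, 16]
    }
}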
Many functional languages, however, are in fact impurely functional and offer
imperative/procedural constructs that allow the programmer to write programs in
procedural style, or in a combination of both styles. It is common
for input/output code in functional languages to be written in a procedural style.
There do exist a few esoteric functional languages (like Unlambda) that
eschew structured programming precepts for the sake of being difficult to program in
(and therefore challenging). These languages are the exception to the common
ground between procedural and functional languages.
Logic programming
In logic programming, a program is a set of premises, and computation is performed
by attempting to prove candidate theorems. From this point of view, logic programs
are declarative, focusing on what the problem is, rather than on how to solve it.
However, the backward reasoning technique, implemented by SLD resolution, used to
solve problems in logic programming languages such as Prolog, treats programs as
goal-reduction procedures. Thus clauses of the form:
H :- B1, …, Bn.
have a dual interpretation, both as procedures:
to show/solve H, show/solve B1 and … and Bn
and as logical implications:
B1 and … and Bn implies H.
Experienced logic programmers use the procedural interpretation to write programs that are effective and efficient, and they use the declarative interpretation to help ensure that programs are correct.

Reference https://medium.com/@cancerian0684/what-are-four-basic-principles-of-object-
oriented-programming-645af8b43727

There are four major principles that make a language object oriented: Encapsulation, Data Abstraction, Polymorphism, and Inheritance. These are also called the four pillars of Object Oriented Programming.
Encapsulation
Encapsulation is the mechanism of hiding the data implementation by restricting access to public methods. Instance variables are kept private and accessor methods are made public to achieve this.
For example, we are hiding the name and dob attributes of the Employee class in the code snippet below.
Encapsulation — private instance variable and public accessor methods.
import java.util.Date;

public class Employee {
    // Instance variables are kept private (data hiding)
    private String name;
    private Date dob;

    // Public accessor methods are the only way to read or change the data
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Date getDob() {
        return dob;
    }

    public void setDob(Date dob) {
        this.dob = dob;
    }
}

Abstraction
Abstract means a concept or an idea which is not associated with any particular instance. Using an abstract class or interface, we express the intent of the class rather than the actual implementation. In a way, one class should not need to know the inner details of another in order to use it; just knowing the interface should be good enough.
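A minimal sketch of abstraction in the same Java style as the snippets above; the Shape and Circle names are invented for illustration. Callers work with the abstract type and never need to know the inner details of a particular implementation.

// Callers program against this abstraction, not a concrete implementation
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    double area() {
        // Implementation detail hidden behind the abstract Shape type
        return Math.PI * radius * radius;
    }
}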
Inheritance
Inheritance expresses an "is-a" and/or "has-a" relationship between two objects. Using inheritance, derived classes can reuse the code of existing super classes. In Java, the concept of "is-a" is based on class inheritance (using extends) or interface implementation (using implements).
For example, FileInputStream "is-a" InputStream that reads from a file.
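A small illustrative sketch of the "is-a" relationship using extends; the class names are examples only. SavingsAccount inherits and reuses the deposit logic already written in Account.

class Account {
    protected double balance;

    void deposit(double amount) {
        balance += amount;
    }
}

// SavingsAccount "is-a" Account and reuses its deposit() code
class SavingsAccount extends Account {
    void addInterest(double rate) {
        balance += balance * rate;
    }
}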
Polymorphism
Polymorphism means one name, many forms. It is of two types: static and dynamic. Static polymorphism is achieved using method overloading, and dynamic polymorphism is achieved using method overriding. It is closely related to inheritance. We can write code that works on the superclass, and it will work with any subclass type as well.
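A brief sketch, with hypothetical class names, of the two flavours just mentioned: overloading is resolved at compile time, overriding at run time. The collection example that follows shows dynamic polymorphism in more detail.

class Printer {
    // Static polymorphism: same method name, different parameter lists (overloading)
    void print(int value)    { System.out.println("int: " + value); }
    void print(String value) { System.out.println("String: " + value); }
}

class Animal {
    void speak() { System.out.println("..."); }
}

class Dog extends Animal {
    // Dynamic polymorphism: the overriding method is selected at run time
    @Override
    void speak() { System.out.println("Woof"); }
}

class PolymorphismDemo {
    public static void main(String[] args) {
        new Printer().print(42);   // which print() runs is decided at compile time
        Animal a = new Dog();
        a.speak();                 // prints "Woof" – decided by the object's real type at run time
    }
}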
Example
The Java collections framework has an interface called java.util.Collection; ArrayList and TreeSet are two different implementations of this interface. ArrayList maintains the insertion order of elements, while TreeSet orders its elements by their natural order or by a comparator (if supplied). Now if we write a method that accepts a collection and prints its elements, the actual object (ArrayList or TreeSet) passed at runtime will decide the behavior of this method.
Polymorphic print method
public void print(Collection<String> collection) {
    for (String s : collection) {
        System.out.println("s = " + s);
    }
}

Passing an ArrayList:
Collection<String> collection1 = new ArrayList<>();
collection1.add("A");
collection1.add("D");
collection1.add("B");
collection1.add("C");
print(collection1); // elements are printed in the insertion order of the ArrayList

Program output:
s = A
s = D
s = B
s = C

Passing a TreeSet:
Collection<String> collection2 = new TreeSet<>();
collection2.add("A");
collection2.add("D");
collection2.add("B");
collection2.add("C");
print(collection2); // elements are printed in their natural order

Program output:
s = A
s = B
s = C
s = D

We just saw that the print() method's behavior is determined by the actual type of
object passed to it at run time. That’s polymorphism!
Important Facts
Other than objects of type java.lang.Object, all Java objects are polymorphic, i.e. they pass the IS-A test for their own type as well as for class Object.
A reference variable's type determines the methods that can be invoked on the object that the variable is referencing. In the example above, the print() method can only invoke methods that are listed on the Collection interface, irrespective of the type of the actual object passed to this method.
Polymorphic method invocation applies only to instance methods (not to static methods, not to variables). Only overridden instance methods are dynamically invoked based on the real object's type at runtime.

Reference

Polyinstantiation in computer science is the concept of a type (class, database row, or otherwise) being instantiated into multiple independent instances (objects, copies).
It may also indicate, such as in the case of database polyinstantiation, that two
different instances have the same name (identifier, primary key).

Operating system security


In operating system security, polyinstantiation is the concept of creating a user- or process-specific view of a shared resource; i.e. process A cannot affect process B by writing malicious code to a shared resource, such as the UNIX directory /tmp.[1][2]
Polyinstantiation of shared resources has similar goals to process isolation, an application of virtual memory, where processes are assigned their own isolated virtual address space to prevent process A from writing into the memory space of process B.

Database

In databases, polyinstantiation is database-related SQL (structured query language) terminology. It allows a relation to contain multiple rows with the same primary key; the multiple instances are distinguished by their security levels.[3] It occurs because of mandatory policy. Depending on the security level established, one record contains sensitive information and the other one does not; that is, a user will see the record's information depending on his or her level of confidentiality, as previously dictated by the company's policy.[4]

Cryptography
In cryptography, polyinstantiation is the existence of a cryptographic key in more than
one secure physical location.

In object-oriented systems, objects are encapsulated. Encapsulation protects the
object by denying direct access to view or interact with what is located inside the
object; this is referred to as data hiding. It is not possible to see what is contained in
the object because it is encapsulated. Encapsulation can be used to protect the
object, since it does not allow any other object to see its data
from outside. This makes sense from a security perspective because no object
should be able to access or see another object’s data directly.
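A minimal sketch of encapsulation in Java (the class and field names here are our own illustration, not from the notes): the balance field is hidden behind private access, so other objects can only interact with it through the methods the class chooses to expose, and the object can enforce its own rules.

public class Account {
    private double balance;                // hidden state: not visible to other objects

    public double getBalance() {           // controlled read access
        return balance;
    }

    public void deposit(double amount) {
        if (amount <= 0) {                 // the object enforces its own rules
            throw new IllegalArgumentException("amount must be positive");
        }
        balance += amount;
    }
}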

The trend in computing over the last few decades has been a move toward
a new age of distributed computing. Distributed computing allows the sharing of
resources. The same concept of distributed environments can be applied in
software development. Distributed development architectures allow applications to
be divided into logical objects that are called components, and each component can
exist in different locations. The components can then communicate
with each other, and programs can call the components as required. This
development architecture allows applications to download code from remote
machines onto a user’s local host in a manner that is seamless to the user.
Applications today can be built using this distributed architecture constructed with
software systems that are based on distributed objects. Examples may include
Common Object
Request Broker Architecture (CORBA), Java Remote Method Invocation (JRMI),
Enterprise JavaBean (EJB), and Distributed Component Object Model (DCOM).

A distributed object-oriented system allows parts of the system to be located on
separate computers within a network. The object system itself is a compilation of
reusable self-contained objects of code

designed to perform specific business functions. Objects can communicate with each
other, even though they may reside on different machines across the network. To
standardize this process, the Object Management Group (OMG) created a standard
for finding objects, initiating objects, and sending requests to the objects. The
standard is called the Object Request Broker (ORB), which is part
of the Common Object Request Broker Architecture (CORBA) mentioned above.

Reference https://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture

The Common Object Request Broker Architecture (CORBA) is a standard defined by
the Object Management Group (OMG) designed to facilitate the communication of
systems that are deployed on diverse platforms. CORBA enables collaboration
between systems on different operating systems, programming languages, and
computing hardware. CORBA uses an object-oriented model, although the systems
that use CORBA do not have to be object-oriented. CORBA is an example of
the distributed object paradigm.

CORBA enables communication between software written in different languages
and running on different computers. Implementation details from specific operating
systems, programming languages, and hardware platforms are all removed from the
responsibility of developers who use CORBA. CORBA normalizes the method-call
semantics between application objects residing either in the same address-space
(application) or in remote address-spaces (same host, or remote host on a
network). Version 1.0 was released in October 1991.
CORBA uses an interface definition language (IDL) to specify the interfaces that
objects present to the outer world. CORBA then specifies a mapping from IDL to a
specific implementation language like C++ or Java. Standard mappings exist
for Ada, C, C++, C++11, COBOL, Java, Lisp, PL/I, Object
Pascal, Python, Ruby and Smalltalk. Non-standard mappings exist
for C#, Erlang, Perl, Tcl and Visual Basic implemented by object request
brokers (ORBs) written for those languages.
The CORBA specification dictates there shall be an ORB through which an application
would interact with other objects. This is how it is implemented in practice:
The application simply initializes the ORB, and accesses an internal Object Adapter,
which maintains things like reference counting, object (and reference) instantiation
policies, and object lifetime policies.
The Object Adapter is used to register instances of the generated code classes.
Generated code classes are the result of compiling the user IDL code, which
translates the high-level interface definition into an OS- and language-specific class
base for use by the user application. This step is necessary in order to enforce CORBA
semantics and provide a clean user process for interfacing with the CORBA
infrastructure.
Some IDL mappings are more difficult to use than others. For example, due to the
nature of Java, the IDL-Java mapping is rather straightforward and makes usage of
CORBA very simple in a Java application. This is also true of the IDL to Python
mapping. The C++ mapping requires the programmer to learn datatypes that predate
the C++ Standard Template Library (STL). By contrast, the C++11 mapping is easier to
use, but requires heavy use of the STL. Since the C language is not object-oriented,
the IDL to C mapping requires a C programmer to manually emulate object-oriented
features.
In order to build a system that uses or implements a CORBA-based distributed object
interface, a developer must either obtain or write the IDL code that defines the
object-oriented interface to the logic the system will use or implement. Typically, an
ORB implementation includes a tool called an IDL compiler that translates the IDL
interface into the target language for use in that part of the system. A traditional
compiler then compiles the generated code to create the linkable-object files for use
in the application. The generated code is then used within the CORBA infrastructure to
provide the high-level paradigm for remote interprocess
communication using CORBA. The CORBA specification further addresses data
typing, exceptions, network protocols, communication timeouts, etc. For example:
Normally the server side has the Portable Object Adapter (POA) that redirects calls
either to the local servants or (to balance the load) to the other servers. The CORBA
specification leaves various aspects of the distributed system to the
application to define, including object lifetimes (although reference counting
semantics are available to applications), redundancy/fail-over, memory management,

dynamic load balancing, and application-oriented models such as the separation
between display/data/control semantics (e.g. see Model–view–controller), etc.
In addition to providing users with a language and a platform-neutral remote
procedure call (RPC) specification, CORBA defines commonly needed services such as
transactions and security, events, time, and other domain-specific interface models.

Reference https://en.wikipedia.org/wiki/Library_(computing)

Read this article further to clear your concepts, though the material below is more than
enough.

"Software library" redirects here. It is not to be confused with library software.


This article is about a software development concept. For a repository of digital
assets, see Digital library.
Illustration of an application which uses libvorbisfile to play an Ogg Vorbis file
In computer science, a library is a collection of non-volatile resources used
by computer programs, often for software development. These may include
configuration data, documentation, help data, message templates, pre-written
code and subroutines, classes, values or type specifications. In IBM's OS/360 and its
successors they are referred to as partitioned data sets.
A library is also a collection of implementations of behavior, written in terms of a
language, that has a well-defined interface by which the behavior is invoked. For
instance, people who want to write a higher level program can use a library to
make system calls instead of implementing those system calls over and over again.

In addition, the behavior is provided for reuse by multiple independent programs. A
program invokes the library-provided behavior via a mechanism of the language. For
example, in a simple imperative language such as C, the behavior in a library is
invoked by using C's normal function-call. What distinguishes the call as being to a
library function, versus being to another function in the same program, is the way
that the code is organized in the system.
Library code is organized in such a way that it can be used by multiple programs that
have no connection to each other, while code that is part of a program is organized to
be used only within that one program. This distinction can gain a hierarchical notion
when a program grows large, such as a multi-million-line program. In that case, there
may be internal libraries that are reused by independent sub-portions of the large
program. The distinguishing feature is that a library is organized for the purposes of
being reused by independent programs or sub-programs, and the user only needs to
know the interface and not the internal details of the library.
The value of a library lies in the reuse of the behavior. When a program invokes a
library, it gains the behavior implemented inside that library without having to
implement that behavior itself. Libraries encourage the sharing of code in
a modular fashion, and ease the distribution of the code.
The behavior implemented by a library can be connected to the invoking program at
different program lifecycle phases. If the code of the library is accessed during the
build of the invoking program, then the library is called a static library.[1] An
alternative is to build the executable of the invoking program and distribute that,
independently of the library implementation. The library behavior is connected after
the executable has been invoked to be executed, either as part of the process of
starting the execution, or in the middle of execution. In this case the library is called
a dynamic library (loaded at runtime). A dynamic library can be loaded and linked
when preparing a program for execution, by the linker. Alternatively, in the middle of
execution, an application may explicitly request that a module be loaded.
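The paragraph above is about libraries in general. As a loosely analogous, hedged Java sketch of loading a module only when it is needed (the class name below is invented purely for illustration), the runtime can be asked to locate and link a class at execution time rather than at build time:

public class DynamicLoadDemo {
    public static void main(String[] args) throws Exception {
        // Load a class by name at runtime instead of linking it at build time.
        // "com.example.ReportPlugin" is a hypothetical class name used only for illustration.
        Class<?> plugin = Class.forName("com.example.ReportPlugin");
        Object instance = plugin.getDeclaredConstructor().newInstance();
        System.out.println("Loaded " + plugin.getName() + " dynamically: " + instance);
    }
}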
Most compiled languages have a standard library although programmers can also
create their own custom libraries. Most modern software systems provide libraries
that implement the majority of the system services. Such libraries
have commoditized the services which a modern application requires. As such, most
code used by modern applications is provided in these system libraries.

Library Benefits
The benefits of libraries are many. Software libraries can contain well-coded objects
that are implemented properly, well secured, and kept up to date with security
patches, with an iterative feedback mechanism to address bugs and faults as they are
identified. Software libraries can also have the following advantages:

Increased dependability: Reused software that has already been developed and tested
can be more dependable than new software. This is because the software has
been tested to reveal design and implementation faults; these can therefore
be fixed, and the software can then be reused over and over again.

Reduced process risk: If the software already exists, organizations know exactly the cost of
that software. This is an important factor for project management as it
reduces the margin of error in estimating project costs. This is particularly true in
large-scale development projects.

Effective use of specialists: Instead of developers doing the same work on different
projects, specialists can be used across projects. These
specialists can develop reusable software that encapsulates their knowledge. This can
include security specialists.

Standards compliance: Some standards, such as user interface standards, can be
implemented as a set of standard reusable components. For example, if menus in a
user interface are implemented using reusable components, all applications present
the same menu formats to users. The use of standard user interfaces improves
dependability as users are less likely to make mistakes when presented with a familiar
interface.

Accelerated development: In many cases, bringing a system to market as early as
possible is often more important than overall development costs. Reusing well
developed software can speed up system production because both development and
validation time should be reduced.

A programming tool or software development tool is a program or application that
software developers use to create, debug, maintain, and otherwise support development
efforts and applications. Typical programming tools include the following:
• Binary compatibility analysis tools
• Bug databases
• Build tools
• Code coverage
• Compilation and linking tools
• Debuggers
• Documentation generators
• Library interface generators
• Integration tools
• Memory debuggers
• Revision control tools
• Scripting languages
• Search tools
• Source code editors
• Source code generation tools

• Static code analysis tools
• Unit testing tools

The focus of the security professional needs to be on awareness of the existence and
availability of these toolsets and how they may pertain to the security of the systems
that the security professional is being asked to manage and maintain. Relying on
experts in this area as needed, to better understand the impact of using one
or more of these items in a production system, is very important if the overall
security of the system is to be addressed properly.

Integrated development environments (IDEs) combine the features of many tools
and capabilities into one environment for use by the developer and other
stakeholders. Integrated development environments are designed to maximize
developer productivity by providing re-usable components with similar user
interfaces.

Integrated development environments also present a single architecture in which all
development may be done. The environment typically consists of a source code
editor, build automation tools, and debuggers. They may also have a class browser,
an object browser, and a class hierarchy diagram for use in object-oriented software
development. Sometimes, version control is also included as part of the
environment, as are tools that help with the development of a graphical user
interface (GUI). An IDE for OOP usually features a class browser, tools to produce
class hierarchy diagrams, and an object inspector. By using such a comprehensive
toolset, developers can realize many benefits, including more efficient access and
use of system resources. From a security perspective, more efficient use of security
controls can also be a benefit.

Reference http://docs.flatpak.org/en/latest/basic-concepts.html

Runtimes
Runtimes provide the basic dependencies that are used by applications. Each
application must be built against a runtime, and this runtime must be installed on a
host system in order for the application to run (Flatpak can automatically install the
runtime required by an application). Multiple different runtimes can be installed at
the same time, including different versions of the same runtime.

Runtimes are distribution agnostic and do not depend on particular distribution
versions. This means that they provide a stable, cross-distribution base for
applications, and allow applications to continue to work irrespective of operating
system updates.

Since applications represent the largest attack vector, there are a number of
weaknesses and threats that are important to be aware of and address. These
include, but are not limited to, the topics described below. Secure coding
practices in development environments need to be adopted to limit the exposure
to these weaknesses. Proper awareness, education, training and security skills need to
become part of the culture of the development environment to address security
properly. The security professional needs to be heavily involved in addressing and
minimizing the risks associated with the following topics.

Reference https://www.onespan.com/blog/social-engineering-win-battle-trust-infographic

Please review this infographic as it explains all the key concepts.

Reference https://www.cloudflare.com/learning/security/threats/buffer-overflow/

What is buffer overflow?
Buffer overflow is an anomaly that occurs when software writing data to a buffer
overflows the buffer’s capacity, resulting in adjacent memory locations being
overwritten. In other words, too much information is being passed into a container
that does not have enough space, and that information ends up replacing data in
adjacent containers. Buffer overflows can be exploited by attackers with a goal of
modifying a computer’s memory in order to undermine or take control of program
execution.

What’s a buffer?
A buffer, or data buffer, is an area of physical memory storage used to temporarily
store data while it is being moved from one place to another. These buffers typically
live in RAM memory. Computers frequently use buffers to help improve
performance; most modern hard drives take advantage of buffering to efficiently
access data, and many online services also use buffers. For example, buffers are
frequently used in online video streaming to prevent interruption. When a video is

streamed, the video player downloads and stores perhaps 20% of the video at a time
in a buffer and then streams from that buffer. This way, minor drops in connection
speed or quick service disruptions won’t affect the video stream performance.

Buffers are designed to contain specific amounts of data. Unless the program utilizing
the buffer has built-in instructions to discard data when too much is sent to the
buffer, the program will overwrite data in memory adjacent to the buffer.

Buffer overflows can be exploited by attackers to corrupt software. Despite being
well-understood, buffer overflow attacks are still a major security problem that
torments cyber-security teams. In 2014 a threat known as ‘Heartbleed’ exposed
hundreds of millions of users to attack because of a buffer overflow vulnerability in
SSL software.

How do attackers exploit buffer overflows?
An attacker can deliberately feed a carefully crafted input into a program that will
cause the program to try and store that input in a buffer that isn’t large enough,
overwriting portions of memory connected to the buffer space. If the memory layout
of the program is well-defined, the attacker can deliberately overwrite areas known
to contain executable code. The attacker can then replace this code with his own
executable code, which can drastically change how the program is intended to work.
For example, if the overwritten part of memory contains a pointer (an object that
points to another place in memory), the attacker’s code could replace that pointer with
one that points to an exploit payload. This can transfer control of the
whole program over to the attacker’s code.

Who is vulnerable to buffer overflow attacks?
Certain coding languages are more susceptible to buffer overflow than others. C and
C++ are two popular languages with high vulnerability, since they contain no built-in
protections against accessing or overwriting data in their memory. Windows, Mac
OSX, and Linux all contain code written in one or both of these languages.
More modern languages like Java, Perl, and C# have built-in features that help
reduce the chances of buffer overflow, but cannot prevent it altogether.
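As a small hedged illustration of such built-in protection (this snippet is ours, not from the referenced article), Java checks array bounds at runtime and raises an exception instead of silently overwriting adjacent memory:

public class BoundsCheckDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[8];             // a "buffer" that holds 8 bytes
        try {
            buffer[8] = 0x41;                    // index 8 is out of range for length 8
        } catch (ArrayIndexOutOfBoundsException e) {
            // Java refuses the write; adjacent memory is never touched
            System.out.println("Write rejected: " + e.getMessage());
        }
    }
}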

How to protect against buffer overflow attacks
Luckily, modern operating systems have runtime protections which help mitigate
buffer overflow attacks. Let’s explore 2 common protections that help mitigate the
risk of exploitation:
Address space randomization - Randomly rearranges the address space locations of
key data areas of a process. Buffer overflow attacks generally rely on knowing the
exact location of important executable code; randomization of address spaces makes
that nearly impossible.
Data execution prevention - Marks certain areas of memory either executable or
non-executable, preventing an exploit from running code found in a non-executable
area.
Software developers can also take precautions against buffer overflow vulnerabilities
by writing in languages that have built-in protections or using special security
procedures in their code.
Despite precautions, new buffer overflow vulnerabilities continue to be discovered by
developers, sometimes in the wake of a successful exploitation. When new
vulnerabilities are discovered, engineers need to patch the affected software and
ensure that users of the software get access to the patch.
What are the different types of buffer overflow attacks?
There are a number of different buffer overflow attacks which employ different
strategies and target different pieces of code. Below are a few of the most well-
known.
Stack overflow attack - This is the most common type of buffer overflow attack and
involves overflowing a buffer on the call stack*.
Heap overflow attack - This type of attack targets data in the open memory pool
known as the heap*.
Integer overflow attack - In an integer overflow, an arithmetic operation results in an
integer (whole number) that is too large for the integer type meant to store it; this
can result in a buffer overflow.
Unicode overflow - A Unicode overflow creates a buffer overflow by inserting
Unicode characters into an input that expects ASCII characters. (ASCII and Unicode are
encoding standards that let computers represent text. For example, the letter ‘a’ is
represented by the number 97 in ASCII. While ASCII codes only cover characters from
Western languages, Unicode can create characters for almost every written language
on earth. Because there are so many more characters available in Unicode, many
Unicode characters are larger than the largest ASCII character.)
*Computers rely on two different memory allocation models, known as the stack and
the heap; both live in the computer’s RAM. The stack is neatly organized and holds
data in a Last-In, First-Out model. Whatever piece of data was most recently placed in
the stack will be the first to come out, kind of like how the last bullet inserted into an
ammunition magazine will be the first to be fired. The heap is a disorganized pool of
extra memory; data does not enter or leave the heap in any particular order. Since
accessing memory from the stack is much faster than accessing from the heap, the
heap is generally reserved for larger pieces of data or data that a programmer wants
to manage explicitly.

As we have explained above, today's technology environments are equipped with
scripting and programming tools as part of their functional environments.
More functionality is provided
in application environments so that these functions can be performed by the
users themselves, instead of having them programmed into the application by
developers. These tools may allow all computer users to create their own utilities
and reusable elements. This can be negative from a security perspective, as users
now have access to very powerful capabilities that may be misused,
because the users are not focused on security and may lack security training. They may
not be aware of the increased risk that results from their increased functionality. If this
type of unsupervised functionality is allowed, then a single user may have complete
control over an application or process. This may violate separation of duties
requirements. Putting powerful tools and capabilities at the user level requires
mitigation of the increased risks that this may pose.

Reference https://blog.insiderattack.net/covert-channels-and-data-exfiltration-
a7c73f01dc8c

The above is a very detailed article which is recommended to be read slowly at least 2-3
times. For the sake of the CISSP exam you only need to understand the definition and how
the attack vector works. But we will cover all of these attacks during network and web
application hacking inshAllah.

A covert channel may be defined as a communication channel that allows processes
to transfer information in such a way as to violate some security policy or requirement.
This is an information flow issue. Even though there are protection mechanisms in
place, if unauthorized information can be transferred using a signaling mechanism
or a storage mechanism, in some way that is not normally intended for
communication, then a covert channel may exist. In simplified terms, it is any flow
of information, unintentional or inadvertent, that enables an unauthorized
observer to have access to the sensitive information. This may allow the observer to
infer more sensitive information than is allowed.

There are two defined types of covert channels, storage and timing.

A storage covert channel involves the direct or indirect writing of a storage location
by one process and a direct or indirect reading of the same storage location by
another process. Typically, a covert storage channel involves memory locations or
sectors on a disk that may be shared by two subjects at different security levels. This
could include hard drive space, cache, or other typically used memory types in
computer architectures.

A timing covert channel depends upon being able to influence the rate or timing
at which some other process is able to acquire a resource. Examples of this may be
the CPU, memory, or I/O devices. The variation in rate may be used to pass signals
that allow more sensitive information to be inferred. Essentially, one process signals
information to another process by modulating its own use of system resources in
such a way that this manipulation affects the real response time observed by the
second process and, therefore, may signal sensitive information.

Timing channels may be very difficult to detect as a result.

Reference https://www.owasp.org/index.php/Testing_for_Input_Validation
Please read the above article as it will provide good background knowledge. We will cover this
attack in a practical manner during our web application hacking inshAllah.

A number of known attacks exist that take input from the user and somehow inject or
modify that input. The known ones may be
detected by various detection systems and can possibly be protected against.
However, new attacks relying on crafting user input in unusual ways may not be
detected. For example, an attack that redirected a web browser to an alternate site
might be caught by a firewall through the detection of the uniform resource locator
(URL) of an inappropriate website. If, however, the URL was expressed in a Unicode
format rather than ASCII, the firewall would likely fail to recognize the content,
whereas the web browser would convert the information without difficulty. Here is
another example. Many websites allow query access to databases but place filters
on the requests to control access as part of access control. When requests using the
Structured Query Language (SQL) are allowed, the use of certain syntactical
structures in the query can fool the filters into seeing the query as a comment
instead of an instruction, and as a result, the resulting query may be submitted to
the database engine and retrieve more information than was intended. In another
instance, a site that allows users to input information for later retrieval by other
users, such as a blog, may fail to detect when such input comes in the form of active
scripting. This is the basis of a very well-known type of attack known as cross-site
scripting. As we’ve seen above, technically, buffer overflows are also a form of
malformed input.
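To make the SQL example above concrete, here is a hedged Java sketch (the table, column and method names are invented for illustration) contrasting a query built by string concatenation, which input such as ' OR '1'='1 can subvert, with a parameterized PreparedStatement that treats the input purely as data:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueryExample {
    // Vulnerable: user input is pasted directly into the SQL text
    static ResultSet findUserUnsafe(Connection conn, String name) throws Exception {
        String sql = "SELECT * FROM users WHERE name = '" + name + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Safer: the driver binds the value separately from the SQL text
    static ResultSet findUserSafe(Connection conn, String name) throws Exception {
        PreparedStatement ps = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        ps.setString(1, name);   // input is treated as data, not as SQL syntax
        return ps.executeQuery();
    }
}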

All architectures use memory to process information and data. Memory
management involves sections of memory allocated to one process for a while,
then de-allocated, then reallocated to other processes. This can include random
access memory (RAM), cache, or simply hard drive space.

The problem from a security perspective is that residual information may
remain when a section of memory is reassigned to a new process after a previous
process is finished with it, and that remaining information may be very
sensitive. The architecture should ensure that memory is zeroed out or
overwritten completely before it is allocated to a new process. As a result,
there should be no sensitive information that remains residually in memory, carrying
over from one process to another. While memory locations are of primary concern
in this regard, developers should also be careful with the reuse of other resources
that can contain sensitive information, such as buffers, disk space, and other shared
resources. Another example of storage that may be very vulnerable to this type of
problem is the paging or swap file on the disk. It is frequently left unprotected and
may contain an enormous amount of sensitive information. Note that this is a
perfect example of a storage covert channel, as discussed earlier.
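A small hedged sketch of the same idea at the application level (our own example, not from the notes): hold a secret in a char array and overwrite it as soon as it is no longer needed, so the sensitive value does not linger in memory waiting to be reused or swapped out.

import java.util.Arrays;

public class SecretHandling {
    static void authenticate(char[] password) {
        try {
            // ... use the password to authenticate ...
        } finally {
            // Overwrite the buffer so the secret does not remain in memory
            Arrays.fill(password, '\0');
        }
    }
}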

Reference https://www.owasp.org/index.php/Unsafe_Mobile_Code

Unsafe Mobile Code
Mobile code, such as a Java Applet, is code that is transmitted across a network and
executed on a remote machine. Because mobile code developers have little if any
control of the environment in which their code will execute, special security
concerns become relevant. One of the biggest environmental threats results from
the risk that the mobile code will run side-by-side with other, potentially malicious,
mobile code. Because all of the popular web browsers execute code from multiple
sources together in the same JVM, many of the security guidelines for mobile code
are focused on preventing manipulation of your objects' state and behavior by
adversaries who have access to the same virtual machine where your program is
running.

Access Violation
The program violates secure coding principles for mobile code by returning a private
array variable from a public access method.
Returning a private array variable from a public access method allows the calling code
to modify the contents of the array, effectively giving the array public access and
contradicting the intentions of the programmer who made it private.
Example
The following Java Applet code mistakenly returns a private array variable from a
public access method.
public final class urlTool extends Applet {
    private URL[] urls;
    public URL[] getURLs() {
        return urls;
    }
    ...
}
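A hedged sketch of the usual remedy (our own addition, not part of the OWASP example): return a defensive copy, so the caller receives a clone of the array rather than a reference to the private one.

public URL[] getURLs() {
    return (urls == null) ? null : urls.clone();   // defensive copy: callers cannot modify the private array
}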
Dangerous Array Declaration
The program violates secure coding principles for mobile code by declaring an array
public, final, and static.
In most cases an array declared public, final, and static is a bug. Because arrays are
mutable objects, the final constraint requires that the array object itself be assigned
only once, but makes no guarantees about the values of the array elements. Since
the array is public, a malicious program can change the values stored in the array. In
most situations the array should be made private.
Example
The following Java Applet code mistakenly declares an array public, final, and static.
public final class urlTool extends Applet {
    public final static URL[] urls;
    ...
}

Dangerous Public Field
The program violates secure coding principles for mobile code by declaring a member
variable public but not final.
All public member variables in an Applet and in classes used by an Applet should be
declared final to prevent an attacker from manipulating or gaining unauthorized
access to the internal state of the Applet.
Example
The following Java Applet code mistakenly declares a member variable public but not
final.
public final class urlTool extends Applet {
    public URL url;
    ...
}

Inner Class
The program violates secure coding principles for mobile code by making use of an
inner class.
Inner classes quietly introduce several security concerns because of the way they are
translated into Java bytecode. In Java source code, it appears that an inner class can
be declared to be accessible only by the enclosing class, but Java bytecode has no
concept of an inner class, so the compiler must transform an inner class declaration
into a peer class with package level access to the original outer class. More
insidiously, since an inner class can access private fields in their enclosing class, once
an inner class becomes a peer class in bytecode, the compiler converts private fields

accessed by the inner class into protected fields.
Example
The following Java Applet code mistakenly makes use of an inner class.
public final class urlTool extends Applet {
    private final class urlHelper {
        ...
    }
    ...
}

Public finalize() Method
The program violates secure coding principles for mobile code by declaring a
finalize()method public.
A program should never call finalize() explicitly, except to call super.finalize() inside an
implementation of finalize(). In mobile code situations, the otherwise error-prone
practice of manual garbage collection can become a security threat if an attacker can
maliciously invoke one of your finalize() methods because it is declared with public
access. If you are using finalize() as it was designed, there is no reason to declare
finalize() with anything other than protected access.
Example
The following Java Applet code mistakenly declares a public finalize() method.
public final class urlTool extends Applet {
    public void finalize() {
        ...
    }
    ...
}

Reference https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use

Time of Check vs. Time of Use (TOCTOU) is a very common type of attack
that occurs when control information changes between the time the system
security functions check the contents of variables and the time the variables
are actually used during operations. Control information is information that is used
to make security decisions. Here is a good example. A user logs onto a
system in the morning and later is dismissed. As a result of the termination, the
security administrator removes the user from the user database and disables the
account. However, because the user did not log off, the account still has access to
the system and, as far as the system is concerned, still has privileges. Here is
another example. A connection between two machines may drop. If an attacker
manages to attach to one of the ports used for this link before the failure is
detected, the invader can hijack the session by pretending to be one of the trusted
hosts. A good way to deal with this would be to force periodic re-authentication on
a regular basis.
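A hedged file-system sketch of the same race condition in Java (the path below is illustrative only): the "check" and the "use" are two separate steps, and an attacker who can swap the file in between defeats the check.

import java.io.File;
import java.nio.file.Files;

public class ToctouDemo {
    public static void main(String[] args) throws Exception {
        File report = new File("/tmp/report.txt");          // illustrative path

        if (report.exists() && !report.isDirectory()) {     // time of check
            // An attacker who replaces /tmp/report.txt with a symlink here
            // wins the race: the check no longer describes what is actually opened.
            byte[] data = Files.readAllBytes(report.toPath()); // time of use
            System.out.println("Read " + data.length + " bytes");
        }
    }
}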

A similar attack to the above is called a between-the-lines entry. This occurs when
the telecommunication lines used by an authorized user are tapped into and data
falsely inserted or injected. To avoid this, the telecommunication lines should be
physically secured so that they cannot be accessed by unauthorized individuals, and
users should not leave telecommunication lines open
when they are finished with them and those lines are no longer being used.

Reference http://www.4threatsremoval.com/what-is-backdoors-top-10-backdoors/

A backdoor, also known as a trapdoor, is a way to access computer software
without getting detected or stopped by security programs installed on the PC. This
term can refer to both the legitimate means of access as well as the applications
that are used for remote attacks by hackers. Legitimate undocumented backdoors
are used in order to allow the administrator to enter a computer system for
troubleshooting, upkeep, or other reasons. Secret portals that are used by cyber
criminals, on the other hand, can allow them to enter computers for malicious
purposes.
Built-in legitimate backdoors are generally used for maintenance. Some of them are
protected by a username and a password. In most cases, users have no idea that
these backdoors exist, and only the makers of the programs are aware of
them. Backdoors were a hot topic back in 2013, when Edward Snowden released
the information about the NSA’s effort to make companies install backdoors into
their software. These backdoors would allow the intelligence agencies to enter
systems unnoticed and access the data necessary to them.

The 10 biggest software backdoors of all time
Back Orifice
The DSL backdoor that wouldn’t die
The PGP full-disk encryption backdoor
Backdoors in pirated copies of commercial WordPress plug-ins
The Joomla plug-in backdoor
The ProFTPD backdoor
The Borland Interbase backdoor
The Linux backdoor (an attempt to insert a subtle backdoor)
The tcpdump backdoor
The Flawed hardening backdoor
Although administrative backdoors also create vulnerabilities in software and
systems, the real trouble comes when hackers install their own backdoors onto the
computer. Cyber criminals can use backdoors in order to access the system without
the user’s notice and steal their personal data as well as drop other malicious
applications onto the PC. Both of these actions can lead to serious consequences. If
cyber crooks gain access to the user’s personal information, this can result in identity
theft or financial losses, while the addition of other malware onto the computer
could result in system damage, data corruption, browsing issues, and more.
What should be a priority for computer users is that they install reliable anti-malware
tools capable of detecting backdoors and eliminating them from the system and keep
these tools updated. This would ensure that they do not have to deal with the above-
mentioned problems. If you think that there may be malicious programs on your PC
already, you should download and implement the malware prevention and removal
tool as soon as you can. It will help you detect backdoors and other threats, clean
your computer, and keep it safeguarded from future attacks.

Reference https://www.owasp.org/index.php/Source_Code_Analysis_Tools

Source Code Analysis Tools
Source code analysis tools, also referred to as Static Application Security Testing
(SAST) Tools, are designed to analyze source code and/or compiled versions of code
to help find security flaws.
Some tools are starting to move into the IDE. For the types of problems that can be
detected during the software development phase itself, this is a powerful phase
within the development life cycle to employ such tools, as it provides immediate
feedback to the developer on issues they might be introducing into the code during
code development itself. This immediate feedback is very useful, especially when
compared to finding vulnerabilities much later in the development cycle.
Strengths and Weaknesses
Strengths
Scales well -- can be run on lots of software, and can be run repeatedly (as with
nightly builds or continuous integration)
Useful for things that such tools can automatically find with high confidence, such
as buffer overflows, SQL Injection Flaws, and so forth

Output is good for developers -- highlights the precise source files, line numbers, and
even subsections of lines that are affected
Weaknesses
Many types of security vulnerabilities are difficult to find automatically, such as
authentication problems, access control issues, insecure use of cryptography, etc. The
current state of the art only allows such tools to automatically find a relatively small
percentage of application security flaws. However, tools of this type are getting
better.
High numbers of false positives.
Frequently can't find configuration issues, since they are not represented in the code.
Difficult to 'prove' that an identified security issue is an actual vulnerability.
Many of these tools have difficulty analyzing code that can't be compiled. Analysts
frequently can't compile code because they don't have the right libraries, all the
compilation instructions, all the code, etc.
Important Selection Criteria
Requirement: Must support your programming language, but not usually a key factor
once it does.
Types of vulnerabilities it can detect (out of the OWASP Top Ten?) (plus more?)
How accurate is it? False Positive/False Negative rates?
Does the tool have an OWASP Benchmark score?
Does it understand the libraries/frameworks you use?
Does it require a fully buildable set of source?
Can it run against binaries instead of source?
Can it be integrated into the developer's IDE?
How hard is it to setup/use?
Can it be run continuously and automatically?
License cost for the tool. (Some are sold per user, per org, per app, per line of code
analyzed. Consulting licenses are frequently different than end user licenses.)

Reference https://www.redhat.com/en/topics/security/api-security

What is API security?
API security is the protection of the integrity of APIs—both the ones you own and
the ones you use. But what does that mean?
Well, you’ve probably heard of the Internet of Things (IoT), where computing power
is embedded in everyday objects. The IoT makes it possible to connect your phone
to your fridge, so that when you stop at the grocery store on the way home you
know exactly what you need for that impromptu dinner party in an hour. Or
maybe you’re part of a DevOps team, using microservices and containers to build
and deploy legacy and cloud-native apps in a fast-paced, iterative way. APIs are one
of the most common ways that microservices and containers communicate, just like
systems and apps. As integration and interconnectivity become more important, so
do APIs.

Why is API security important?
Businesses use APIs to connect services and to transfer data. Broken, exposed, or
hacked APIs are behind major data breaches. They expose sensitive medical,
financial, and personal data for public consumption. That said, not all data is the
same, nor should it all be protected in the same way. How you approach API security will
depend on what kind of data is being transferred.
If your API connects to a third party application, understand how that app is
funneling information back to the internet. To use the example above, maybe you
don’t care if someone finds out what’s in your fridge, but if they use that same API to
track your location you might be more concerned.
What is web API security? REST API security vs. SOAP API security.
Web API security is concerned with the transfer of data through APIs that are
connected to the internet. OAuth (Open Authorization) is the open standard for
access delegation. It enables users to give third-party access to web resources
without having to share passwords. OAuth is the technology standard that lets you
share that Corgi belly flop compilation video onto your social networks with a single
"share" button.

Most API implementations are either REST (Representational State Transfer) or SOAP
(Simple Object Access Protocol).
REST APIs use HTTP and support Transport Layer Security (TLS) encryption. TLS is a
standard that keeps an internet connection private and checks that the data sent
between two systems (a server and a server, or a server and a client) is encrypted and
unmodified. This means that a hacker trying to expose your credit card information
from a shopping website can neither read your data nor modify it. You know if a
website is protected with TLS if the URL begins with "HTTPS" (Hyper Text Transfer
Protocol Secure).
REST APIs also use JavaScript Object Notation (JSON), which is a file format that
makes it easier to transfer data over web browsers. By using HTTP and JSON, REST
APIs don’t need to store or repackage data, making them much faster than SOAP
APIs.

SOAP APIs use built-in protocols known as Web Services Security (WS Security).
These protocols define a rules set that is guided by confidentiality and
authentication. SOAP APIs support standards set by the two major international
standards bodies, the Organization for the Advancement of Structured Information
Standards (OASIS) and the World Wide Web Consortium (W3C). They use a
combination of XML encryption, XML signatures, and SAML tokens to verify
authentication and authorization. In general, SOAP APIs are praised for having more
comprehensive security measures, but they also need more management. For these
reasons, SOAP APIs are recommended for organizations handling sensitive data.
What are some of the most common API security best practices?
You probably don’t keep your savings under your mattress. Most people keep their money
in a trusted environment (the bank) and use separate methods to authorize and
authenticate payments. API security is similar. You need a trusted environment with
policies for authentication and authorization.
Here are some of the most common ways you can strengthen your API security:
Use tokens. Establish trusted identities and then control access to services and
resources by using tokens assigned to those identities.
Use encryption and signatures. Encrypt your data using a method like TLS (see
above). Require signatures to ensure that the right users are decrypting and
modifying your data, and no one else.
Identify vulnerabilities. Keep up with your operating system, network, drivers, and
API components. Know how everything works together and identify weak spots that
could be used to break into your APIs. Use sniffers to detect security issues and track
data leaks.
Use quotas and throttling. Place quotas on how often your API can be called and
track its use over history. More calls on an API may indicate that it is being abused. It
could also be a programming mistake such as calling the API in an endless loop. Make
rules for throttling to protect your APIs from spikes and Denial-of-Service attacks.
Use an API gateway. API gateways act as the major point of enforcement for API
traffic. A good gateway will allow you to authenticate traffic as well as control and
analyze how your APIs are used.
API management and security
Finally, API security often comes down to good API management. Many API
management platforms support three types of security schemes. These are:
An API key that is a single token string (i.e. a small hardware device that provides
unique authentication information).
Basic Authentication (APP ID / APP Key) that is a two token string solution (i.e.
username and password).
OpenID Connect (OIDC) that is a simple identity layer on top of the popular OAuth
framework (i.e. it verifies the user by obtaining basic profile information and using an
authentication server).
When you select an API manager know which and how many of these security
schemes it can handle, and have a plan for how you can incorporate the API security
practices outlined above.
Why choose Red Hat for API management and security
Data breaches are scary, but you can take steps toward better security. APIs are worth
the effort, you just need to know what to look for. A lot of it comes down
to continuous security measures, asking the right questions, knowing which areas
need attention, and using an API manager that you can trust. We are here to help.
At Red Hat, we recommend our award-winning Red Hat 3scale API Management. It
includes:
An API manager which manages the API, applications, and developer roles

A traffic manager (an API gateway) that enforces the policies from the API manager
An identity provider (IDP) hub that supports a wide range of authentication protocols
At the API gateway, Red Hat 3scale API Management decodes timestamped tokens
that expire; checks that the client identification is valid; and confirms the signature
using a public key.

Reference https://searchapparchitecture.techtarget.com/definition/REST-REpresentational-
State-Transfer

REST (REpresentational State Transfer)
REST (REpresentational State Transfer) is an architectural style for developing web
services. REST is popular due to its simplicity and the fact that it builds upon
existing systems and features of the internet's Hypertext Transfer Protocol (HTTP) in
order to achieve its objectives, as opposed to creating new standards, frameworks
and technologies.
Advantages of REST
A primary benefit of using REST, both from a client's and a server's perspective, is that REST-
based interactions happen using constructs that are familiar to anyone who is
accustomed to using the internet's HTTP.
An example of this arrangement is that REST-based interactions all communicate their
status using standard HTTP status codes. So, a 404 means a requested resource
wasn't found; a 401 code means the request wasn't authorized; a 200 code means
everything is OK; and a 500 means there was an unrecoverable application error on
the server.

Similarly, details such as encryption and data transport integrity are solved not by
adding new frameworks or technologies, but instead by relying on well-known Secure
Sockets Layer (SSL) encryption and Transport Layer Security (TLS). So, the entire REST
architecture is built upon concepts with which most developers are already familiar.

REST is also a language-independent architectural style. REST-based applications can
be written using any language, be it Java, Kotlin, .NET, AngularJS or JavaScript. As long
as a programming language can make web-based requests using HTTP, it is possible
for that language to be used to invoke a RESTful API or web service. Similarly, RESTful
web services can be written using any language, so developers tasked with
implementing such services can choose technologies that work best for their
situation.

The other benefit of using REST is its pervasiveness. On the server side, there are a
variety of REST-based frameworks for helping developers create RESTful web services,
including Restlet and Apache CXF. From the client side, the new JavaScript
frameworks, such as jQuery, Node.js, Angular and EmberJS, all have standard libraries
built into their APIs that make invoking RESTful web services and consuming the XML-
or JSON-based data they return a relatively straightforward endeavor.

Disadvantages of REST
The benefit of REST using HTTP constructs also creates restrictions, however. Many of
the limitations of HTTP likewise turn into shortcomings of the REST architectural
style. For example, HTTP does not store state-based information between request-
response cycles, which means REST-based applications must be stateless and any
state management tasks must be performed by the client.
Similarly, since HTTP doesn't have any mechanism to send push notifications from
the server to the client, it is difficult to implement any type of services where the
server updates the client without the use of client-side polling of the server or some
other type of web hook.
From an implementation standpoint, a common problem with REST is the fact that
developers disagree about exactly what it means to be REST-based. Some software
developers incorrectly consider anything that isn't SOAP-based to be RESTful. Driving
this common misconception about REST is the fact that it is an architectural style, so
there is no reference implementation or definitive standard that will confirm whether
a given design is RESTful. As a result, there is debate as to whether a given API
conforms to REST-based principles.
Alternatives to REST
Alternate technologies for creating SOA-based systems or creating APIs for invoking
remote microservices include XML over HTTP (XML-RPC), CORBA, RMI over IIOP and
the Simple Object Access Protocol (SOAP).
Each technology has its own set of benefits and drawbacks, but the compelling
feature of REST that sets it apart is the fact that, rather than asking a developer to
work with a set of custom protocols or to create a special data format for exchanging
messages between a client and a server, REST insists the best way to implement a
network-based web service is to simply use the basic construct of the network
protocol itself, which in the case of the internet is HTTP.
This is an important point, as REST is not intended to apply just to the internet;
rather, its principles are intended to apply to all protocols,
including WEBDAV and FTP.
REST vs. SOAP
The two competing styles for implementing web services are REST and SOAP. The
fundamental difference between the two is the philosophical approach each takes
to remote invocations.
REST takes a resource-based approach to web-based interactions. With REST, you
locate a resource on the server, and you choose to either update that resource,
delete it or get some information about it.
With SOAP, the client doesn't interact directly with a resource, but instead
calls a service, and that service mediates access to the various objects and resources
behind the scenes.
SOAP has also built a large number of frameworks and APIs on top of HTTP, including
the Web Services Description Language (WSDL), which defines the structure of data
that gets passed back and forth between the client and the server.
Some problem domains are served well by the ability to stringently define the
message format or can benefit from using various SOAP-related APIs, such as WS-
Eventing, WS-Notification and WS-Security. There are times when HTTP cannot
provide the level of functionality an application might require, and in these cases,
using SOAP is preferable.
REST URIs and URLs
Most people are familiar with the way URLs and URIs work on the web. A RESTful
approach to developing applications asserts that requesting information about a
resource should be as simple as invoking its URL.
For example, if a client wanted to invoke a web service that listed all of the quizzes
available here at TechTarget, the URL to the web service would look something like
this:
www.techtarget.com/restfulapi/quizzes
When invoked, the web service might respond with the following JSON string listing
all of the available quizzes, one of which is about DevOps:
{ "quizzes" : [ "Java", "DevOps", "IoT"] }
To get the DevOps quiz, the web service might be called using the following URL:

www.techtarget.com/restfulapi/quizzes/DevOps
Invoking this URL would return a JSON string listing all of the questions in the DevOps
quiz. To get an individual question from the quiz, the number of the question would
be added to the URL. So, to get the third question in the DevOps quiz, the following
RESTful URL would be used:
www.techtarget.com/restfulapi/quizzes/DevOps/3
Invoking that URL might return a JSON string such as the following:
{ "Question" : {"query":"What is your DevOps role?", "optionA":"Dev",
"optionB":"Ops"} }
As you can see, the REST URLs in this example are structured in a logical and
meaningful way that identifies the exact resource being requested.
JSON and XML REST data formats
The example above shows JSON used as the data exchange format for the RESTful
interaction. The two most common data exchange formats are JSON and XML, and
many RESTful web services can use both formats interchangeably, as long as the
client can request the interaction to happen in either XML or JSON.
Note that while JSON and XML are popular data exchange formats, REST itself does
not put any restrictions on what the format should be. In fact, some RESTful web
services exchange binary data for the sake of efficiency. This is another benefit to
working with REST-based web services, as the software architect is given a great deal
of freedom in terms of how best to implement a service.
REST and the HTTP methods
The example above only dealt with accessing data.
The default operation of HTTP is GET, which is intended to be used when getting data
from the server. However, HTTP defines a number of other methods, including PUT,
POST and DELETE.
The REST philosophy asserts that to delete something on the server, you would
simply use the URL for the resource and specify the DELETE method of HTTP. For
saving data to the server, a URL and the PUT method would be used. For operations
that are more involved than simply saving, reading or deleting information, the POST
method of HTTP can be used.
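As a hedged illustration of invoking such a RESTful URL from Java (using the standard java.net.http client available since Java 11; the TechTarget URL is taken from the example above and may not actually serve this API), a simple GET request identifies the resource by its URL and reads back the status code and JSON body:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestGetExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // A simple GET: the URL alone identifies the resource being requested
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.techtarget.com/restfulapi/quizzes/DevOps/3"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode()); // e.g. 200, 404, 401
        System.out.println("Body:   " + response.body());       // JSON string returned by the service
    }
}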
History of REST
REST was first coined by computer scientist Roy Fielding in his year-2000 Ph.D.
dissertation at the University of California, Irvine, titled Architectural Styles and the Design
of Network-based Software Architectures.
Chapter 5 of the dissertation, "Representational State Transfer (REST)," described
Fielding's beliefs about how best to architect distributed hypermedia systems.
Fielding noted a number of boundary conditions that describe how REST-based
systems should behave. These conditions are referred to as REST constraints, with
four of the key constraints described below:
Use of a uniform interface (UI). As stated earlier, resources in REST-based systems
should be uniquely identifiable through a single URL, and only by using the
underlying methods of the network protocol, such as DELETE, PUT and GET with
HTTP, should it be possible to manipulate a resource.
Client-server-based. In a REST-based system, there should be a clear delineation
between the client and the server. UI and request-generating concerns are the
domain of the client. Meanwhile, data access, workload management and security
are the domain of the server. This separation allows loose coupling between the
client and the server, and each can be developed and enhanced independent of the
other.
Stateless operations. All client-server operations should be stateless, and any state
management that is required should happen on the client, not the server.
RESTful resource caching. The ability to cache resources between client invocations is
a priority in order to reduce latency and improve performance. As a result, all
resources should allow caching unless an explicit indication is made that it is not
possible.
Developing REST APIs in Java
To accommodate the growing popularity of REST-based systems, a number of
frameworks have arisen to assist developers in the creation of RESTful web services.
Some of the more popular open source frameworks for creating Java-based, RESTful
web services include Apache CXF, Jersey, Restlet, Apache Wink, Spring Data and
JBoss' RESTeasy.
The general approach of each of these frameworks is to help developers build
RESTful web services using semantics with which Java developers are familiar,
including Java Platform (Enterprise Edition), the Servlet API and annotations, while at
the same time, offering built-in classes and methods that make it easier to keep in
line with the basic tenets of REST.
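As a rough illustration of that approach, the sketch below shows a minimal JAX-RS-style resource class of the kind these frameworks can host; the class name, paths and hard-coded JSON strings are hypothetical and kept deliberately simple.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A minimal JAX-RS resource; frameworks such as Jersey, Apache CXF or RESTeasy
// can host a class like this and map the annotations to HTTP routing.
@Path("/quizzes")
public class QuizResource {

    // GET /quizzes -> list of quiz names as JSON
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String listQuizzes() {
        return "{ \"quizzes\" : [ \"Java\", \"DevOps\", \"IoT\"] }";
    }

    // GET /quizzes/{name}/{number} -> a single question from a quiz
    @GET
    @Path("/{name}/{number}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getQuestion(@PathParam("name") String name,
                              @PathParam("number") int number) {
        // In a real service this would be looked up from a data store.
        return "{ \"Question\" : {\"query\":\"What is your " + name + " role?\"} }";
    }
}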
REST and the IoT
Given the near ubiquity of REST APIs and the explosive growth in the number of devices on the
Internet of Things (IoT), the two seem to be a perfect pairing. Compact data formats such as
JSON, EXI (Efficient XML Interchange) and CBOR (Concise Binary Object Representation, a binary
offshoot of JSON) are used, and RESTful APIs are likewise compact.
Basic authentication is the easiest of the three common API authentication protocols
(basic authentication, OAuth 1.0a and OAuth 2) to implement, because most of the
time it requires no additional libraries; everything needed is usually included in the
standard framework or language library. The problem with basic authentication is that
it is exactly that, basic: it offers the lowest level of security of the common protocols
and has no advanced options, so depending on the requirements it may not be enough.
The recommendation is that basic authentication should never be used without
Transport Layer Security (TLS, the successor to SSL) encryption, because otherwise the
username and password combination can easily be recovered from the traffic.
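To make clear why TLS is essential, the following Java sketch shows what basic authentication actually sends: the credentials are merely Base64-encoded into the Authorization header, not encrypted. The username, password and endpoint are hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical credentials and endpoint, for illustration only.
        String username = "apiuser";
        String password = "s3cret";

        // Basic auth is just "username:password" Base64-encoded -- it is encoding,
        // not encryption, which is why TLS (an https:// URL) is essential.
        String token = Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/protected"))
                .header("Authorization", "Basic " + token)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}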
Reference https://oauth.net/core/1.0a/

For the CISSP exam you only need to understand how OAuth 1.0 works at a high level, but if you
want to become a network and web application pentester, then you should read the above link to
get a clear understanding of how OAuth really works. Remember that every protocol considered
secure today can be compromised, or is simply waiting to be compromised, and that is the mindset
we want you to think and strive for.

OAuth 1.0a is the most secure of the three common protocols. The protocol uses a
cryptographic signature, usually an HMAC-SHA1 value, that combines the token
secret, nonce, and other request-based security information. The great advantage
of OAuth 1.0a is that the token secret itself is never sent across the wire, which
eliminates the possibility of anyone seeing the password while in transit. It is the
only one of the three protocols that can be used safely without TLS, although the
recommendation is still to use TLS whenever the sensitivity of the information being
transferred warrants it. As with any increase in security, however, there is a price:
generating and validating signatures is a more complex process that requires specific
algorithms and a considerable set of procedures to be followed. In practice this burden
has largely disappeared, because every major programming language now has a
library to handle this type of work.
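The sketch below shows, in simplified form, the kind of HMAC-SHA1 computation involved: the signature is derived from a signature base string and a key built from the consumer secret and token secret, so only the signature travels over the wire. The values are hypothetical, and the full OAuth 1.0a rules for percent-encoding and normalizing request parameters are omitted here.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class OAuth1SignatureSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; a real OAuth 1.0a signature base string is built by
        // percent-encoding and sorting all request parameters as defined in the spec.
        String signatureBaseString = "GET&https%3A%2F%2Fapi.example.com%2Fresource&oauth_nonce%3Dabc123";
        String consumerSecret = "consumer-secret";
        String tokenSecret = "token-secret";

        // The HMAC-SHA1 key combines the consumer secret and token secret; the token
        // secret itself never travels over the wire, only the resulting signature does.
        String key = consumerSecret + "&" + tokenSecret;

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] digest = mac.doFinal(signatureBaseString.getBytes(StandardCharsets.UTF_8));

        String oauthSignature = Base64.getEncoder().encodeToString(digest);
        System.out.println("oauth_signature=" + oauthSignature);
    }
}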
OAuth 2 is the next evolution of the protocol discussed above. Its current
specification removes signatures, so there is no requirement to use cryptographic
algorithms to create and validate them; all transport protection is instead handled
by TLS, which the specification makes mandatory. A drawback is that there may not
be as many OAuth 2 libraries as there are OAuth 1.0a libraries, so integrating the
protocol into your API may be more challenging, although this is changing rapidly.
Other options that can also be useful include a key-management solution such as
the Key Management Interoperability Protocol (KMIP) V1.1, and client certificates.
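By way of contrast with OAuth 1.0a, the hedged sketch below shows the shape of an OAuth 2 bearer-token request: no signature is computed, the previously obtained access token is simply attached to the request, and TLS is relied upon to protect it in transit. The token value and endpoint are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OAuth2BearerSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical token and endpoint. In OAuth 2 the bearer token is obtained
        // from an authorization server beforehand, and only TLS protects it in
        // transit, which is why the specification requires https.
        String accessToken = "example-access-token";   // placeholder value

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/resource"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}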
Reference https://www.owasp.org/index.php/REST_Security_Cheat_Sheet

Reading the above link is recommended. We will cover all the key details during
our network and web application hacking classes inshAllah.
Secure coding practices refer to developing software with a focus on protecting
against known, and possible, vulnerabilities in the environment the application will
be running in. Many vulnerabilities exist, and addressing them all would be
impossible; however, if development follows a culture of focusing on security, many
of them can be effectively mitigated. The requirement is for these secure coding
practices to be integrated into the SDLC, and coding mitigating controls as the
application is being written is an effective way of dealing with many vulnerabilities.
Reference https://www.sciencedirect.com/topics/computer-science/trusted-computing-
base

Read the above research paper for added knowledge; otherwise, the paragraphs below are
good enough. We still recommend reading the paper so that you develop the mindset to
think like an expert inshAllah.

The trusted computing base (TCB) is the collection of all the hardware, software,
and firmware components within an architecture that are specifically responsible
for security; it contains all of the elements of the system responsible for supporting
the security policy and the isolation of objects. The term is usually associated with
security kernels and the reference monitor. When designed and coded properly, all
of the security features within a system become the TCB and can therefore support
adequate security requirements. If designed and developed properly, the TCB can
provide a trusted path (a secure method of gaining access) and a trusted shell (an
environment in which the supporting security is itself secure). The trusted path is a
communication channel between the user or program and the TCB, and the TCB is
responsible for providing the protection mechanisms necessary to ensure that the
trusted path cannot be compromised in any way. The trusted shell implies that any
activity taking place within the shell, or communication channel, is isolated to that
channel and cannot be interacted with, from inside or outside, by an untrusted party
or entity.
The security kernel, as mentioned above, is the implementation of the reference
monitor concept. It is made up of the hardware, software, and firmware components
of the TCB, and it is responsible for implementing and enforcing the reference
monitor concept and the security policy; it must be a strict implementation of a
reference monitor mechanism. The architecture of a kernel-based operating system is
typically layered, and the kernel should sit at the lowest and most primitive level. It is
a small portion of the operating system through which all references to information
and all changes to authorizations must pass. The kernel implements access control
and information flow control between objects according to the security policy. To be
implemented properly and securely, the security kernel must meet three fundamental
requirements:

• Completeness: All accesses to information must go through the kernel.


• Isolation: The kernel itself must be protected from any type of unauthorized
access.
• Verifiability: The kernel must be proven to meet design specifications.
To address confidence and assurance of security capabilities of the components that
make up the TCB, there are various measurement systems that can be used to verify
the level of security capabilities. These measurement systems are called evaluation
criteria. A number of them exist such as the Trusted Computer System Evaluation
Criteria (TCSEC) and the current Common Criteria standards.
Read the article for added knowledge https://en.wikipedia.org/wiki/Process_state

It is highly recommended, as we will be covering this topic in greater detail during stages 2
and 3 of the CODS, in pentesting, forensics and threat hunting inshAllah. Especially after the
recent compromises and hacks of Intel CPUs, we will be working extensively in various projects
to exploit processor privilege states in the CODS advanced track inshAllah.

Limiting a processor so that it can only perform certain activities can be used as a
security control; this is referred to as privilege states. Processor privilege states
protect the processor and the activities it is allowed to perform. The earliest method
of doing this was to record the processor state in a register that could only be altered
when the processor was operating in a privileged state. Instructions such as I/O
requests were designed to include a reference to this register, and if the register did
not indicate a privileged state, the instructions were aborted and not performed.

The hardware itself typically controls entry into the privilege mode. For example,
there are certain newer processors that prevent system code and data from being
overwritten. The idea is to have the privilege level mechanism prevent memory
access by programs or data from less privileged to more privileged levels, but only if
the controls are invoked and properly managed in software. In other words,
hardware and software can work together to allow privileged access through
processor states. The privileged levels are typically referenced in a ring architecture.

As an example, many operating systems use two processor access modes:


• User state, or sometimes referred to as problem state
• Supervisor state, or sometimes referred to as kernel state

Normal user applications should run in user mode, and operating system functions
run in supervisor mode. The privileged processor mode is called kernel mode; kernel
mode gives the processor access to all system memory, resources, and CPU
instructions. Applications should run in a non-privileged mode, referred to as user
(or problem) state, with a limited set of capabilities, limited access to system data,
and no direct access to hardware resources. The advantage of this architecture is
that problematic application software cannot disrupt the system's ability to function
properly. One of the major challenges of modern processing is that operating systems
and applications might run most efficiently if they could run in supervisor or kernel
mode at all times, but this would undermine the isolation the modes provide; instead,
the processor switches between modes as needed. For example, when a user mode
program calls a system service, such as reading a document from storage, the
processor intercepts the call and switches the calling request to supervisor mode;
when the operation is complete, the operating system switches the mode back to
user mode and allows the user mode program to continue. Earlier, we mentioned
that many of these architectures are set up as a ring architecture. Under the most
secure operating policy, the operating system and device drivers operate at ring
level 0, also known as kernel-level or system-level privilege. At this privilege level,
there are no restrictions on what a program can do. Because programs at this level
have unlimited access, security professionals should be concerned about the source
of device drivers for machines that contain sensitive information. Applications and
services should operate at ring level 3, also known as user-level or application-level
privilege. Because operating system code runs in kernel mode, it is critical that kernel
mode components be carefully designed to ensure they do not violate security
features. For example, if a system administrator installs a third-party device driver, it
operates in kernel mode and then has access to all operating system data. This is why
it is important to understand the security ramifications of this type of architecture: if
the device driver installation software also contains malicious code, that code will
also be installed and could open the system to unauthorized access as a result.
A common problem with technology architectures is the buffer overflow. This occurs
when an application is fed more data than its buffer can handle. The root cause is
inadequate bounds checking, or ineffective parameter checking, on input to the
application: essentially, the program fails to check whether too much data is being
provided for an allocated space of memory, referred to as a buffer. In order to run,
programs need to be loaded into memory, and if there is an overflow the excess data
has to go somewhere. If the attack has been crafted carefully, that data could be
malicious code that gets loaded and then runs as if it were the program itself,
allowing exploitation by an attacker. Buffer overflows must be corrected by
developers or by applying patches. Sometimes they may be detected by reverse
engineering (disassembling) the application and looking at its actual operations. The
fix for buffer overflows is to patch known buffer overflow conditions and to enforce
proper bounds checking and, in some cases, proper error checking.
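The short Java sketch below illustrates the bounds-checking discipline just described: untrusted input is only copied into a fixed-size buffer after an explicit length check. Java's runtime would throw an exception rather than silently overwrite adjacent memory, but in memory-unsafe languages this is exactly the check whose absence creates the overflow. The buffer size and method names are illustrative.

import java.util.Arrays;

public class BoundsCheckSketch {
    private static final int BUFFER_SIZE = 64;           // fixed-size buffer, as in the discussion above
    private final byte[] buffer = new byte[BUFFER_SIZE];

    // Copy untrusted input into the buffer only after an explicit bounds check.
    // In a memory-unsafe language, skipping this check is what allows the overflow;
    // in Java the runtime would throw an exception rather than corrupt memory,
    // but the defensive pattern is the same.
    public void acceptInput(byte[] untrustedInput) {
        if (untrustedInput == null || untrustedInput.length > BUFFER_SIZE) {
            throw new IllegalArgumentException(
                "Input rejected: expected at most " + BUFFER_SIZE + " bytes");
        }
        Arrays.fill(buffer, (byte) 0);                    // clear any previous contents
        System.arraycopy(untrustedInput, 0, buffer, 0, untrustedInput.length);
    }
}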
Another security risk exists when parameters, such as input, have not been fully
checked for accuracy and consistency by the system. This lack of parameter
checking can lead to many attacks, including buffer overflow attacks. To counter this
vulnerability, systems can include some type of buffer bounds control. Complete
and effective parameter checking must be designed, coded, and implemented by the
developers and involves checking input data to make sure the program does not
accept unwanted characters, lengths, data types, or formats. This is often referred to
as proper input data validation.
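A minimal input-validation sketch along those lines is shown below; the validation rules (username pattern, port range) are hypothetical examples of checking characters, length, data type and range before the data is used.

import java.util.regex.Pattern;

public class InputValidationSketch {
    // Hypothetical rule: a username of 3-20 letters or digits only.
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9]{3,20}$");

    public static String validateUsername(String input) {
        if (input == null || !USERNAME.matcher(input).matches()) {
            throw new IllegalArgumentException("Invalid username: unexpected characters or length");
        }
        return input;
    }

    public static int validatePort(String input) {
        int port;
        try {
            port = Integer.parseInt(input);               // data-type check
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Port must be numeric");
        }
        if (port < 1 || port > 65535) {                   // range check
            throw new IllegalArgumentException("Port out of range");
        }
        return port;
    }
}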
Computer architectures today are multitasking, meaning they can host multiple
processes running at the same time. A process is a program, or part of a program,
that is being executed in memory. For these different processes to run concurrently,
they must be managed in such a way that each can access the resources it needs
without impacting any of the other running processes. This can become very
complicated, as processes can share memory, data, and system resources; in other
words, they may all be contending for system resources at the same time while
trying to complete their tasks.

To maintain the integrity of the operating system, of each of the processes, and of
the data being accessed, it is important that access to resources is managed properly
at all times. This requires the processes to be isolated from each other, which, very
appropriately, is called process isolation. This isolation has to be enforced effectively
and thoroughly, without exceptions. The operating system takes care of process
isolation, but it partners with the CPU to enforce it through the use of interrupts and
time slicing.
Reference https://www.renesas.com/us/en/support/technical-resources/engineer-
school/mcu-programming-peripherals-04-interrupts.html

What Is an Interrupt?
We introduced the concept of interrupts in the second session of this series, in our
discussion of timers. Consider now a similar analogy to illustrate how things might
work without an interrupt: if you are boiling eggs, and you want to take them off
the stove in 10 minutes, then one way to do it is to check the clock every now and
then to see if the time is up. The same is true in embedded systems: if you want to
wait for a specific state change to occur before doing some operation, then one way
to do this is to periodically check the state. Or again, if your program is waiting for a
GPIO input level to change from 0 to 1 before executing some step, then one way
to proceed is to periodically check the GPIO value. This approach—periodic
checking—is referred to as polling.
While polling is a simple way to check for state changes, there's a cost. If the
checking interval is too long, there can be a long lag between occurrence and
detection—and you may miss the change completely, if the state changes back
before you check. A shorter interval will get faster and more reliable detection, but
also consumes much more processing time and power, since many more checks will
come back negative.
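A simple software analogue of polling is sketched below: a condition is re-checked on a fixed interval, and the choice of interval is exactly the latency-versus-CPU trade-off described above. The class, method and parameter names are illustrative.

import java.util.function.BooleanSupplier;

public class PollingSketch {
    // Polls a condition (e.g. "has the GPIO level changed to 1?") every intervalMillis.
    // A long interval risks lagging behind or missing the change; a short interval
    // burns CPU time on checks that mostly come back negative -- the trade-off
    // described above, and the reason interrupts are usually preferred.
    public static void waitFor(BooleanSupplier stateChanged, long intervalMillis)
            throws InterruptedException {
        while (!stateChanged.getAsBoolean()) {
            Thread.sleep(intervalMillis);   // do nothing until the next check
        }
        // State change detected: the dependent operation would continue here.
    }
}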

An alternative approach is to utilize interrupts. With this method, the state change
generates an interrupt signal that causes the CPU to suspend its current operation
(and save its current state), then execute the processing associated with the
interrupt, and then restore its previous state and resume where it left off.

How an MCU Processes Interrupts


Interrupts can originate from both MCU-internal and MCU-external devices. An
interrupt from an external switch or sensor, for example, is sometimes called
"attached interrupt", as it is generated by an external device that is attached to an
IRQ (interrupt request) pin on the MCU. When the relevant state change occurs, the
external device sends an interrupt request signal to this pin, and this in turn
generates a notification to the MCU's interrupt controller (on the RX63N, this
controller is called the "ICUb").
In contrast, interrupts from on-chip peripherals—internal timers, GPIO lines, UARTs,
etc.—are referred to as "peripheral interrupts." These interrupt signals generate
direct notification to the interrupt controller, with no need for pin attachments.
The interrupt controller's job is to pass these interrupt requests to the CPU in a
coordinated way. When multiple interrupts occur, the controller must send these to
the CPU in the appropriate order, based on their relative priorities. And the controller
must also be aware of which interrupts are currently masked (disabled), so that it can
ignore these interruptions completely.

When the CPU receives an interrupt request from the controller, it stops executing
the program it is working on, and automatically saves all of the relevant working
information so that it can later resume from where it left off. It then loads and
executes the interrupt processing program that corresponds with the interrupt
request that it received. After completing this processing, the CPU restores the saved
information and resumes from where it stopped. (See Figure 2) Note that saving and
resuming are handled automatically by the CPU; programmers need not concern
themselves with these details.
Encapsulating a process means isolating it so that no other process is able to see,
understand, or interact with its internal functions. Encapsulation forces processes to
interact with each other through well-defined interfaces that can be properly
overseen and managed by the operating system. It effectively hides the process and
its functions from other processes, thereby enabling data hiding. Data hiding is
exactly what it sounds like: hiding data from other processes so that the processes
running at the same time do not interfere with each other.
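The same idea can be illustrated at the programming-language level, as a loose analogy: in the sketch below the internal state is private (hidden) and can only be reached through a small, well-defined interface, so other code cannot interfere with it directly. The class is hypothetical.

// A language-level analogy to the process encapsulation described above: the
// internal state is hidden (private) and only reachable through a well-defined
// interface, so other code cannot manipulate it directly.
public class EncapsulatedCounter {
    private long count;                       // hidden data -- not visible externally

    public synchronized void increment() {    // the only sanctioned way to change state
        count++;
    }

    public synchronized long current() {      // controlled, read-only view of the state
        return count;
    }
}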
Reference https://cs.nyu.edu/courses/spring09/V22.0202-002/lectures/lecture-02.html

Please read the research paper via the above url to have a good understanding.

Time multiplexing allows the operating system to provide structured access by
processes to resources according to a controlled and tightly managed time
schedule. The schedule is divided into short periods of time, or time slices; a time
slice grants a process access to the system resources it requires and then terminates
that access once the time period has expired, at which point the resource becomes
available to another process, again for a time slice. The multitasking and
multi-processor architectures common today add performance but also complexity
with regard to time slicing, or multiplexing. Because a computer can have more than
one processor, and each CPU can have more than one core, the computer's ability to
process multiple simultaneous requests for access to resources from processes
continues to increase, and therefore needs to be managed properly. This is referred
to as multitasking.
Naming distinctions are used to ensure that each process is assigned a unique
identity within the context of the operating system and its architecture. This means
that each process will be given a unique name and Process ID, or PID, ensuring that
when it is referenced by the operating system, there is no confusion as to which
process is being accessed by which resources. This allows all processes to be
referenced properly as they execute their tasks.
Reference https://en.wikipedia.org/wiki/Virtual_memory

In computing, virtual memory (also virtual storage) is a memory
management technique that provides an "idealized abstraction of the storage
resources that are actually available on a given machine"[1] which "creates the
illusion to users of a very large (main) memory."[2]
The computer's operating system, using a combination of hardware and software,
maps memory addresses used by a program, called virtual addresses, into physical
addresses in computer memory. Main storage, as seen by a process or task, appears
as a contiguous address space or collection of contiguous segments. The operating
system manages virtual address spaces and the assignment of real memory to
virtual memory. Address translation hardware in the CPU, often referred to as
a memory management unit or MMU, automatically translates virtual addresses to
physical addresses. Software within the operating system may extend these
capabilities to provide a virtual address space that can exceed the capacity of real
memory and thus reference more memory than is physically present in the
computer.
The primary benefits of virtual memory include freeing applications from having to
manage a shared memory space, increased security due to memory isolation, and
being able to conceptually use more memory than might be physically available,
using the technique of paging.
Reference https://www.enterprisestorageforum.com/storage-hardware/memory-
management.html

Since the beginning of modern IT, the challenge of memory exhaustion has been
handled by a diverse set of capabilities, usually grouped under the heading of
memory management.

The reality of most compute and storage deployments is that all types of computer
memory are constrained by an upper limit.

No resource on a modern system is perhaps as constrained as memory, which is
always needed by operating systems, applications and storage. Without unlimited
memory, at some point memory is fully consumed, which leads to system instability
or data loss.
What Is Memory Management?
Memory management is all about making sure there is as much available memory
space as possible for new programs, data and processes to execute. As memory is
used by multiple parts of a modern system, memory allocation and memory
management can take on different forms.

Operating System - Operating systems like Microsoft Windows and Linux, can make
use of physical RAM as well as hard drive swap space to manage a total pool of
available memory.
Programming Languages - The C programming language requires developers to
directly manage memory utilization, while other languages like Java and C#, for
example, provide automatic memory management.
Applications - Applications consume and manage memory, but are often limited in
memory management capabilities as defined by the underlying language and
operating system.
Storage memory management - With new NVME storage drives, operating systems
can benefit from faster storage drives to help expand and enable more persistent
forms of memory management.
To be effective, a computer's memory management function must sit between the
hardware and the operating system.

How Memory Management Works


Memory management is all about allocation and optimization of finite physical
resources. Memory is not uniform - for example a 2GB RAM DIMM is not used as one
large chunk of space. Rather memory allocation techniques are used for
segmentation of RAM into usable blocks of memory cache.

Memory management strategies within an operating system or application typically
involve an understanding of what physical address space is available in RAM and
performing memory allocation to properly place, move and remove processes from
memory address space.

Types of Memory Addresses


Static and dynamic memory allocation in an operating system is linked to different
types of memory addresses. Fundamentally, there are two core types of memory
addresses:
Physical addresses - The physical address is the memory location within system RAM
and is identified as a set of digits.
Logical addresses - Also sometimes referred to as virtual memory, a logical address is
what operating systems and applications access to execute code, as an abstraction of
physical address space.

How does MMU convert virtual address to physical address?


The Memory Management Unit (MMU) within a computing system is the core
hardware component that translates virtual logical address space to physical
addresses. The MMU is typically a physical piece of hardware and is sometimes
referred to as a Paged Memory Management Unit (PMMU).
The process by which the MMU converts a virtual address to a physical address is
referred to as virtual address translation and makes use of a Page Directory Pointer
Table (PDPT) to convert one address type to another.

The process is directly tied to page table allocation, matching and managing one
address type to another. To help accelerate virtual address translation there is a
caching mechanism known as the Translation Lookaside Buffer (TLB) which is also
part of the virtual address to physical address translation process.
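As a simplified illustration of that translation, the sketch below splits a virtual address into a page number and an offset, looks the page up in a toy page table, and rebuilds the physical address. The 4 KiB page size and the single mapping are assumptions, and a real MMU performs this in hardware with multi-level page tables and a TLB.

import java.util.HashMap;
import java.util.Map;

public class AddressTranslationSketch {
    // Illustrative only: 4 KiB pages, with the page table modelled as a simple map
    // from virtual page number to physical frame number.
    private static final int PAGE_SIZE = 4096;
    private static final Map<Long, Long> PAGE_TABLE = new HashMap<>();
    static {
        PAGE_TABLE.put(5L, 42L);   // hypothetical mapping: virtual page 5 -> physical frame 42
    }

    public static long translate(long virtualAddress) {
        long pageNumber = virtualAddress / PAGE_SIZE;   // which page the address falls in
        long offset = virtualAddress % PAGE_SIZE;       // position inside that page
        Long frame = PAGE_TABLE.get(pageNumber);
        if (frame == null) {
            throw new IllegalStateException("Page fault: page " + pageNumber + " is not resident");
        }
        return frame * PAGE_SIZE + offset;              // physical address
    }

    public static void main(String[] args) {
        // Virtual address 0x5123 = page 5, offset 0x123 -> frame 42, same offset.
        System.out.println(Long.toHexString(translate(0x5123L)));   // prints 2a123
    }
}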

Memory Allocation: Static vs. Dynamic Loading


Applications and data can be loaded into memory in a number of different ways, with
the two core approaches being static and dynamic loading.
Static Loading - Code is loaded into memory, before it is executed. Used in structured
programming languages including C.
Dynamic Loading - Code is loaded into memory as needed. Used in object oriented
programming languages, such as Java.
Memory Fragmentation
When memory is allocated in a system, not all of the available memory is consumed in
a linear manner, which can lead to fragmentation. There are two core types of
memory fragmentation, internal and external:
Internal Fragmentation - Memory is allocated to a process or application and isn't
used, leaving un-allocated or fragmented memory.
External Fragmentation - As memory is allocated and then deallocated, there can be
small spaces of memory left over, leaving memory holes or "fragments" that aren't
suitable for other processes.
Paging
Within the logical address space, virtual memory is divided up using paging, meaning it is
divided into fixed units of memory referred to as pages. Pages can have different
sizes, depending on the underlying system architecture and operating system. The
process of page table management can be intricate and complex.
For more information on how paging is handled in Linux, see the kernel.org
documentation; Microsoft likewise documents the Windows paging process.
Segmentation
Memory segmentation within the primary memory of a system is a complicated
process that references specific bits within a memory unit.
Each segment within system memory gets its own address, in an effort to improve
optimization and memory allocation. Segment registers are the primary mechanism
by which modern systems handle memory segmentation.
Swapping
Swapping is the process by which additional memory is claimed by an operating system
from a storage device. The operating system defines an area of storage to be used as
"swap space": storage space where the memory contents of processes are stored as
physical and virtual memory space is exhausted, released and reclaimed. Using swap
space on traditional storage is a sub-optimal way of expanding available memory, as it
incurs the overhead of transferring data to and from physical RAM, and traditional
storage devices run at slower interface speeds than RAM.
Swapping, however, is now being revisited as a way to expand memory with the
emergence of faster PCIe SSDs, which offer an interface connection speed of up to 16
Gb/s; in contrast, a SATA-connected SSD has a maximum connection speed of 6.0
Gb/s.
Why do We Need Memory Management?
Memory management is an essential element of all modern computing systems. With
the continued use of virtualization and the need to optimize resource utilization,
memory is constantly being allocated, removed, segmented, used and re-used. With
memory management techniques, memory management errors that can lead to
system and application instability and failures can be mitigated.
Advantages
• Maximizes the availability of memory to programs
• Enables re-use and reclamation of memory that is not actively in use
• Can help to extend available physical memory with swapping
Disadvantages
• Can lead to fragmentation of memory resources
• Adds complexity to system operations
• Introduces potential performance latency
Reference https://onlinelibrary.wiley.com/doi/full/10.1002/sec.1471

Read this as we will be working on covert channel controls and attacks during stage 2 and 3
inshAllah within CODS.

Covert channel-internal control protocols: attacks and defense


Abstract
Network covert channels have become a sophisticated means for transferring
hidden information over the network. Covert channel-internal control protocols,
also called micro protocols, have been introduced in the recent years to enhance
capabilities of the network covert channels. Micro protocols are usually placed
within the hidden bits of a covert channel's payload and enable features such as
reliable data transfer, session management, and dynamic routing for network covert
channels. These features provide adaptive and stealthy covert communication
channels. Some of the micro protocol based tools exhibit vulnerabilities and are
susceptible to attacks. In this paper, we demonstrate some possible attacks on
micro protocols, which are capable of breaking the sophisticated covert channel
communication or jeopardizing the identity of peers in such a network. These
attacks are based on the attacker's interaction with the micro protocol. We also
present the defense techniques to safeguard micro protocols against such attacks. By
using these techniques, micro protocol-based tools can become immune to certain
attacks and lead to robust covert communication. We present our results for two
micro protocol-based tools: Ping Tunnel and smart covert channel tool. Copyright ©
2016 John Wiley & Sons, Ltd.
1 Introduction
Security is an important concern in the network communication. There are several
vulnerabilities present in the design of networks and network protocols. Illegitimate
users take advantage of such vulnerabilities and sometimes use the network for
unlawful purposes. Network covert channels are hidden and potentially
policy-breaking communication channels originally introduced as unforeseen
communication channels by Lampson 1. These channels are created by methods of
information hiding – a discipline known to be used for censorship circumvention,
malware communication, and exchange of secret information by terrorists and spies,
just to mention a few scenarios 2-4. The purpose of using network covert channels is
to transfer information over the network while ensuring that the transfer is not
recognized by a third party. The detection of a network covert channel is generally
made difficult by using unobtrusive characteristics of the communication medium to
signal hidden information.
These channels can take advantage of various environments for hiding the data, for
example, hiding it in the Open Systems Interconnection (OSI) network model,
Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite, and LAN
environment. As described in 5, network covert channels can be classified into two
different types:
Network covert storage channels embed hidden data into bits of the transferred
network packets. Reserved bits or unused bits of the protocol are mainly used to
transmit information. For example, DSCP bits in IP protocol.
Network covert timing channels signal hidden information through the timing
information of network packets. Normally, the timing difference between network
packets is used to deliver a covert message.
Covert channel-internal control protocols, also called micro protocols, were
introduced to integrate various capabilities into covert channels such as reliable data
transfer and dynamic overlay routing. The fundamental designing process for micro
protocols is performed in the same way as for other communication protocols.
However, a micro protocol header is placed within the hidden data transferred by a
covert channel and thus size-constrained 6. In other words, a micro protocol header
contains only a few bits, and these bits are embedded in a stealthy manner into the
underlying network protocol. Therefore, micro protocol-specific protocol engineering
methods enhance the existing protocol engineering techniques in the design process.
Micro protocols have several uses, good as well as bad 7. For instance, the micro
protocols could help journalists to transfer illicit information over the network, as
they give them the ability of hiding transferred data in an improved manner (e.g., by
splitting the payload over multiple simultaneous channels). At the same time, they
also provide reliable data transfer over all the channels used in covert
communication. They can additionally support dynamic overlay routing in order to
allow journalists to bypass critical censorship infrastructure on a global routing path.
In this way, journalists can gain the ability to express their opinion freely and do not
have to undergo legal restrictions.
On the other hand, the micro protocols can be exploited by illegitimate network
users who can use the network covert channels as stealthy botnet command and
control channels. If a botnet developer implements a network covert channel with a
micro protocol, such a channel could signal commands and configuration messages
between the bots and the bot master, ensuring stealthiness of the botnet's
communication.
Considering this dual nature of micro protocols, we present possible attacks and
defense techniques for micro protocol-based communications. The attacks are based
on the idea that an attacker is able to manipulate the micro protocol and influence its
communication channel. That is, he interacts with the micro protocol to counter its
communication (e.g., implementing a man-in-the-middle attack). The possibility of
covert channels and micro protocols cannot be completely eliminated, although it is
possible to significantly reduce their presence by careful design and analysis. Certain
attacks could be performed on micro protocols by sending fake protocol complying
commands or fake acknowledgments. Defense techniques proposed in this paper are
a set of improvement ideas which would make the micro protocol structures more
secure.
Overall, this paper presents a comprehensive study of possible attacks and defense
techniques for micro protocol-based tools. Our specific contributions are:
Analysis of existing micro protocol designs.
Discussion and demonstration of possible attacks on micro protocol-based tools.
Design of defense techniques to safeguard micro protocols against such attacks.
Evaluation of attacks and defense techniques on the covert channel network.
Anticipation of further developments which could lead to better micro protocol
designs.
The remainder of this paper is structured as follows. Section 2 puts our work in the
context of the existing research and is followed by Section 3 which provides an
insight into micro protocols and attack classification. Section 4 describes the attacks
on micro protocols and demonstrates these attacks on a peer-to-peer micro protocol
and a dynamic overlay routing micro protocol. Section 5 presents the improved micro
protocol designs in the form of defense techniques. The impact of attacks and
defense is discussed in Section 6. Anticipated improvements for micro protocol
designs are presented in Section 7. The conclusions and future work are given in
Section 8.
2 Related Work
Wendzel and Keller published a survey on micro protocols in 7. The authors cover the
general functionality spectrum of micro protocols, capabilities of existing micro
protocols, and protocol engineering means to optimize the micro protocol design.
Moreover, the survey mentions two aspects which are significant for our work,
namely, the drawbacks in the design of existing micro protocols and the future
research to be conducted in this area.
The important drawback mentioned in 7 is that micro protocols accept out of band
data, that is, non covert input that looks like covert input and is thus interpreted. The
survey also highlights a clear lack of countermeasures for micro protocols and
therefore the authors suggest developing them in future work.
Few implementations of micro protocols, such as Ping Tunnel 8, uses an indicator
called magic number to indicate the presence of covert data. The packet is only
considered to be part of a covert communication if the magic number equals a given
value.
Mazurczyk and Kotulski presented the only known micro protocol that comprises
support to evaluate the authenticity of packets 9. All other known micro protocols do
not encompass such features, and they might treat non-covert input as covert input.
Similar work can be found in the area of command and control (C&C) protocols for
botnets. Malware analysts aim at inserting fake commands into the C&C traffic of
botnets and reverse engineering of these protocols. We expect that some
C&C-related research can be applied to the world of network covert channels and
vice versa. A detailed discussion of the differences between C&C protocols and micro
protocols is available in 7. While C&C protocols might transfer encrypted traffic, the
challenge for developing C&C countermeasures differs from that of micro protocol
countermeasures as the major goal of a C&C protocol is to keep the content of a
transmission secret, while a micro protocol's goal is to keep the
communication itself secret. Hence, when blindly applying a countermeasure to such
a hidden micro protocol-based communication, it may not always be clear whether
the countermeasure is even applied to a covert channel or not.
There is some preliminary research conducted in the area of improving micro
protocol structures so that they become less prone to vulnerabilities leading to
attacks. In 10, the authors introduced an approach for the engineering of micro
protocols. They defined a terminology for designing micro protocols in which the
utilized protocol is called the underlying protocol; the area in which the covert data
are embedded within the underlying protocol is called the cover protocol. The micro
protocol with its payload is placed in the cover protocol. Their six-step engineering
approach provides standard conforming, and minimal attention raising design of
micro protocols, leading to improved stealthiness.
Another concept is status updates, which serve to minimize the traffic generated to
transfer the micro protocols. To achieve this, the header elements of micro protocols
are designed in a dynamic way so that they must only be transferred if they are
updated (e.g., if a destination address field must be set to a new destination
address). Status updates were introduced in a micro protocol-based tool named
smart covert channel tool (SCCT) 11. It is an optimized control protocol that allows
the configuration of dynamic routing in covert channel overlay networks. It is
implemented to achieve high stealthiness for the propagation of routing information.
Different routing messages like requesting peer tables, sending peer tables,
transferring topology graphs, and performing routing updates are implemented in
this tool. The current implementation of this tool is based on User Datagram
Protocol, but it is designed to be integrated into several network protocols, which can
be utilized by software architecture of SCCT. We design one of our defense
techniques for this tool, making the tool more secure.
3 Micro protocols and attack models
A micro protocol (MP) is a communication protocol whose header is placed within
the hidden data transferred by a covert channel. Because of fewer possibilities for
placing hidden data in a network covert channel, very limited space is available for
the micro protocols. For this reason, a micro protocol structure should contain a few
bits to fit itself in the limited space provided by the hidden data.
According to 7, the existing micro protocols are not optimized for a covert channel
environment as their protocol headers consume more bits than required, and their
behavior is not adjusted to the cover protocol. Because of this, micro protocols
sometimes fail to place themselves optimally within the covert data. More anomalies
are likely to occur because of the manipulation of bits, leading to easier detection of
the covert channel. Another type of anomaly could be caused if the micro protocol
header is not placed within the hidden data in an ideal way, that is, functionality of
the underlying protocol could fail and raise attention of the users in that case 6. One
example for such a case could be uncommon bit/flag combinations in network
packets.
3.1 Attack classification
The attacks on micro protocols can be classified in two ways based on (i) their
characteristics and (ii) attacker's knowledge about the micro protocol, as shown in
Figure 1.
Figure 1: Classification of attacks.
The characteristics-based classification is analogous to the covert channel
countermeasure classification by Zander et al. 2; it categorizes the general behavior
of attacks and is listed in the succeeding discussion:
preventing the covert communication containing micro protocols;
limiting the functionality of the micro protocol; or
observing the hidden communication, including its attributes.
Based on the attacker's knowledge about the micro protocol, attacks can be classified
into four types (as shown in Table 1 and Figure 1).
Table 1. Attacker's knowledge about micro protocols
Type of Knowledge         Case I   Case II   Case III   Case IV
Presence of MP            ✓        ✓         ✗          ✗
Syntax/Semantics of MP    ✓        ✗         ✓          ✗
MP, micro protocol.
In Case I, the attacker is fully aware of both where the micro protocol is placed and
how its header structure is designed. The header structure is available for analysis,
and the attacker could propose attacks by identifying the weaknesses in its design.
In Case II, the attacker is only aware of the presence of a micro protocol-based
communication but possesses no knowledge about its header structure. In certain
scenarios, because of (unlikely) presence of insider's information, it is possible to
manipulate such covert communication by transferring random data as noise to a
covert channel without knowing its effect on the communication channel. It is also
possible to monitor the data thoroughly on such a covert channel and decide where
exactly the noise could be inserted. In Case III, the attacker is not aware of the
presence of micro protocols but could predict the header structure design in the
event of its presence. In such a case, the attacker would be able to perform a blind
attack by sending falsifying commands to a system potentially involved in the covert
communication (e.g., fake commands to terminate the covert communication). In the
last case (Case IV), the attacker is unaware about both presence and structure of the
micro protocol. Techniques like traffic normalization, which applies rules such as
clearing unused header bits or setting header fields to default values 12, are an
efficient approach for breaking such covert channels.
3.2 Levels of attack
Depending on a given micro protocol scenario, multiple characteristics-based attacks
can be applied. In this work, we consider Case I (as shown in Table 1) in which we
know where a micro protocol is placed and how its header structure is designed. We
define attacks by analyzing structures and functioning of two micro protocol-based
tool implementations: Ping Tunnel 8 and SCCT 11. The analysis is performed to find
vulnerabilities and weaknesses in these micro protocol-based designs. The attacks
are further classified as active and passive. Active attacks influence the covert
channel network or its peers in a way that affects the covert communication. These
methods are responsible for breaking the channel or limiting the functionality of the
channel. On the other hand, passive attacks observe the behavior of the covert
channel network or its users to derive the content of the transferred covert data, the
throughput, or the participants involved in the hidden communication. This is
possible by sniffing the data exchanged over the covert channel and determining the
information patterns from the sniffed data. An attacker can observe the hidden
communication with the help of these methods and conclude the purpose of a
particular communication. Therefore, based on this observation, a suitable decision
could be made about the covert channel (e.g., block the covert communication or
allow it to continue). This further leads an attacker to obtain more information about
the network and the kind of information being exchanged by the peers.
In a scenario when an attack is performed on the legitimate covert channel where
the attacker's goal is to expose useful information, the defense techniques must be
used to safeguard against these attacks. We propose some improvement methods to
enhance the features and capabilities of the micro protocols. These improved designs
are based on the functionalities of micro protocol-based tools and aim at increasing
micro protocol's immunity to certain attacks.
3.3 Capabilities of an attacker
We consider that an attacker is capable of interacting with the micro protocol. Based
on this, we consider the following capabilities that an attacker can have:
Send packets (normal, fake, or modified) to other peers in the network, possibly
triggering abnormal behavior.
Gain access to the network, log traffic, or modify topology of the network.
Launch DoS attacks by disabling connections between the peers after passive analysis
of the network.
However, the classification of attacks mentioned in this section is not necessarily a
comprehensive enumeration of all possible scenarios. Instead, we define a general
context of the possible ways in which an attacker can attack a micro protocol-based
communication. We made our assumptions as general as possible and included most
of the vulnerabilities present in the micro protocol structure. Based on this
hypothesis, we demonstrate attacks in the next section.
4 Potential attacks on micro protocols
As mentioned earlier, we consider two micro protocol-based tool implementations to
design the attacks: Ping Tunnel and SCCT. Ping Tunnel 8 is designed for peer-to-peer
communications under TCP-restricted environments. SCCT 11 is capable of building a
hidden overlay network and provides dynamic overlay routing based on optimized
link-state routing (OLSR). Both micro protocols were selected because of their
dissimilar functionalities. We analyze the structure and behavior of both the micro
protocols and categorize attacks for them.
4.1 Peer-to-peer communications micro protocol
Ping Tunnel 8 is based on a micro protocol, which is used to tunnel TCP connections
to a remote host using Internet Control Message Protocol (ICMP) echo request and
reply packets in a reliable manner. The set up of Ping Tunnel is based on a client,
a proxy, and a destination host. Ping Tunnel can be used when a client desires to send
TCP packets to a destination (e.g, a browser connects to a web server) but a firewall
blocks TCP connections. In this case, the client sends ICMP packets containing TCP
data to the proxy, which resides outside the client's network. The proxy establishes a
normal TCP connection with the specified destination and sends the response TCP
payload back to the client hidden in the ICMP packet.
The client performs all its communication with the proxy using ICMP echo request
packets, whereas the proxy uses ICMP echo reply packets for the same path. A magic
number is used to differentiate Ping Tunnel's packets from normal ping packets. The
packet format used to exchange messages between a client and a proxy is shown in
Figure 2.
Figure 2: Ping Tunnel's micro protocol data unit as described in 8.
4.2 Dynamic overlay routing micro protocol
Smart covert channel tool is a status update-based micro protocol implementation
that establishes multi-hop covert channel routes. A status update is a small chunk of
data, transmitted through a covert channel specifying a change in at least one
setting, for example, changing the destination address in the covert channel 11. The
micro protocol's dynamic routing algorithm is based on optimized link-state routing
and was extended for providing different optimization means to maximize covertness
(quality of covertness) and connection quality of a covert channel routing path. It also
facilitates routers and proxies by combining agents (active participants in the routing
process) and drones (optional hops that solely forward data as a proxy) in the overlay
network. SCCT uses User Datagram Protocol-based tunneling, but in general, the tool
is capable of utilizing arbitrary underlying protocols and hiding techniques. The tool
maintains a topology table and a peer table at every peer in the overlay network and
also synchronizes them among all the peers present in the network.
4.3 Attacks
We categorize the following attacks after analyzing the micro protocol header and the
functionality of Ping Tunnel and SCCT:
4.3.1 Sniffing
To sniff ICMP packets sent by Ping Tunnel, an attacker could filter these packets from
all the received ICMP echo requests and replies using a sniffing tool, for
example, tcpdump 13 (in our case). Ping Tunnel uses a special four byte magic
number to indicate the presence of Ping Tunnel's ICMP echo requests and replies.
Hence, an attacker could compare the value of the first four bytes of the received
ICMP echo payload with the known value of the magic number and records
corresponding packets in the log files. An attacker could perform the following
activities with the sniffed Ping Tunnel data:
IP tapping: An attacker could retrieve a list of IP addresses involved in a hidden
network communication using the IP field of Ping Tunnel's packet. This helps in
obtaining information about the involved parties in the covert communication.
Traffic analysis: An attacker could intercept and examine the data exchanged during
the hidden communication. Necessary actions for the parties involved in the covert
communication (whether to block or allow the involved parties on the network)
would be based on the results of the traffic analysis.
To sniff the packets sent by SCCT, a passive observer acting as an agent could be
introduced into the overlay network. The agent could then sniff the information of
peers (IP address, id, etc.) involved in the overlay network and could also retrieve the
overlay topology by sniffing routing updates. It is also possible to maintain log files of
sniffed information with the timestamps. This information is useful to predict the rate
at which the number of involved participants change (increase or decrease) over time
in the covert network. In our implementation, the agent is able to log all the
information related to the peers and the network topology in a .dot file. This allows
the generation of network graphs using Graphviz 14.
4.3.2 Man-in-the-middle attack
To perform a man-in-the-middle attack on Ping Tunnel, an attacker could act as a
man-in-the-middle by establishing independent connections with the client and the
proxy while relaying messages between them. Doing this would make both the
systems believe that they are directly connected to each other. In reality, the entire
conversation would be controlled by an attacker. All the messages exchanged
between the client and the proxy would be intercepted, resulting in breaking the
communication channel. This could be performed as follows:
Case I: Sending fake replies to the client's ICMP echo requests. This would lead to the
interpretation of replies at the client side, while it is up to the attacker's decision to
forward these messages to the destination, or not.
Case II: Changing the magic number by replacing it with the random data. This would
prevent Ping Tunnel from recognizing the packets belonging to the covert
communication channel.
Case III: Changing the payload with the random data. This would render the hidden
information in the payload useless.
The communication flow during the aforementioned cases is shown in Figure 3. The
mechanism is implemented in two steps: In the first step, a packet
redirector transfers all ICMP packets from the client to the attacker.
The IPTables 15 tool used with the PREROUTING/POSTROUTING chain (depending on
the desired action) of the mangle table could be used to perform this job. In the next
step, the attacker accepts all the incoming packets forwarded by the packet
redirector and acts as content modifier to accomplish one of the three cases. The
modifications could be carried out using NFQueue 16 to check the presence of Ping
Tunnel's magic number in the payload of received ICMP packets and performs the
necessary modifications on the payload. This approach is shown in Figure 4.
Figure 3: Scenarios for attacking Ping Tunnel.
Figure 4: The mechanism applied by the attacker.
To perform a man-in-the-middle attack in the overlay network formed by SCCT, a
malicious agent could join the network by announcing itself as a new peer to the
network. It could therefore send a routing update message to other peers and act as
a man-in-the-middle peer (cf. Figure 5). By doing so, the malicious agent is actively
able to participate in the network in order to redirect the covert traffic through itself.
Figure 5: A malicious agent in the overlay network.
Traffic redirection is useful as it enables the malicious agent to analyze the content of
the traffic and decide whether to modify, block, or forward (with or without a delay)
the packet. In our implementation, the malicious agent becomes one of the peers in
the overlay network and, as soon as it receives a topology status update, it joins all
other peers in the network. The malicious agent establish connections with all other
peers by propagating the lowest possible delay (metric) on all the overlay network's
edges it forms. When any agent in the overlay network sends a covert message to
any other agent who is at a distance of more than one hop from itself, the chosen
routing path to reach the destination agent would be via the malicious agent. When
the message passes through the malicious agent, it could perform the following
actions on the message:
Case I: Block the messages from reaching the destination.
Case II: Change the data in the payload of the transmitted messages (replace the
payload with random data) and then forward it to the desired destination.
Case III: Redirection of messages for special analysis. The traffic could be routed
over a separate analysis network to carry out further analysis on the content of the
hidden data.
The diagrammatic representation of the three scenarios is shown in Figure 6. Agent A
wants to send a message to agent B via the malicious agent M (i.e., the next hop on
the path). Thus, agent A starts the transfer by sending a status update to configure
the destination address of the following payload to be “B.” In the next message,
agent A sends a status update message to indicate the presence of payload data
(payload follows directly) to the malicious agent M, which it should forward to agent
B. The malicious agent M either performs the forwarding (sniffing as a passive
observer), actively blocks the message from being transmitted to agent B (as shown
in Case I of Figure 6) or modifies the value of the payload data and then forwards it to
agent B (as shown in Case II of Figure 6). If the malicious agent M recognizes
something suspicious in the hidden data, it could also redirect the traffic to a
separate analysis network to perform further investigation (Case III).
Figure 6. Malicious agent M interrupting the overlay communication based on status update messages.
4.3.3 Side-channel attack by timing analysis
This attack could be performed by an attacker on the Ping Tunnel network. The
attacker could analyze the timings of incoming ICMP echo request packets and calculate the
inter-packet time gaps between them using the following formula:
G_i = T_{i+1} - T_i    (1)
where T_i is the incoming time of the i-th ICMP echo request packet and G_i is the
inter-packet time gap between two consecutive ICMP echo request packets. The
normal ICMP echo requests generally show a constant time gap produced by
the ping program. In this way, an attacker is able to determine the difference
between normal pings and pings from Ping Tunnel by comparing the time gaps of
different services like SSH and FTP sent through Ping Tunnel. Each service connected
via Ping Tunnel alters the inter-arrival time of the protocol, making its behavior highly
dependent on the embedded service as well as Ping Tunnel's protocol design.
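A minimal sketch of this timing analysis, assuming only a list of arrival timestamps is available, is shown below; the example values are illustrative.

# Sketch: compute inter-packet gaps G_i = T_{i+1} - T_i from ICMP echo
# request arrival times (in seconds), as in Eq. (1).
def inter_packet_gaps(arrival_times):
    return [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]

# Normal ping traffic shows a nearly constant gap of roughly 1 s, while
# tunneled services shift the average (cf. Section 6.1).
normal = [0.0, 1.0, 2.0, 3.0]
print(inter_packet_gaps(normal))   # [1.0, 1.0, 1.0]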
5 Defense against attacks
In this section, we present improvement schemes that can be applied to the micro
protocol design of Ping Tunnel and SCCT. These improvements make these tools less
prone to the attacks described in Section 4.
5.1 Ping Tunnel
The key feature of Ping Tunnel used by the attacker is the four-byte magic number at
the beginning of the Ping Tunnel data unit. Because the magic number is fixed, it is
easy for an attacker to identify the packets that belong to Ping Tunnel. In order to
hide the presence of a Ping Tunnel data unit, we propose the following scheme to
randomize the magic number. Ping Tunnel supports limited authentication when
setting up a new Ping Tunnel proxy by specifying a secret passphrase, which is used
for challenge-response authentication when creating a new tunnel.
The same passphrase can be used when generating magic numbers for the packets.
The new magic number has the same length as the old one and is computed as
follows:
magic = R ∥ SHA-256(passphrase ∥ R ∥ id_no)[0..2]
where ∥ denotes string concatenation. We use one random byte R, followed by the
first three bytes of the SHA-256 hash of the passphrase concatenated with the
random byte R and the two-byte tunnel id number (id_no). Because the tunnel id is
randomly generated in Ping Tunnel, this provides a total of up to 2^24 possible magic
numbers for a given passphrase.‡ Upon receiving such a packet, the recipient in
possession of the secret passphrase has all the information needed to compute the
correct magic number and can compare it with the received magic number. As all of
the attacks on Ping Tunnel (from Sect. 4.3) rely on a fixed magic number, the
proposed scheme could prevent all such attacks. The only action an attacker can take
is to modify or block all ICMP echo packets.
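The computation described above can be illustrated with a short Python sketch using only the standard library; function names here are our own, not part of Ping Tunnel.

# Sketch of the proposed magic-number randomization:
#   magic = R || SHA-256(passphrase || R || id_no)[0:3]
import hashlib
import os

def make_magic(passphrase, id_no):
    r = os.urandom(1)                               # one random byte R
    id_bytes = id_no.to_bytes(2, "big")             # two-byte tunnel id
    digest = hashlib.sha256(passphrase + r + id_bytes).digest()
    return r + digest[:3]                           # 4-byte magic number

def verify_magic(magic, passphrase, id_no):
    r = magic[:1]
    digest = hashlib.sha256(passphrase + r + id_no.to_bytes(2, "big")).digest()
    return magic[1:] == digest[:3]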
5.2 Smart covert channel tool
As presented in Section 4.3, SCCT is vulnerable to attacks like sniffing and
man-in-the-middle. In this section, we present the improvement ideas for SCCT by
introducing an enhanced version of this tool. A sniffing attack enables the attacker to
monitor the network passively and gather information about the network topology as
well as about the joining and the leaving agents. In case of SCCT, the topology table
contains information about the whole network topology and of all the involved
agents. The topology table and the peer table of SCCT's overlay network, shown in
Figure 7, are presented in Table 2 and 3. The sniffing agent relies on these two

83
tables for deriving the topology of the overlay network and the identity of the peers.
Table 2. A sample topology table of the old smart covert channel tool
Edges between the agents    Metrics
A ↔ B                       10
A ↔ C                       7
B ↔ D                       5
C ↔ D                       12
Table 3. A sample peer table of the old smart covert channel tool
Agent    IP Address     CCMask
A        xx.xx.xx.AA    0x406
B        xx.xx.xx.BB    0x406
C        xx.xx.xx.CC    0x406
D        xx.xx.xx.DD    0x406
Figure 7. An example of a smart covert channel tool network with four agents.
In order to prevent the sniffing agent from performing such an attack, the enhanced
version of SCCT introduces the concept of known agents. A specific agent is said to
be known if the source agent knows the IP address of the destination agent. The IP
address is only required when connecting any agent for the first time. The topology
table of the enhanced SCCT initially remains empty, and the peer table only contains the
agent's own local IP address. These two tables will be updated
whenever any agent is connected or disconnected. The peer table is never
synchronized across the network; instead, each peer keeps its own list of known
agents.
The enhanced version of SCCT keeps the topology table internally the same way as
the original version; however, there is no way for an agent to reveal this information
through the tool. The agent can only view the part of the topology table that involves
the known agents and the directly connected agents. If the destination agent is
reachable only via some known or unknown intermediate agents, the information
about the intermediate agents is not available to the source agent. Agents will not
have any idea about the topology of other agents in the overlay network. To reach an
agent in the network, the concept of path calculation is used in the same manner as
in the original version of SCCT 11, so the path is internally calculated between the
agents whenever the network topology changes. This hides the network topology from
the agents so that sniffing will no longer be possible.
We present the working of the enhanced SCCT with the help of an example. Suppose
there is an overlay network as in Figure 7 and agent D knows agent C. If agent D
wants to connect to agent A, agent A is reachable; however, there is no information
about the intermediate agents or the metrics. Tables 4 and 5 show how the topology
table and the peer table look from the perspective of agent D.
Table 4. Topology table of enhanced smart covert channel tool from the perspective of agent D
Edges between the agents    Metrics
D ↔ ⋯ ↔ A                   Connected, but no information on intermediate agents
D ↔ C                       12
Table 5. Peer table of enhanced smart covert channel tool from the perspective of agent D
Agent    IP Address     CCMask
A        xx.xx.xx.AA    0x406
C        xx.xx.xx.CC    0x406
D        xx.xx.xx.DD    0x406
During a man-in-the-middle attack, the malicious agent claims to be directly
connected to every agent in the overlay network and influences the topology table of
other agents by introducing the lowest metrics. Consequently, the traffic from all the
neighboring agents passes through the malicious agent, giving it the capability of
controlling or manipulating the network traffic. The enhanced SCCT prevents this
attack in two ways: it imposes a minimum metric allowed on a link, and it only allows
connections to known agents. This means the attacker can only connect to the known
agents and also cannot set the connection metric below the minimum, so the
messages will not necessarily be routed through the malicious agent. Moreover, any
modifications by an agent to the metrics of the routes not involving that agent are
not propagated or accepted by other agents. The modified version of the topology
table will be local to the malicious agent, and it will only affect its own routing
decisions. This prevents the attacker from forcing the routing through itself by
making the other connections too expensive.
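The two checks described above can be summarized in the following illustrative sketch; the function and the MIN_METRIC value are our own assumptions, not code from the enhanced tool.

# Illustrative sketch of validating an incoming routing update in the
# enhanced SCCT: minimum allowed metric plus the "known agents" rule.
MIN_METRIC = 5                       # assumed policy value

def accept_routing_update(sender, edge, metric, known_agents):
    """Return True if an update from `sender` for `edge` may be applied."""
    a, b = edge
    if sender not in known_agents:
        return False                 # only known agents may update our tables
    if sender not in (a, b):
        return False                 # updates for edges not involving the
                                     # sender are not propagated or accepted
    if metric < MIN_METRIC:
        return False                 # reject artificially low metrics
    return True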
6 Results and Discussion
We discuss the impact of attacks for both micro protocols separately. Providing a
general view, Figures 8 and 9 describe the influence of each attack using a value
between low and high. Such an influence evaluation is necessary because the
application of attacks depends on the use case. In some cases, a covert channel
overlay network can only be observed and modified while keeping a low profile,
while in others, destruction of the overlay network is desired and the recognition of
an attack by the covert channel users may be acceptable. Therefore, we combine the
following aspects in the influence value for an attack:
the passive influence on the covert communication, that is, determining the
involvement of a host in the covert communication or observing the transferred
hidden content;
the limiting influence, that is, performance decrease or functionality decrease; and
the active influence on the exchanged data, that is, modifying the content of the
covert communication or preventing the covert communication from occurring at all.
Figure 8. Impact of active and passive attacks on Ping Tunnel.
Figure 9. Impact of active and passive attacks on smart covert channel tool.
We also discuss the influence of defense techniques for the two micro
protocol-based tools.
6.1 Impact of attacks on Ping Tunnel
It is possible to derive information patterns from the sniffed data depending on the
different types of transactions performed by Ping Tunnel users. By analyzing the log
files obtained by sniffing, it is clear how the information is exchanged through the
covert channel. The list of IP addresses involved in the covert communication can be
filtered from the sniffed data. This information is useful as it helps obtain details
about the peers involved in the covert communication.
The applied man-in-the-middle attack prevents the covert communication as long as
the packets are not forwarded, or packets are forwarded with replaced payloads or
magic numbers. Moreover, by introducing delays into the covert communication, the
man-in-the-middle attack could decrease the performance of the connection. In all of
the mentioned cases, it is likely that covert channel users become aware of the
attack.
By performing the timing analysis of Ping Tunnel's ICMP packets for SSH through Ping
Tunnel and FTP through Ping Tunnel, we compared their inter-packet time gaps with
normal ping packets' inter-packet time gaps. The inter-packet time gaps of ICMP echo
request packets for different services are shown in Figure 10. The graph shows that
the average inter-packet time gap calculated from 10 consecutive packets of normal
ICMP echo requests is approximately 1 s, but in the case of SSH through Ping Tunnel,
this average time is reduced to 0.35 s, and in the case of FTP through Ping Tunnel, it
increases to 1.5 s.
Figure 10. Incoming time of ICMP echo request packets for different services through Ping Tunnel.
Although the introduction of a micro protocol influences the behavior of a covert
communication, it is likely that the detection using known approaches, such as
support vector machines 17, is feasible for SSH, FTP, and additional services.
The influence of the proposed attacks on Ping Tunnel is shown in Figure 8. The
active attacks are considered more capable of influencing the covert channel. The
influence of active attacks on the covert channel ranges between above average and
high because their objective is to break the goals of a covert channel. However,
passive attacks must be considered powerful as well, because one of the goals of
applying covert channels is to keep the data transfer secret. While a timing analysis
may only lead to detection of a host's involvement in the covert communication,
sniffing allows determining the hidden content of messages.
Different scenarios for the application of active attacks are possible. When there is no
interactive connection but, for instance, an automatic covert logging connection, a
man-in-the-middle attack that sends fake replies to the original sender but does not
forward packets to the desired destination, allows sniffing secret information on a
constant basis. However, after a fake reply is sent, the attack may still provide an
option to forward packets a posteriori. The latter option is not provided if the
attacker simply changes the magic number or the ICMP echo payload as the whole
communication with the destination will break afterwards.
6.2 Impact of attacks on smart covert channel tool
The influence of attacks on the SCCT overlay network is presented in Figure 9. By
implementing the redirecting attack on SCCT, it is possible to sniff and alter the
routing updates, resulting in a successful integration of a man-in-the-middle agent.
The sniffed traffic generated by this agent is helpful in generating the topology graphs
of the overlay network. A drawback observed in this technique is the inability to
introduce a fake route when the distance of a routing path is only one hop. Two
agents can still exchange information between each other without passing through
the malicious agent as long as they are direct neighbors. The influence of the routing
redirect is considered average as this attack primarily lays the foundation for other
active and passive attacks, that is, sniffing, preventing the communication, and
replacing the payload.
If the malicious agent replaces the payload with random data, the communication
between agents is still provided, but the content of their communication is rendered
useless. Such an attack is likely to get detected but could – when applied in a
temporary and highly targeted manner – prevent a particular communication if highly
sensitive information must be protected from being transferred (e.g., exfiltrated from
an organizational network). The influence of blocking all the data transfers between
agents is similar to replacing the content, if applied in a temporary and highly
targeted manner as well.
By passive observation, future predictions could be made about the development of
the dynamic routing patterns and the topology of the overlay network (decreasing or
increasing in size; involvement of new participants), and the exchanged messages
could be observed, giving this attack an above-average influence.
6.3 Impact of defense techniques
By randomizing the magic number of Ping Tunnel, the attacker has no way of
distinguishing the Ping Tunnel ICMP packets from other ICMP packets using the magic
number; thus, it is not possible to create simple rules to detect the presence of Ping
Tunnel. This leaves modifying or blocking all the ICMP echo packets as the only blind
option for the attacker. The non-blind option of detecting Ping Tunnel via its
generated inter-arrival times is still feasible. However, countermeasures for this could
also be implemented as discussed in Section 7.
With the introduction of the enhanced SCCT, the tool is immune to the shown overlay
sniffing attacks as well as to the presented man-in-the-middle attack. An agent will not
be able to monitor the whole overlay network unless it is aware of the IP
addresses of all the agents present in the overlay network. This reduces the
probability of conducting a sniffing attack to a great extent. Even if a sniffing agent
joins the overlay network, it will only have the information of its known agents.
In a nutshell, enhanced SCCT will be immune to the following attacks:
Sniffing network packets (promiscuous mode): SCCT sends two types of packets in the
overlay network: routing update packets and message packets. Passive agents
cannot use routing update packets to get information about the network topology
because routing decisions are now made locally, and these packets do not contain
clear information about these decisions. By inspecting the message packets, passive
agents can only retrieve the information about the source and the content of the
message. The information about the destination agent will no longer be retrievable
because a peer id is used to denote the destination agent in the message packet, and
the agents are only allowed to access information about the peer ids of
their known agents.
Manipulating topology and routing update information on agents: This attack is no
longer possible as this information is now inaccessible, except for the known agents.
The enhanced version of the tool is also immune to any kind of propagation of
modified routing table information that can be exploited by malicious agents in terms
of a man-in-the-middle attack.
7 Anticipated improvements for micro protocol design
As with the development of C&C protocols for botnets, we can expect that micro
protocol development will become increasingly sophisticated in the coming years.
Means to decrease the size of micro protocol headers to cause fewer
anomalies 12 and to ensure a low-attention-raising header design and
embedding 11 have already been published. Also, a micro protocol header could be
encrypted or processed by a receiver only if authenticated. So far, authentication is
only implemented by Mazurczyk and Kotulski 10, making it challenging for a third
party to insert fake messages.
While 10 proposes a dynamic micro protocol header structure, their approach could
be extended to scramble protocol header structures using a cryptographic key shared
between peers, which would increase the difficulty of analyzing the micro protocol
and of introducing fake messages.
It is known that covert channel tools can apply different hiding techniques for each
utilized connection and can split their payload over multiple communication
protocols and OSI layers at the same time 6, 18. Covert payload can even be nested in
several internal layers 18. In other words, micro protocols embedded in covert
channel payload can have their header bits fragmented over different areas in the
network protocol headers and layers 6, and it is likely that more sophisticated micro
protocol designs will take advantage of this option.
Furthermore, a higher stealthiness of a network covert channel itself increases the
stealthiness of all the hidden content, including the micro protocol. Many techniques
are known to improve covertness, but most of the existing micro protocols do not
take advantage of these improvement techniques. For instance, considering Ping
Tunnel's detectability based on inter-packet time gaps, it is imaginable that a micro
protocol introduces artificial delays to adjust its traffic to the inter-arrival times of
normal ping traffic.
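A minimal sketch of such a pacing countermeasure is given below; the target gap and jitter values are assumptions, and send_packet stands for whatever transmission routine a tool would use.

# Sketch: pad the send interval so covert packets mimic the roughly
# 1 s inter-arrival time of normal ping traffic.
import random
import time

def paced_send(send_packet, packets, target_gap=1.0, jitter=0.05):
    last = None
    for pkt in packets:
        if last is not None:
            wait = target_gap + random.uniform(-jitter, jitter) - (time.time() - last)
            if wait > 0:
                time.sleep(wait)
        send_packet(pkt)
        last = time.time()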
These examples for micro protocol improvements underpin the advantages of a
dynamic header structure and sophisticated embedding and transfer techniques. For
this reason, additional research is necessary to enable attacks against these anticipated
features.
8 Conclusion and Future work
Micro protocols increase the stealthiness and the capabilities of network covert
channels. In this paper, we proposed novel approaches for attacking micro
protocol-based tools. We presented a categorization of attacks based on the
knowledge of the micro protocol's presence and structure, as well as on the type of
the attack (active or passive). Additionally, we described three types of attacks based
on the characteristics of micro protocols: detection, limitation, and elimination. Our
implementation of attacks for two micro protocol-based tools, Ping Tunnel and SCCT,
confirms the feasibility of countering a micro protocol-based communication.
The categorization of these attacks also supports decision-making when countering
other tools based on micro protocols. Although the proposed attacks are applied to the
fully-aware case (where the presence and the structure of the micro protocol header
are known), they could also be applied to the other cases (e.g., where the structure
of the micro protocol is predicted but the presence is not detected). This would lead to
the design of additional attacks. Our results show that our techniques are able to
counter micro protocol-based communication in an effective manner without
requiring much effort to implement them.
In some scenarios, the attacks are implemented for offensive purposes, while micro
protocols are used for legitimate purposes. To safeguard such micro protocols against
attacks, we presented some defense techniques in terms of improvement ideas so
that micro protocols can have a better and more robust structure. The improved
micro protocol designs of Ping Tunnel and SCCT proved to be immune to the
presented sniffing and man-in-the-middle attacks. The presented improvement
methods are generic and can, for this reason, be applied to other micro protocols in
the future.
On one hand, the future work leads towards designing attacks for “blind” situations,
that is, the situations in which the presence or the structure of a micro protocol are
unknown. Such attacks will provide us with a capability of countering micro protocols
at any level, regardless of their structure. On the other hand, the analysis of
additional micro protocols is important to improve their design for more adaptive
and stealthier communication channels.

We have covered all the controls in detail during our Domain 3 review.

Cryptography techniques can be implemented to protect information by
transforming the data through encryption schemes and methods. Typically, they can
be used to protect the confidentiality and integrity of information. Cryptography
can also be used to address authenticity of communications and nonrepudiation.
Cryptography today can be used in many architectures and to protect information
while in motion (transit) or at rest. Encryption algorithms can be used to encrypt
specific information located anywhere in the architecture.
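As a simple illustration of protecting data at rest, the sketch below assumes the third-party Python cryptography package; Fernet provides authenticated symmetric encryption, covering both confidentiality and integrity of the stored record.

# Minimal sketch of encrypting and decrypting a small piece of data at rest.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this key in a secure location
f = Fernet(key)

token = f.encrypt(b"member record: sensitive data")   # ciphertext plus MAC
plaintext = f.decrypt(token)                          # raises if tampered with
assert plaintext == b"member record: sensitive data"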

Operating systems and applications can use passwords as a convenient mechanism
to provide authentication services. Typically, operating systems use passwords to
authenticate the user and establish access controls for resources including the
system, files, or applications. Password protections offered by the operating system
include controls on how the password is selected, how complex the password must
be, password time limits (aging), and minimum password lengths. Password files
stored within a computer system must be secured by the protection mechanisms of
the operating system so that no one, including system administrators, has
access to passwords belonging to entities of the system. Because password files are
prone to unauthorized access, the most common solution is to store passwords using
one-way functions (hashing) rather than in plaintext. Hashing passwords ensures that
no one has direct access to the actual passwords.
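A minimal sketch of this idea using only the Python standard library is shown below (PBKDF2-HMAC-SHA256 with a per-user salt); the iteration count is an assumed example value.

# Sketch of one-way password protection with a per-user salt.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest              # store both; the password itself is never stored

def verify_password(password, salt, stored):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)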

However, there are attacks against hashed password files, such as what is referred
to as a dictionary attack. There are many other types of password controls that may
be offered by the architecture, such as password masking. Careful implementation
of these password protection measures is needed to ensure protection
commensurate with the value of the architecture.

Reference https://www.helpsystems.com/resources/articles/six-ws-granular-access-control

The Six Ws of Granular Access Control

Security experts are in general agreement that passwords will simply no longer
suffice when it comes to system security. As the numerous breaches within the past
years have shown, it is too easy to crack passwords and gain access to all the data
across entire systems. So, what can an organization do to better protect its systems?
This is where granular access controls, a key feature in certain identity and access
management solutions, come in.

What exactly is granular access control? With all the buzzwords floating around
the cybersecurity world, it’s easy to stumble across a term that could use additional
explanation. This article takes a closer look at why granular access control is so
effective -- by placing limitations on who can get into your organization’s
system, where, when and how they can access it, and what they can do with it.

WHO

In its simplest definition, granular access controls define who can have access to each
part of a system, as well as what they can do with that access. However, setting up
permissions for each individual user is impractical and would be incredibly time
consuming to track and maintain. Instead, privileges are granted based on roles
defined in a corporate directory. For example, a database administrator would be
granted permissions for all database servers, whereas a web administrator wouldn't
need access to those particular servers and would therefore not be given permission
to access them.

While these role permissions can be set up manually, this would still be immensely
time consuming for an IT team, and nearly impossible to keep up with. Employees
who leave an organization may still retain permissions for days, even months after
their departure, leaving an organization incredibly vulnerable. Identity and access
solutions turn this crucial security protocol into a doable task. Rules and permissions
can be changed instantaneously, protecting users from making mistakes, and
organizations from leaving doors open to private data.

HOW
Once these roles have been established and assigned, users must also authenticate
their identity before logging in. This can be done in a number of ways – passwords,
tokens, etc. However, as mentioned above, passwords are no longer enough when it
comes to critical data. This is where identity and access management solutions come
in. They can step up the level of authentication needed for roles with administrator
access, adding an additional layer of protection.
How one accesses a system may seem like a simple concept, but when it comes to
access, the details matter. There are different levels of connection one can make to the
server. For example, an admin can securely access the server over ssh, transfer files
with sftp, and escalate privileges with sudo. Some admins may only need to copy files
to or from a server but won’t need access to the server itself. Others may need full
administrator privileges and all the commands that come with it. Granular access
controls assign only the necessary connection capability to each user class.
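A simple sketch of this idea follows; the roles and capability names are illustrative assumptions, not a product feature.

# Sketch of role-based connection capabilities: each role is granted only
# the connection types (ssh, sftp, sudo) it actually needs.
ROLE_CAPABILITIES = {
    "db_admin":  {"ssh", "sudo"},
    "web_admin": {"ssh"},
    "deployer":  {"sftp"},           # can copy files but not log in
}

def is_allowed(role, capability):
    return capability in ROLE_CAPABILITIES.get(role, set())

assert is_allowed("deployer", "sftp")
assert not is_allowed("web_admin", "sudo")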

WHAT
When using granular access controls, it is ideal to practice the principle of least
privilege. That is, unless otherwise specified, a role will be assigned the least amount
of access possible to a system. As a role is more defined, the necessary access
becomes clearer and is assigned accordingly. For instance, a web administrator would
only need access to web servers and a select number of privileged commands.
However, it is not only the access to different parts of the system that is defined –
the level of permissions must also be determined. As stated earlier, database and
web administrators only need access to select servers and commands, while Linux
administrators typically need access to all servers and all privileged commands. Being
this explicit in access and permissions prevents accidental and intentional tampering
that can result in data breaches or loss.

WHERE
While it’s an advantage not only to organizations and employees that work can be
accomplished from anywhere, it also requires extra vigilance. Since people can access
servers everywhere, it no longer looks suspicious when an organization has
numerous IP addresses logging in globally. It also isn’t feasible to require IT teams to
comb through these addresses to try and ensure that logins are only coming in from
locations where employees are located.
Granular access controls can skip these issues altogether simply by limiting the
number of locations from which a server can be accessed. For example, if no
employees are located in Canada, then no Canadian IP addresses can have access to
an organization’s system.
Restrictions must also be placed on the type of access users have when using VPNs
(Virtual Private Networks) outside an organization’s offices. Allowing for major
changes from thousands of miles away leaves systems incredibly vulnerable. The
highest level of access should always be reserved for those logging in directly from
the physical server.

WHEN
A crucial component to configuring granular access controls for maximum security is
timing. Staff in an organization rarely need access to systems or data 24 hours a day.
In fact, someone signing into their account outside of normal business hours could be
considered suspicious. On the other hand, someone logging in during strange hours
may only indicate that they’re located in an office in another country. Granular access
controls are sophisticated enough to establish rules based on not only role, but on
the window of time that a group can be expected to be working. Limiting access to a
set timeframe can prevent an error or threat from remaining undiscovered for hours.
Additionally, granular access controls can provide temporary access for a limited
amount of time. For example, a contract employee could be given credentials that
are set to time out at the end of their contract. Alternately, a sales person traveling
abroad may be given credentials to log in from another country for the length of the
trip.
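The working-hours window and the credential expiry described above could be checked as in the sketch below; the role names, window, and expiry date are assumed example values.

# Sketch of time-based access rules: a working-hours window per role and
# an explicit expiry date for a temporary account.
from datetime import datetime, time

ACCESS_WINDOWS = {"support": (time(8, 0), time(18, 0))}       # assumed policy
ACCOUNT_EXPIRY = {"contractor42": datetime(2019, 6, 30)}      # assumed expiry

def access_allowed(role, user, now=None):
    now = now or datetime.now()
    start, end = ACCESS_WINDOWS.get(role, (time.min, time.max))
    if not (start <= now.time() <= end):
        return False                       # outside the expected work window
    expiry = ACCOUNT_EXPIRY.get(user)
    if expiry and now > expiry:
        return False                       # temporary credentials timed out
    return True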

WHY
As mentioned above, passwords have become insufficient protection from internal
and external breaches. The key to system infiltration comes down to credentials.
Once someone has the necessary credentials to access a system and use privileged
commands, the damage can be catastrophic. The more employees with these high-
level credentials, the higher the risk is that someone could get full access to the
system, even with a simple phishing attack. Even when considering password vaulting
technologies, a single password is still all that prevents someone from getting
through the door. The flexibility of granular access control ensures there are multiple
ways to prevent someone from having complete access.
Risks of malicious insider attacks are also mitigated through granular access control.
Organizations that provide full credentials to every employee make it incredibly
difficult to track what employees are doing, making it possible to provide an insider
with an open window to private data for months without the threat of detection.

Attempting to control privileges like this manually would take numerous additional
employees, who would still be unable to make updates and adjust rules in real time.
Identity and access management solutions, like Powertech Identity & Access
Manager (BoKS), allow IT teams to efficiently protect your organization's data. You
can restrict privileges so that no one employee has full control of your system, yet
still give users the credentials they need to get their work done.

In software development, there are usually various environments; these may
include, for example:

• Development environment
• Quality assurance environment
• Production environment

There may be other environments, but the security issue is to control how each
environment can access the application and the data that the application is
processing and then provide mechanisms to keep them separate. For example,
systems analysts and programmers write, compile, and perform initial testing of the
application’s implementation and functionality in the development environment. As
the application reaches the point of being ready to be put into production, users
and quality assurance people perform functional testing within the quality
assurance environment. To be effective, the quality assurance configuration should
simulate the production environment as closely as
possible. Once the testing has been completed, including the security testing, and
stakeholders have accepted the application, it is moved into the production
environment. What is important is to keep the environments separate and isolated.
Those working in any environment should be restricted to that environment only.
Blended environments combine one or more of these individual environments and
are generally the most difficult to control. As an example, it is generally accepted that
developers
working in development environments should never have access to the production
environment.
Control measures protecting the various environments are many, but they should
include physical isolation of environments, physical or temporal separation of data for
each environment, access control lists, content dependent access controls, role-
based constraints, role definition stability, accountability, and separation of duties.

Reference https://en.wikipedia.org/wiki/Software_forensics

Software forensics is the science of analyzing software source code or binary


code to determine whether intellectual property infringement or theft occurred. It
is the centerpiece of lawsuits, trials, and settlements when companies are in
dispute over issues involving software patents, copyrights, and trade secrets.
Software forensics tools can compare code to determine correlation, a measure
that can be used to guide a software forensics expert.

Past methods of software forensics


Past methods of code comparison included hashing, statistical analysis,
text matching, and tokenization. These methods compared software code and
produced a single measure indicating whether copying had occurred. However,
these measures were often not reliable enough to be admissible in court: the
algorithms could be easily fooled by simple substitutions in the code, and the
methods did not take into account the fact that code could be similar for reasons
other than copying.
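To make the limitation concrete, the sketch below shows the kind of crude token-based comparison described above, using only the Python standard library; it yields a single similarity number and is easily fooled by identifier renaming, which is exactly the weakness noted.

# Naive token-based code comparison (illustration of the older methods).
import re
from difflib import SequenceMatcher

def tokenize(source):
    return re.findall(r"[A-Za-z_]\w*|\S", source)

def similarity(code_a, code_b):
    return SequenceMatcher(None, tokenize(code_a), tokenize(code_b)).ratio()

a = "total = price * qty"
b = "sum_ = cost * qty"
print(round(similarity(a, b), 2))   # one number, with all the limits noted above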

Robert Zeidman
In 2003, Robert Zeidman developed algorithms, which he incorporated in the
CodeSuite tool, for multidimensional software correlation that divides software code
into basic elements and determines which elements are similar, or “correlated.” He
also created a procedure for filtering and interpreting the correlations to eliminate
reasons for correlation that are not due to copying: common algorithms, third-party
code, common identifier names, common author, and automatic code generation.
The combination of these algorithms and procedures resulted in a more reliable and
quantitative analysis than was available previously. Zeidman’s book, "The Software IP
Detective's Handbook," is considered the standard textbook for software
forensics[1][2] and the CodeSuite tools and methodology have been used in many
software IP litigations including ConnectU v. Facebook, Symantec v. IRS Baker &
McKenzie, and SCO Group, Inc. v. International Business Machines Corp.

Copyright infringement
Following the use of software tools to compare code to determine the amount of
correlation, an expert can use an iterative filtering process to determine that the
correlated code is due to third-party code, code generation tools, commonly used
names, common algorithms, common programmers, or copying. If the correlation is
due to copying, and the copier did not have the authority from the rights holder,
then copyright infringement occurred.

Trade secret protection and infringement


Software can contain trade secrets, which provide a competitive advantage to a
business. To determine trade secret theft, the same tools and processes can be used
to detect copyright infringement. If code was copied without authority, and that code
has the characteristics of a trade secret—it is not generally known, the business
keeps it secret, and its secrecy maintains its value to the business—then the copied
code constitutes trade secret theft.

Trade secret theft can also involve the taking of code functionality without literally
copying the code. Comparing code functionality is a very difficult problem that has
yet to be accomplished by any algorithm in reasonable time. For this reason, finding
the theft of code functionality is still mostly a manual process.

Patent infringement
As with trade secret functionality, it is not currently possible to scientifically detect
software patent infringement, as software patents cover general implementation
rather than specific source code. For example, a program that implements a patented
invention can be written in many available programming languages, using different
function names and variable names and performing operations in different
sequences. There are so many combinations of ways to implement inventions in
software that even the most powerful modern computers cannot consider all
combinations of code that might infringe a patent. This work is still left to human
experts using their knowledge and experience, but it is a problem that many in
software forensics are trying to automate by finding an algorithm or a simplifying
process.

Objective facts before subjective evidence


One important rule of any forensic analysis is that the objective facts must be
considered first. Reviewing comments in the code or searching the Internet to find
information about the companies that distribute the code and the programmers who
wrote the code are useful only after the objective facts regarding correlation have
been considered. Once an analysis has been performed using forensic tools and
procedures, analysts can then begin looking at subjective evidence like comments in
the code. If the information in that subjective evidence conflicts with the objective
analysis, analysts need to doubt the subjective evidence. Fake copyright notices,
open source notifications, or programmer names that were added to source code
after copying took place, in order to disguise the copying, are not uncommon in real-
world cases of code theft.

Reference https://www.veracode.com/security/mobile-code-security

Mobile App and Mobile Code Security Risks


There are two main categories of mobile code security risks: (1) malicious
functionality and (2) vulnerabilities.

The category of malicious functionality is a list of unwanted and dangerous mobile


code behaviors that are stealthily placed in a Trojan app that the user is tricked into
installing. Users think they are installing a game or utility and instead get hidden
spyware, phishing UI or unauthorized premium dialing.

Malicious Functionality
• Activity monitoring and data retrieval
• Unauthorized dialing, SMS and payments
• Unauthorized network connectivity (exfiltration or command & control)
• UI impersonation
• System modification (rootkit, APN proxy config)
• Logic or time bomb

The category of mobile security vulnerabilities covers errors in design or implementation


that expose the mobile device data to interception and retrieval by attackers. Mobile
code security vulnerabilities can also expose the mobile device or the cloud
applications used from the device to unauthorized access.

Vulnerabilities
• Sensitive data leakage (inadvertent or side channel)
• Unsafe sensitive data storage
• Unsafe sensitive data transmission
• Hardcoded password/keys
The Mobile Code Security Stack

Increasing smartphone adoption rates coupled with the rapid growth in smartphone
application counts have created a scenario where private and sensitive information is
being pushed to the new device perimeter at an alarming rate. The smartphone
mobile device is quickly becoming ubiquitous. While there is much overlap with
common operating system models, the mobile device code security model has some
distinct points of differentiation.

The mobile code security stack can be broken up into four distinct layers. The lowest
layer of the stack is the infrastructure layer, followed upward by the hardware,
operating system and application layers. These security stack layers each define a
separate section of the security model of a smartphone or mobile device.

Each layer of the mobile code security model is responsible for the security of its
defined components and nothing more. The upper layers of the stack rely on all
lower layers to ensure that their components are appropriately safe. This abstraction-
based model allows the design of a particular mobile security mechanism to focus on
a single specific area of concern without expending the resources required to analyze
all layers that support its current location within the stack.

Mobile Security - Infrastructure Layer


The infrastructure layer is the lowest and thus most supportive layer of the mobile
code security stack. This layer is the foundation that supports all of the other tiers of
the model. The majority of the functional components at this layer are owned and
operated by a mobile carrier or infrastructure provider; however, integration into the
handset occurs as data is transmitted from this tier upward.
Cellular voice and data carriers operate the infrastructure that carries all data and
voice communications from end point to end point. The security of components at
this level typically encompasses the protocols in use by the carriers and infrastructure
providers themselves. Examples of such protocols include code division multiple
access (CDMA), global system for mobile communications (GSM), the global
positioning system (GPS), the short message service (SMS) and the multimedia
messaging service (MMS). Due to the low foundational nature of this particular security tier,
flaws or vulnerabilities discovered at this tier are generally effective across multiple
platforms, multiple carriers and multiple handset providers.

Mobile Security - Hardware Layer


As we move up the stack to the second tier of the mobile code security stack, we are
moving into the realm of a physical unit that is typically under the direct control of an
end user. The hardware layer is identified by the individual end user premise
equipment, generally in the form of a smartphone or tablet style mobile device. The
hardware layer is accessible to the operating system, allowing for direct control of the
physical components of the unit. This hardware is generally called the “firmware” and
is upgraded by the physical manufacturer of the handset and occasionally delivered
by proxy through the phone carrier. Security flaws or vulnerabilities discovered at this
layer typically affect all end users who use a particular piece of hardware or individual
hardware component. If a hardware flaw is discovered in a single manufacturer’s
device, it is more than likely that all hardware revisions using that similar design
and/or chip will be affected as well.

Mobile Security - Operating System Layer


The third tier in the mobile code security stack is the operating system layer. This
layer corresponds to the software running on a device that allows communications
between the hardware and the application tiers. The operating system is periodically
updated with feature enhancements, patches and security fixes, which may or may
not coincide with patches made to the firmware by the physical handset
manufacturer. The operating system provides access to its resources via the
publishing of application programming interfaces. These resources are available to be
consumed by the application layer as it is the only layer higher in the stack than the
operating system itself. Simultaneously, the operating system communicates with the
hardware/firmware to run processes and pass data to and from the device.

Operating system flaws are a very common flaw type and currently tend to be the
target of choice for attackers that wish to have a high impact. If an operating system
flaw is discovered, the entire installed base of that particular operating system
revision will likely be vulnerable. It is at this layer, and above, where software is the
overriding enforcement mechanism for security. Specifically because software is
relied upon, the operating system, and the application layer above it, are the
most common locations where security flaws are discovered.

Mobile Security - Application Layer


The application tier resides at the top of the mobile security stack and is the layer
that the end user directly interfaces with. The application layer is identified by
running processes that utilize application programming interfaces provided by the
operating system layer as an entry point into the rest of the stack.

Application layer security flaws generally result from coding flaws in applications that
are either shipped with or installed onto a mobile device after deployment. These
flaws come in classes that are similar to the personal computing area. Buffer
overflows, insecure storage of sensitive data, improper cryptographic algorithms,
hardcoded passwords and backdoored applications are only a sample set of
application layer flaw classes. The result of exploitation of application layer security
flaws can range from elevated operating system privilege to exfiltration of sensitive
data.

How to Test for Mobile Code Security


When analyzing an individual device for security implications, take into account each
of the layers of the mobile code security stack and determine the effectiveness of the
security mechanisms that are in place. At each layer, determine what, if any, security
mechanisms and mitigations the manufacturer has implemented and if those
mechanisms are sufficient for the type of data you plan to store and access on the
device.

One of the control mechanisms for mobile code is called a sandbox environment. As
its name implies, a sandbox can be a “play” area where we can test certain pieces
of code to see if they are malicious. The sandbox provides a protective area for
program execution. Limits are placed on the amount of memory and processor
resources the program can consume in that sandbox environment. If the program
exceeds these limits, the web browser terminates the process and logs an error
code and ultimately does not allow the code to run.

This can ensure the safety of the browser’s activities. As an example, in the Java
sandbox security model, there is an option to provide an area for the Java code to
do what it needs to do, including restricting the bounds of this area. This is exactly
the idea of a sandbox. A sandbox cannot confine code and its behavior without
some type of enforcement mechanism. The Java security manager makes sure all
restricted code stays in the sandbox and cannot ultimately do anything outside of it.
Trusted code resides outside the sandbox, and untrusted code is confined within
the sandbox. By default, Java applications live outside the sandbox and Java applets
are confined within the sandbox.
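The resource-limiting idea behind a sandbox can be illustrated with the sketch below, which uses the POSIX resource module to cap CPU time and memory for an untrusted child process (Linux/Unix only); the script name is hypothetical, and a real sandbox adds far stronger isolation than this.

# Sketch: run an untrusted program with CPU and memory limits applied.
import resource
import subprocess

def limit_resources():
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                     # 2 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MB

proc = subprocess.run(
    ["python3", "untrusted_snippet.py"],     # hypothetical untrusted code
    preexec_fn=limit_resources,              # apply limits in the child only
    capture_output=True,
    timeout=10,
)
print(proc.returncode)                       # non-zero if the limits were hit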

You can read further via https://www.windowschimp.com/best-sandbox-software/

Reference https://en.wikipedia.org/wiki/Software_development_security

Security, as part of the software development process, is an ongoing process


involving people and practices, and ensures application confidentiality, integrity,
and availability. Secure software is the result of security aware software
development processes where security is built in and thus software is developed
with security in mind.[1]

Security is most effective if planned and managed throughout every stage


of software development life cycle (SDLC), especially in critical applications or those
that process sensitive information.
The solution to software development security is more than just the technology.

Software development challenges


As technology advances, application environments become more complex and
application development security becomes more challenging. Applications, systems,
and networks are constantly under various security attacks such as malicious
code or denial of service. Some of the challenges from the application development
security point of view include viruses, Trojan horses, logic bombs, worms, agents,
and applets.[2]
Applications can contain security vulnerabilities that may be introduced by software
engineers either intentionally or carelessly.
Software, environmental, and hardware controls are required although they cannot
prevent problems created from poor programming practice. Using limit and sequence
checks to validate users’ input will improve the quality of data. Even though
programmers may follow best practices, an application can still fail due to
unpredictable conditions and therefore should handle unexpected failures
successfully by first logging all the information it can capture in preparation for
auditing. As security increases, so does the relative cost and administrative overhead.
Applications are typically developed using high-level programming languages which
in themselves can have security implications. The core activities essential to the
software development process to produce secure applications and systems include:
conceptual definition, functional requirements, control specification, design review,
code review and walk-through, system test review, and maintenance and change
management.
Building secure software is not only the responsibility of a software engineer but also
the responsibility of the stakeholders which include: management, project managers,
business analysts, quality assurance managers, technical architects, security
specialists, application owners, and developers.

Basic principles
There are a number of basic guiding principles to software security. Stakeholders’
knowledge of these and how they may be implemented in software is vital to
software security. These include:
• Protection from disclosure
• Protection from alteration
• Protection from destruction
• Who is making the request
• What rights and privileges does the requester have
• Ability to build historical evidence
• Management of configuration, sessions and errors/exceptions

Basic practices
The following lists some of the recommended web security practices that are more
specific for software developers.
• Sanitize inputs at the client side and server side
• Encode request/response
• Use HTTPS for domain entries
• Use only current encryption and hashing algorithms
• Do not allow for directory listing
• Do not store sensitive data inside cookies
• Check the randomness of the session
• Set secure and HttpOnly flags in cookies (see the sketch after this list)
• Use TLS not SSL
• Set strong password policy
• Do not store sensitive information in a form’s hidden fields
• Verify file upload functionality
• Set secure response headers
• Make sure third party libraries are secured
• Hide web server information
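A few of the practices above are illustrated in the following sketch, assuming the Flask framework; the cookie name, token, and header values are example choices, not prescribed settings.

# Sketch: secure/HttpOnly/SameSite cookie flags and secure response headers.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("ok")
    # Secure + HttpOnly + SameSite cookie flags
    resp.set_cookie("session_id", "opaque-random-token",
                    secure=True, httponly=True, samesite="Strict")
    # Secure response headers
    resp.headers["Strict-Transport-Security"] = "max-age=31536000"
    resp.headers["X-Content-Type-Options"] = "nosniff"
    resp.headers["Content-Security-Policy"] = "default-src 'self'"
    return resp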

Security testing
Common attributes of security testing include authentication, authorization,
confidentiality, availability, integrity, non-repudiation, and resilience. Security testing
is essential to ensure that the system prevents unauthorized users to access its
resources and data. Some application data is sent over the internet which travels
through a series of servers and network devices. This gives ample opportunities to
unscrupulous hackers.
Summary
All secure systems implement security controls within
the software, hardware, systems, and networks - each component or process has a
layer of isolation to protect an organization's most valuable resource, which is its data.
There are various security controls that can be incorporated into an application's
development process to ensure security and prevent unauthorized access.

Information systems are becoming more distributed, with a substantial increase in
the use of open protocols, interfaces, and source code, as well as sharing of
resources. All of these elements require that all resources be protected against
unauthorized access and against threats to confidentiality, integrity, and
availability. Many of these safeguards are provided through software controls,
especially operating system mechanisms and application software controls. The
operating system must offer controls that protect the computer’s resources and so
must the application and system itself running on top of the operating system. In
addition, the relationship between applications and the operating system, and how
they communicate is also very important. Controls must be included in operating
systems so that applications cannot damage or circumvent the operating system
controls. And controls need to be designed and built into the application software
to protect the data that ultimately it processes. A lack of adequate software
protection mechanisms can leave the operating system and critical computer
resources open to corruption and attack and the sensitive data open to potential
disclosure, corruption, or unavailability.

The complexity of information systems today has also increased. Older computing
typically required an application to run on a specific machine, aside from the
hardwired functions resident in the CPU. Today, an application may be running on
architectures that involve the hardware platform, CPU microcode, virtual machine
server, operating system, network operating system, utilities, remote procedure calls,
object request broker, database and web servers, engine application, multiple
interface applications, interface utilities, API libraries, and multiple entities involved
in a remote client interface. In other words, the architecture itself, and the
components that make it up, has become much more complex. This ultimately
requires adequate protection of all entities and components that make up the
architecture.

While many of these levels have been added in the name of interoperability and
standardization, the complexity introduced does make addressing the security
requirements more difficult. Some of the main security requirements for applications
and databases are to ensure that only valid, authorized, and authenticated users can
access the sensitive information contained within the database environments and
the proper enforcement of the permissions related to use of the data. It may also be
required that the system or software provides some type of granularity for controlling
such permissions and that possibly encryption or other appropriate logical controls
are available for protecting the value of sensitive information. Other controls
required may include password protection and audit mechanisms that provide
assurance of the functional security controls.

Reference https://opensource.com/resources/what-open-source

The term "open source" refers to something people can modify and share because
its design is publicly accessible.
The term originated in the context of software development to designate a specific
approach to creating computer programs. Today, however, "open source"
designates a broader set of values—what we call "the open source way." Open
source projects, products, or initiatives embrace and celebrate principles of open
exchange, collaborative participation, rapid prototyping, transparency, meritocracy,
and community-oriented development.

What is open source software?


Open source software is software with source code that anyone can inspect,
modify, and enhance.
"Source code" is the part of software that most computer users don't ever see; it's
the code computer programmers can manipulate to change how a piece of
software—a "program" or "application"—works. Programmers who have access to a
computer program's source code can improve that program by adding features to it
or fixing parts that don't always work correctly.

What's the difference between open source software and other types of software?
Some software has source code that only the person, team, or organization who
created it—and maintains exclusive control over it—can modify. People call this kind
of software "proprietary" or "closed source" software.
Only the original authors of proprietary software can legally copy, inspect, and alter
that software. And in order to use proprietary software, computer users must agree
(usually by signing a license displayed the first time they run this software) that they
will not do anything with the software that the software's authors have not expressly
permitted. Microsoft Office and Adobe Photoshop are examples of proprietary
software.
Open source software is different. Its authors make its source code available to
others who would like to view that code, copy it, learn from it, alter it, or share
it. LibreOffice and the GNU Image Manipulation Program are examples of open
source software.
As they do with proprietary software, users must accept the terms of a license when
they use open source software—but the legal terms of open source licenses differ
dramatically from those of proprietary licenses.
Open source licenses affect the way people can use, study, modify, and
distribute software. In general, open source licenses grant computer users permission
to use open source software for any purpose they wish. Some open source licenses—
what some people call "copyleft" licenses—stipulate that anyone who releases a
modified open source program must also release the source code for that program
alongside it. Moreover, some open source licenses stipulate that anyone who alters
and shares a program with others must also share that program's source code
without charging a licensing fee for it.
By design, open source software licenses promote collaboration and sharing because
they permit other people to make modifications to source code and incorporate
those changes into their own projects. They encourage computer programmers to
access, view, and modify open source software whenever they like, as long as they let
others do the same when they share their work.
Is open source software only important to computer programmers?
No. Open source technology and open source thinking both benefit programmers
and non-programmers.
Because early inventors built much of the Internet itself on open source
technologies—like the Linux operating system and the Apache Web server
application—anyone using the Internet today benefits from open source software.
Every time computer users view web pages, check email, chat with friends, stream
music online, or play multiplayer video games, their computers, mobile phones, or
gaming consoles connect to a global network of computers using open source
software to route and transmit their data to the "local" devices they have in front of
them. The computers that do all this important work are typically located in faraway
places that users don't actually see or can't physically access—which is why some
people call these computers "remote computers."
More and more, people rely on remote computers when performing tasks they might
otherwise perform on their local devices. For example, they may use online word
processing, email management, and image editing software that they don't install
and run on their personal computers. Instead, they simply access these programs on
remote computers by using a Web browser or mobile phone application. When they
do this, they're engaged in "remote computing."
Some people call remote computing "cloud computing," because it involves activities
(like storing files, sharing photos, or watching videos) that incorporate not only local
devices but also a global network of remote computers that form an "atmosphere"
around them.
Cloud computing is an increasingly important aspect of everyday life with Internet-
connected devices. Some cloud computing applications, like Google Apps, are
proprietary. Others, like ownCloud and Nextcloud, are open source.
Cloud computing applications run "on top" of additional software that helps them
operate smoothly and efficiently, so people will often say that software running
"underneath" cloud computing applications acts as a "platform" for those
applications. Cloud computing platforms can be open source or closed
source. OpenStack is an example of an open source cloud computing platform.
Why do people prefer using open source software?
People prefer open source software to proprietary software for a number of reasons,
including:
Control. Many people prefer open source software because they have more
control over that kind of software. They can examine the code to make sure it's not
doing anything they don't want it to do, and they can change parts of it they don't
like. Users who aren't programmers also benefit from open source software, because
they can use this software for any purpose they wish—not merely the way someone
else thinks they should.
Training. Other people like open source software because it helps them become
better programmers. Because open source code is publicly accessible, students can
easily study it as they learn to make better software. Students can also share their
work with others, inviting comment and critique, as they develop their skills. When
people discover mistakes in programs' source code, they can share those mistakes
with others to help them avoid making those same mistakes themselves.
Security. Some people prefer open source software because they consider it
more secure and stable than proprietary software. Because anyone can view and
modify open source software, someone might spot and correct errors or omissions
that a program's original authors might have missed. And because so many
programmers can work on a piece of open source software without asking for
permission from original authors, they can fix, update, and upgrade open source
software more quickly than they can proprietary software.
Stability. Many users prefer open source software to proprietary software for
important, long-term projects. Because programmers publicly distribute the source
code for open source software, users relying on that software for critical tasks can be
sure their tools won't disappear or fall into disrepair if their original creators stop
working on them. Additionally, open source software tends to both incorporate and
operate according to open standards.
Doesn't "open source" just mean something is free of charge?
No. This is a common misconception about what "open source" implies, and the
concept's implications are not only economic.
Open source software programmers can charge money for the open source software
they create or to which they contribute. But in some cases, because an open source
license might require them to release their source code when they sell software to
others, some programmers find that charging users money for software services and
support (rather than for the software itself) is more lucrative. This way, their software
remains free of charge, and they make money helping others install, use, and
troubleshoot it.
While some open source software may be free of charge, skill in programming and
troubleshooting open source software can be quite valuable. Many employers
specifically seek to hire programmers with experience working on open source
software.
What is open source "beyond software"?
At Opensource.com, we like to say that we're interested in the ways open source
values and principles apply to the world beyond software. We like to think of open
source as not only a way to develop and license computer software, but also
an attitude.
Approaching all aspects of life "the open source way" means expressing a willingness
to share, collaborating with others in ways that are transparent (so that others can
watch and join too), embracing failure as a means of improving, and expecting—even
encouraging—everyone else to do the same.
It also means committing to playing an active role in improving the world, which is
possible only when everyone has access to the way that world is designed.
The world is full of "source code"—blueprints, recipes, rules—that guide and shape
the way we think and act in it. We believe this underlying code (whatever its form)
should be open, accessible, and shared—so many people can have a hand in altering
it for the better.
Here, we tell stories about the impact of open source values on all areas of life—
science, education, government, manufacturing, health, law, and organizational
dynamics. We're a community committed to telling others how the open source way
is the best way, because a love of open source is just like anything else: it's better
when it's shared.
Where can I learn more about open source?
We've compiled several resources designed to help you learn more about open
source. We recommend you read our open source FAQs, how-to guides, and
tutorials to get started.

Reference https://en.wikipedia.org/wiki/Data_warehouse

In computing, a data warehouse (DW or DWH), also known as an enterprise data


warehouse (EDW), is a system used for reporting and data analysis, and is
considered a core component of business intelligence.[1] DWs are central
repositories of integrated data from one or more disparate sources. They store
current and historical data in one single place[2] that are used for creating analytical
reports for workers throughout the enterprise.[3]

The data stored in the warehouse is uploaded from the operational systems (such
as marketing or sales). The data may pass through an operational data store and
may require data cleansing[2] for additional operations to ensure data quality before
it is used in the DW for reporting.

Extract, transform, load (ETL) and Extract, load, transform (E-LT) are the two main
approaches used to build a data warehouse system.

ETL based Data warehousing
The typical extract, transform, load (ETL)-based data warehouse[4] uses staging, data
integration, and access layers to house its key functions. The staging layer or staging
database stores raw data extracted from each of the disparate source data systems.
The integration layer integrates the disparate data sets by transforming the data from
the staging layer often storing this transformed data in an operational data
store (ODS) database. The integrated data are then moved to yet another database,
often called the data warehouse database, where the data is arranged into
hierarchical groups, often called dimensions, and into facts and aggregate facts. The
combination of facts and dimensions is sometimes called a star schema. The access
layer helps users retrieve data.[5]
The main source of the data is cleansed, transformed, catalogued, and made
available for use by managers and other business professionals for data
mining, online analytical processing, market research and decision
support.[6] However, the means to retrieve and analyze data, to extract, transform,
and load data, and to manage the data dictionary are also considered essential
components of a data warehousing system. Many references to data warehousing
use this broader context. Thus, an expanded definition for data warehousing
includes business intelligence tools, tools to extract, transform, and load data into the
repository, and tools to manage and retrieve metadata.

IBM InfoSphere DataStage, Ab Initio Software, and Informatica PowerCenter are some
of the tools widely used to implement an ETL-based data warehouse.
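
As a minimal, hypothetical sketch of the staging, integration, and access layers
described above (using Python's built-in sqlite3 module; all table and column names
are invented for illustration):

import sqlite3

# Hypothetical example: one in-memory database stands in for the source
# system, the staging area, and the warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source (operational) data, e.g. a sales system.
cur.execute("CREATE TABLE src_sales (id INTEGER, amount TEXT, region TEXT)")
cur.executemany("INSERT INTO src_sales VALUES (?, ?, ?)",
                [(1, "100.50", " north "), (2, "75.25", "SOUTH"), (3, None, "north")])

# Extract: pull raw rows into a staging table without modification.
cur.execute("CREATE TABLE stg_sales AS SELECT * FROM src_sales")

# Transform and integrate: cleanse the staged data (types, trimming, bad rows).
cur.execute("CREATE TABLE dw_sales (id INTEGER, amount REAL, region TEXT)")
cur.execute("""
    INSERT INTO dw_sales
    SELECT id, CAST(amount AS REAL), LOWER(TRIM(region))
    FROM stg_sales
    WHERE amount IS NOT NULL
""")
conn.commit()

# Access layer: analysts query the warehouse table.
for row in cur.execute("SELECT region, SUM(amount) FROM dw_sales GROUP BY region"):
    print(row)

A real ETL tool adds scheduling, error handling, and metadata management around
the same basic flow.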

ELT based Data warehousing


ELT based data warehousing gets rid of a separate ETL tool for data transformation.
Instead, it maintains a staging area inside the data warehouse itself. In this approach,
data is extracted from heterogeneous source systems and then directly loaded
into the data warehouse, before any transformation occurs. All necessary
transformations are then handled inside the data warehouse itself. Finally, the
manipulated data gets loaded into target tables in the same data warehouse.
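
By contrast, a minimal ELT-style sketch (again hypothetical, with sqlite3 standing in
for the warehouse engine) loads the raw extract first and performs the
transformation inside the warehouse with SQL:

import sqlite3

dw = sqlite3.connect(":memory:")  # the warehouse itself holds the staging area
dw.execute("CREATE TABLE raw_orders (order_id INTEGER, qty TEXT)")
# Load: raw, untransformed rows go straight into the warehouse staging table.
dw.executemany("INSERT INTO raw_orders VALUES (?, ?)", [(1, "3"), (2, "5")])

# Transform: done inside the warehouse engine, after loading.
dw.execute("""
    CREATE TABLE orders AS
    SELECT order_id, CAST(qty AS INTEGER) AS qty FROM raw_orders
""")
print(dw.execute("SELECT SUM(qty) FROM orders").fetchone())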

Benefits
A data warehouse maintains a copy of information from the source transaction
systems. This architectural complexity provides the opportunity to:
• Integrate data from multiple sources into a single database and data model, so that
a single query engine can be used to present data in an ODS.
• Mitigate the problem of database isolation level lock contention in transaction
processing systems caused by attempts to run large, long-running analysis queries in
transaction processing databases.
• Maintain data history, even if the source transaction systems do not.
• Integrate data from multiple source systems, enabling a central view across the
enterprise. This benefit is always valuable, but particularly so when the organization
has grown by merger.
• Improve data quality, by providing consistent codes and descriptions, and flagging
or even fixing bad data.
• Present the organization's information consistently.
• Provide a single common data model for all data of interest regardless of the data's
source.
• Restructure the data so that it makes sense to the business users.
• Restructure the data so that it delivers excellent query performance, even for
complex analytic queries, without impacting the operational systems.
• Add value to operational business applications, notably customer relationship
management (CRM) systems.
• Make decision-support queries easier to write.
• Organize and disambiguate repetitive data.
Generic
The environment for data warehouses and marts includes the following:
• Source systems that provide data to the warehouse or mart;
• Data integration technology and processes that are needed to prepare the data for
use;
• Different architectures for storing data in an organization's data warehouse or data
marts;
• Different tools and applications for the variety of users;
• Metadata, data quality, and governance processes must be in place to ensure that
the warehouse or mart meets its purposes.
In regards to source systems listed above, R. Kelly Rainer states, "A common source
for the data in data warehouses is the company's operational databases, which can
be relational databases".[7]
Regarding data integration, Rainer states, "It is necessary to extract data from source
systems, transform them, and load them into a data mart or warehouse".[7]
Rainer discusses storing data in an organization's data warehouse or data marts.[7]
Metadata is data about data. "IT personnel need information about data sources;
database, table, and column names; refresh schedules; and data usage measures".[7]
Today, the most successful companies are those that can respond quickly and flexibly
to market changes and opportunities. A key to this response is the effective and
efficient use of data and information by analysts and managers.[7] A "data
warehouse" is a repository of historical data that is organized by subject to support
decision makers in the organization.[7] Once data is stored in a data mart or
warehouse, it can be accessed.

Related systems (data mart, OLAP, OLTP, predictive analytics)
A data mart is a simple form of a data warehouse that is focused on a single subject
(or functional area); hence it draws data from a limited number of sources such as
sales, finance or marketing. Data marts are often built and controlled by a single
department within an organization. The sources could be internal operational
systems, a central data warehouse, or external data.[8] Denormalization is the norm
for data modeling techniques in this system. Given that data marts generally cover
only a subset of the data contained in a data warehouse, they are often easier and
faster to implement.

Reference https://en.wikipedia.org/wiki/Database_model

A database model is a type of data model that determines the logical structure of
a database and fundamentally determines in which manner data can be stored,
organized and manipulated. The most popular example of a database model is
the relational model, which uses a table-based format.

Examples
Common logical data models for databases include:
• Hierarchical database model – the oldest form of database model, developed by IBM
for IMS (Information Management System). Data is organized in a tree structure: a
database record is a tree consisting of groups called segments, related through
one-to-many relationships, which makes data access predictable.
• Network model
• Relational model
• Entity–relationship model
• Enhanced entity–relationship model
• Object model
• Document model
• Entity–attribute–value model
• Star schema
An object-relational database combines the two related structures.
Physical data models include:
• Inverted index
• Flat file
Other models include:
• Associative model
• Correlational model
• Multidimensional model
• Multivalue model
• Semantic model
• XML database
• Named graph
• Triplestore
Relationships and functions
A given database management system may provide one or more models. The optimal
structure depends on the natural organization of the application's data, and on the
application's requirements, which include transaction rate (speed), reliability,
maintainability, scalability, and cost. Most database management systems are built
around one particular data model, although it is possible for products to offer
support for more than one model.
Various physical data models can implement any given logical model. Most database
software will offer the user some level of control in tuning the physical
implementation, since the choices that are made have a significant effect on
performance.
A model is not just a way of structuring data: it also defines a set of operations that
can be performed on the data.[1] The relational model, for example, defines
operations such as select (project) and join. Although these operations may not be
explicit in a particular query language, they provide the foundation on which a query
language is built.

Flat model
The flat (or table) model consists of a single, two-dimensional array of data elements,
where all members of a given column are assumed to be similar values, and all
members of a row are assumed to be related to one another. For instance, columns
for name and password that might be used as a part of a system security database.
Each row would have the specific password associated with an individual user.
Columns of the table often have a type associated with them, defining them as
character data, date or time information, integers, or floating point numbers. This
tabular format is a precursor to the relational model.
Early data models
These models were popular in the 1960s, 1970s, but nowadays can be found
primarily in old legacy systems. They are characterized primarily by
being navigational with strong connections between their logical and physical
representations, and deficiencies in data independence.

Hierarchical model
In a hierarchical model, data is organized into a tree-like structure, implying a single
parent for each record. A sort field keeps sibling records in a particular order.
Hierarchical structures were widely used in the early mainframe database
management systems, such as the Information Management System (IMS) by IBM,
and now describe the structure of XML documents. This structure allows one-to-
many relationships between two types of data. This structure is very efficient at
describing many relationships in the real world: recipes, tables of contents, ordering
of paragraphs/verses, and any nested and sorted information.

This hierarchy is used as the physical order of records in storage. Record access is
done by navigating downward through the data structure using pointers combined
with sequential accessing. Because of this, the hierarchical structure is inefficient for
certain database operations when a full path (as opposed to upward link and sort
field) is not also included for each record. Such limitations have been compensated
for in later IMS versions by additional logical hierarchies imposed on the base
physical hierarchy.
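
As a small illustration (not drawn from IMS itself), a hierarchical record can be
pictured as a tree of segments with a single parent per record, navigated from the
root downward:

# Hypothetical hierarchical record: one root segment with child segments.
customer = {
    "name": "Acme Ltd",
    "orders": [                      # one-to-many: customer -> orders
        {"order_id": 1, "lines": [   # one-to-many: order -> line items
            {"item": "bolt", "qty": 100},
            {"item": "nut", "qty": 100},
        ]},
        {"order_id": 2, "lines": [{"item": "washer", "qty": 50}]},
    ],
}

# Access is navigational: walk from the root down through each level.
for order in customer["orders"]:
    for line in order["lines"]:
        print(order["order_id"], line["item"], line["qty"])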

Network model
The network model expands upon the hierarchical structure, allowing many-to-many
relationships in a tree-like structure that allows multiple parents. It was most popular
before being replaced by the relational model, and is defined by
the CODASYL specification.

The network model organizes data using two fundamental concepts,
called records and sets. Records contain fields (which may be organized hierarchically,
as in the programming language COBOL). Sets (not to be confused with mathematical
sets) define one-to-many relationships between records: one owner, many members.
A record may be an owner in any number of sets, and a member in any number of
sets.
A set consists of circular linked lists where one record type, the set owner or parent,
appears once in each circle, and a second record type, the subordinate or child, may
appear multiple times in each circle. In this way a hierarchy may be established
between any two record types, e.g., type A is the owner of B. At the same time
another set may be defined where B is the owner of A. Thus all the sets comprise a
general directed graph (ownership defines a direction), or network construct. Access
to records is either sequential (usually in each record type) or by navigation in the
circular linked lists.
The network model is able to represent redundancy in data more efficiently than in
the hierarchical model, and there can be more than one path from an ancestor node
to a descendant. The operations of the network model are navigational in style: a
program maintains a current position, and navigates from one record to another by
following the relationships in which the record participates. Records can also be
located by supplying key values.
Although it is not an essential feature of the model, network databases generally
implement the set relationships by means of pointers that directly address the
location of a record on disk. This gives excellent retrieval performance, at the
expense of operations such as database loading and reorganization.
Popular DBMS products that utilized it were Cincom Systems' Total
and Cullinet's IDMS. IDMS gained a considerable customer base; in the 1980s, it
adopted the relational model and SQL in addition to its original tools and languages.
Most object databases (invented in the 1990s) use the navigational concept to
provide fast navigation across networks of objects, generally using object identifiers
as "smart" pointers to related objects. Objectivity/DB, for instance, implements
named one-to-one, one-to-many, many-to-one, and many-to-many named
relationships that can cross databases. Many object databases also support SQL,
combining the strengths of both models.

Inverted file model
In an inverted file or inverted index, the contents of the data are used as keys in a
lookup table, and the values in the table are pointers to the location of each instance
of a given content item. This is also the logical structure of contemporary database
indexes, which might only use the contents from particular columns in the lookup
table. The inverted file data model can put indexes in a set of files next to existing flat
database files, in order to efficiently and directly access needed records in these files.
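
A toy Python sketch of the idea, with invented data: the content values become the
keys, and the stored values point back to the records that contain them:

# Records keyed by record id, as they might sit in a flat file.
records = {
    1: "alpha beta",
    2: "beta gamma",
    3: "alpha gamma",
}

# Build the inverted index: each content term maps to the ids holding it.
inverted = {}
for rec_id, text in records.items():
    for term in text.split():
        inverted.setdefault(term, []).append(rec_id)

print(inverted["beta"])   # -> [1, 2]: direct lookup instead of scanning records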

Notable for using this data model is the ADABAS DBMS of Software AG, introduced in
1970. ADABAS gained a considerable customer base and is still supported today. In
the 1980s it adopted the relational model and SQL in addition to its original tools and
languages.

The document-oriented database Clusterpoint, for example, uses the inverted
indexing model to provide fast full-text search for XML or JSON data objects.
Relational model
The relational model was introduced by E.F. Codd in 1970[2] as a way to make
database management systems more independent of any particular application. It is
a mathematical model defined in terms of predicate logic and set theory, and
implementations of it have been used by mainframe, midrange and microcomputer
systems.
The products that are generally referred to as relational databases in fact implement
a model that is only an approximation to the mathematical model defined by Codd.
Three key terms are used extensively in relational database
models: relations, attributes, and domains. A relation is a table with columns and
rows. The named columns of the relation are called attributes, and the domain is the
set of values the attributes are allowed to take.
The basic data structure of the relational model is the table, where information about
a particular entity (say, an employee) is represented in rows (also called tuples) and
columns. Thus, the "relation" in "relational database" refers to the various tables in
the database; a relation is a set of tuples. The columns enumerate the various
attributes of the entity (the employee's name, address or phone number, for
example), and a row is an actual instance of the entity (a specific employee) that is
represented by the relation. As a result, each tuple of the employee table represents
various attributes of a single employee.
All relations (and, thus, tables) in a relational database have to adhere to some basic
rules to qualify as relations. First, the ordering of columns is immaterial in a table.
Second, there can't be identical tuples or rows in a table. And third, each tuple will
contain a single value for each of its attributes.

A relational database contains multiple tables, each similar to the one in the "flat"
database model. One of the strengths of the relational model is that, in principle, any
value occurring in two different records (belonging to the same table or to different
tables), implies a relationship among those two records. Yet, in order to enforce
explicit integrity constraints, relationships between records in tables can also be
defined explicitly, by identifying or non-identifying parent-child relationships
characterized by assigning cardinality (1:1, (0)1:M, M:M). Tables can also have a
designated single attribute or a set of attributes that can act as a "key", which can be
used to uniquely identify each tuple in the table.
A key that can be used to uniquely identify a row in a table is called a primary key.
Keys are commonly used to join or combine data from two or more tables. For
example, an Employee table may contain a column named Location which contains a
value that matches the key of a Location table. Keys are also critical in the creation of
indexes, which facilitate fast retrieval of data from large tables. Any column can be a
key, or multiple columns can be grouped together into a compound key. It is not
necessary to define all the keys in advance; a column can be used as a key even if it
was not originally intended to be one.
A key that has an external, real-world meaning (such as a person's name, a
book's ISBN, or a car's serial number) is sometimes called a "natural" key. If no
natural key is suitable (think of the many people named Brown), an arbitrary or
surrogate key can be assigned (such as by giving employees ID numbers). In practice,
most databases have both generated and natural keys, because generated keys can
be used internally to create links between rows that cannot break, while natural keys
can be used, less reliably, for searches and for integration with other databases. (For
example, records in two independently developed databases could be matched up
by social security number, except when the social security numbers are incorrect,
missing, or have changed.)
The most common query language used with the relational model is the Structured
Query Language (SQL).
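
The following minimal sqlite3 sketch (with hypothetical tables mirroring the
Employee/Location example above) shows a primary key being referenced by another
table and then used to join the two:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Location's primary key is referenced by Employee.location_id (a foreign key).
cur.execute("CREATE TABLE Location (id INTEGER PRIMARY KEY, city TEXT)")
cur.execute("""CREATE TABLE Employee (
                   id INTEGER PRIMARY KEY,      -- surrogate key
                   name TEXT,                   -- natural attribute
                   location_id INTEGER REFERENCES Location(id))""")
cur.execute("INSERT INTO Location VALUES (10, 'Lahore')")
cur.execute("INSERT INTO Employee VALUES (1, 'A. Khan', 10)")

# The key relationship lets the two tables be joined in a query.
cur.execute("""SELECT Employee.name, Location.city
               FROM Employee JOIN Location ON Employee.location_id = Location.id""")
print(cur.fetchall())   # -> [('A. Khan', 'Lahore')]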

Dimensional model
The dimensional model is a specialized adaptation of the relational model used to
represent data in data warehouses in a way that data can be easily summarized using
online analytical processing, or OLAP queries. In the dimensional model, a database
schema consists of a single large table of facts that are described using dimensions
and measures. A dimension provides the context of a fact (such as who participated,
when and where it happened, and its type) and is used in queries to group related
facts together. Dimensions tend to be discrete and are often hierarchical; for
example, the location might include the building, state, and country. A measure is a
quantity describing the fact, such as revenue. It is important that measures can be
meaningfully aggregated—for example, the revenue from different locations can be
added together.
In an OLAP query, dimensions are chosen and the facts are grouped and aggregated
together to create a summary.
The dimensional model is often implemented on top of the relational model using
a star schema, consisting of one highly normalized table containing the facts, and
surrounding denormalized tables containing each dimension. An alternative physical
implementation, called a snowflake schema, normalizes multi-level hierarchies within
a dimension into multiple tables.
A data warehouse can contain multiple dimensional schemas that share dimension
tables, allowing them to be used together. Coming up with a standard set of
dimensions is an important part of dimensional modeling.

Its high performance has made the dimensional model the most popular database
structure for OLAP.
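
A minimal star-schema sketch in Python's sqlite3 (all names hypothetical) shows a
fact table joined to its dimensions and aggregated in OLAP style:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables give context to the facts.
cur.execute("CREATE TABLE dim_store (store_id INTEGER PRIMARY KEY, country TEXT)")
cur.execute("CREATE TABLE dim_date  (date_id  INTEGER PRIMARY KEY, year INTEGER)")
# The fact table holds the measures plus foreign keys to each dimension.
cur.execute("""CREATE TABLE fact_sales (
                   store_id INTEGER, date_id INTEGER, revenue REAL)""")

cur.execute("INSERT INTO dim_store VALUES (1, 'PK'), (2, 'AE')")
cur.execute("INSERT INTO dim_date  VALUES (1, 2018), (2, 2019)")
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 1, 100.0), (1, 2, 150.0), (2, 2, 80.0)])

# A typical OLAP-style query: group the facts by chosen dimensions and aggregate.
cur.execute("""SELECT d.year, s.country, SUM(f.revenue)
               FROM fact_sales f
               JOIN dim_date d  ON f.date_id  = d.date_id
               JOIN dim_store s ON f.store_id = s.store_id
               GROUP BY d.year, s.country""")
print(cur.fetchall())
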
Post-relational database models
Products offering a more general data model than the relational model are
sometimes classified as post-relational.[3] Alternate terms include "hybrid database",
"Object-enhanced RDBMS" and others. The data model in such products
incorporates relations but is not constrained by E.F. Codd's Information Principle,
which requires that
all information in the database must be cast explicitly in terms of values in relations and in no
other way
— [4]
Some of these extensions to the relational model integrate concepts from
technologies that pre-date the relational model. For example, they allow
representation of a directed graph with trees on the nodes. The German
company sones implements this concept in its GraphDB.
Some post-relational products extend relational systems with non-relational features.
Others arrived in much the same place by adding relational features to pre-relational
systems. Paradoxically, this allows products that are historically pre-relational, such
as PICK and MUMPS, to make a plausible claim to be post-relational.
The resource space model (RSM) is a non-relational data model based on multi-
dimensional classification.[5]

Graph model
Graph databases allow even more general structure than a network database; any
node may be connected to any other node.

Multivalue model
Multivalue databases hold "lumpy" data, in that they can store data exactly the same way
as relational databases, but they also permit a level of depth which the relational
model can only approximate using sub-tables. This is nearly identical to the way XML
expresses data, where a given field/attribute can have multiple right answers at the
same time. Multivalue can be thought of as a compressed form of XML.
An example is an invoice, which in either multivalue or relational data could be seen
as (A) Invoice Header Table - one entry per invoice, and (B) Invoice Detail Table - one
entry per line item. In the multivalue model, we have the option of storing the data
as one table, with an embedded table to represent the detail: (A) Invoice Table - one
entry per invoice, no other tables needed.
The advantage is that the atomicity of the Invoice (conceptual) and the Invoice (data
representation) are one-to-one. This also results in fewer reads, less referential
integrity issues, and a dramatic decrease in the hardware needed to support a given
transaction volume.

Object-oriented database models
In the 1990s, the object-oriented programming paradigm was applied to database
technology, creating a new database model known as object databases. This aims to
avoid the object-relational impedance mismatch - the overhead of converting
information between its representation in the database (for example as rows in
tables) and its representation in the application program (typically as objects). Even
further, the type system used in a particular application can be defined directly in the
database, allowing the database to enforce the same data integrity invariants. Object
databases also introduce the key ideas of object programming, such
as encapsulation and polymorphism, into the world of databases.

A variety of ways have been tried for storing objects in a database.
Some products have approached the problem from the application
programming end, by making the objects manipulated by the program persistent.
This typically requires the addition of some kind of query language, since
conventional programming languages do not have the ability to find objects based on
their information content. Others have attacked the problem from the database
end, by defining an object-oriented data model for the database, and defining a
database programming language that allows full programming capabilities as well as
traditional query facilities.
Object databases suffered because of a lack of standardization: although standards
were defined by ODMG, they were never implemented well enough to ensure
interoperability between products. Nevertheless, object databases have been used
successfully in many applications: usually specialized applications such as engineering
databases or molecular biology databases rather than mainstream commercial data
processing. However, object database ideas were picked up by the relational vendors
and influenced extensions made to these products and indeed to the SQL language.

An alternative to translating between objects and relational databases is to use
an object-relational mapping (ORM) library.

Reference https://en.wikipedia.org/wiki/SQL

SQL (/ˌɛsˌkjuːˈɛl/ S-Q-L,[4] /ˈsiːkwəl/ "sequel"; Structured Query
Language)[5][6][7] is a domain-specific language used in programming and designed
for managing data held in a relational database management system (RDBMS), or
for stream processing in a relational data stream management system (RDSMS). It is
particularly useful in handling structured data, i.e. data incorporating relations
among entities and variables.
SQL offers two main advantages over older read–write APIs such as ISAM or VSAM.
Firstly, it introduced the concept of accessing many records with one single
command. Secondly, it eliminates the need to specify how to reach a record, e.g.
with or without an index.

Originally based upon relational algebra and tuple relational calculus, SQL consists
of many types of statements,[8] which may be informally classed as sublanguages,
commonly: a data query language (DQL),[a] a data definition
language (DDL),[b] a data control language (DCL), and a data manipulation
language (DML).[c][9] The scope of SQL includes data query, data manipulation
(insert, update and delete), data definition (schema creation and modification), and
data access control. Although SQL is essentially a declarative language (4GL), it
includes also procedural elements.
SQL was one of the first commercial languages to utilize Edgar F. Codd’s relational
model. The model was described in his influential 1970 paper, "A Relational Model of
Data for Large Shared Data Banks".[10] Despite not entirely adhering to the relational
model as described by Codd, it became the most widely used database
language.[11][12]
SQL became a standard of the American National Standards Institute (ANSI) in 1986,
and of the International Organization for Standardization (ISO) in 1987.[13] Since then,
the standard has been revised to include a larger set of features. Despite the
existence of such standards, most SQL code is not completely portable among
different database systems without adjustments.

History
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F.
Boyce after learning about the relational model from Ted Codd[14] in the early
1970s.[15] This version, initially called SEQUEL (Structured English Query Language),
was designed to manipulate and retrieve data stored in IBM's original quasi-relational
database management system, System R, which a group at IBM San Jose Research
Laboratory had developed during the 1970s.[15]

Chamberlin and Boyce's first attempt at a relational database language was Square,
but it was difficult to use due to its subscript notation. After moving to the San Jose
Research Laboratory in 1973, they began work on SEQUEL.[14] The acronym SEQUEL
was later changed to SQL because "SEQUEL" was a trademark of the UK-
based Hawker Siddeley Dynamics Engineering Limited company.[16]
After testing SQL at customer test sites to determine the usefulness and practicality
of the system, IBM began developing commercial products based on their System R
prototype including System/38, SQL/DS, and DB2, which were commercially available
in 1979, 1981, and 1983, respectively.[17]

In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the
potential of the concepts described by Codd, Chamberlin, and Boyce, and developed
their own SQL-based RDBMS with aspirations of selling it to the U.S. Navy, Central
Intelligence Agency, and other U.S. government agencies. In June 1979, Relational
Software, Inc. introduced the first commercially available implementation of
SQL, Oracle V2 (Version2) for VAX computers.

By 1986, ANSI and ISO standard groups officially adopted the standard "Database
Language SQL" language definition. New versions of the standard were published in
1989, 1992, 1996, 1999, 2003, 2006, 2008, 2011[14] and, most recently, 2016.
Design
SQL deviates in several ways from its theoretical foundation, the relational model and
its tuple calculus. In that model, a table is a set of tuples, while in SQL, tables and
query results are lists of rows: the same row may occur multiple times, and the order
of rows can be employed in queries (e.g. in the LIMIT clause).
Critics argue that SQL should be replaced with a language that returns strictly to the
original foundation: for example, see The Third Manifesto. However, no known proof
exists that such uniqueness cannot be added to SQL itself, or at least a
variation of SQL. In other words, it's quite possible that SQL can be "fixed" or at least
improved in this regard such that the industry may not have to switch to a completely
different query language to obtain uniqueness. Debate on this remains open.

Syntax
The SQL language elements that compose a single statement can be illustrated with
the statement:
UPDATE country SET population = population + 1 WHERE name = 'USA';
Here, UPDATE country is the UPDATE clause, SET population = ... is the SET clause in
which population + 1 is an expression, and the WHERE clause contains the predicate
name = 'USA' that limits which rows the statement affects.
The SQL language is subdivided into several language elements, including:
Clauses, which are constituent components of statements and queries. (In some
cases, these are optional.)[18]
Expressions, which can produce either scalar values, or tables consisting
of columns and rows of data
Predicates, which specify conditions that can be evaluated to SQL three-valued logic
(3VL) (true/false/unknown) or Boolean truth values and are used to limit the effects
of statements and queries, or to change program flow.

Queries, which retrieve the data based on specific criteria. This is an important
element of SQL.

Statements, which may have a persistent effect on schemata and data, or may
control transactions, program flow, connections, sessions, or diagnostics.
SQL statements also include the semicolon (";") statement terminator. Though
not required on every platform, it is defined as a standard part of the SQL
grammar.

Insignificant whitespace is generally ignored in SQL statements and queries, making it
easier to format SQL code for readability.

SQL actually consists of these three sublanguages:

The Data Definition Language (DDL) is used to create databases, tables, views, and
keys specifying the links between tables. Because it is administrative in nature,
users of SQL rarely use DDL commands as they should be restricted to database
administrators.

DDL also has nothing to do with the population of the database, which is
accomplished by Data Manipulation Language (DML),
used to query and extract data, insert new records, delete old records, and update
existing records.

System and database administrators utilize Data Control Language (DCL) to control
access to data. It provides the security
control aspects of SQL and should be the security professional’s area of concern.

DDL – Data Definition Language
Used to define database objects such as TABLE, VIEW, SEQUENCE, INDEX, and
SYNONYM, and to create, modify, or remove them.
CREATE, ALTER, DROP, TRUNCATE, and RENAME are the DDL commands.

DML – Data Manipulation Language
Used to manipulate the data in database objects such as tables, views, and indexes.
INSERT, UPDATE, and DELETE are the DML commands.

DRL/DQL – Data Retrieval Language/Data Query Language
Used to retrieve information from database objects; it is for read-only purposes.
SELECT is the DRL/DQL command.

TCL – Transaction Control Language
Transaction control statements are used to apply changes permanently to the
database or to undo them.
COMMIT, ROLLBACK, SAVEPOINT, and ROLLBACK TO are the TCL commands.

DCL – Data Control Language
Data control statements are used to grant privileges to access limited data or to
share information between users.
GRANT, REVOKE, AUDIT, COMMENT, and ANALYZE are the DCL commands.

SCL – Session Control Language
Session control statements dynamically manage the properties of a user session.
ALTER SESSION and SET ROLE are the SCL commands.

Partial reference https://www.interviewsansar.com/2018/11/16/sql-sub-languages/
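
A short sqlite3 sketch ties the sublanguages together (SQLite does not implement
DCL, so the GRANT statement is shown only as text; all names are hypothetical):

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the object.
cur.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")

# DML: populate and change the data.
cur.execute("INSERT INTO account VALUES (1, 500.0)")
cur.execute("UPDATE account SET balance = balance - 100 WHERE id = 1")

# TCL: make the changes permanent (or roll them back).
conn.commit()

# DQL/DRL: read-only retrieval.
print(cur.execute("SELECT id, balance FROM account").fetchall())

# DCL: SQLite has no GRANT/REVOKE; in an enterprise DBMS a DBA would issue
# something like the following to control access (shown here only as text).
dcl_example = "GRANT SELECT ON account TO reporting_role"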

The object-oriented (OO) database model is one of the newest database models. It
is very similar to OOP languages, and as such, the OO database model stores data as
objects. The objects are a collection of public and private data elements and the set
of operations that can be executed on those data elements. Because the data
objects contain their own operations, any call to data potentially has the full range
of database functions available, and therefore, must be secured properly. Because
of the nature of objects being the driver in this model, the OO model does not
necessarily require a high-level language, such as SQL, because the functions are
contained within the objects themselves. An advantage of not having a query
language is that the OO DBMS can interact with applications without the language
overhead; there is no need for a language in between.

A natural evolution of the above DBMS models has seen relational models being
used together with OO functions and interfaces to
create what is called an object-relational model. This is basically a hybrid model,
taking the advantages of each, relational and OO.
The hybrid model allows organizations to maintain their current relational database
software and, at the same time, provide an upgrade path for future technologies by
supporting the OO capabilities.

The existence of legacy databases has proven a difficult challenge for managing new
database access requirements. To provide an interface that combines newer
systems and legacy systems that are still being used by many organizations, several
standardized access methods have evolved. These are referred to as Database
Interface Languages, and some of them include the following:

• Open Database Connectivity (ODBC)


• Java Database Connectivity (JDBC)
• Extensible Markup Language (XML)
• Object Linking and Embedding Database (OLE DB)
• ActiveX Data Objects (ADO)

The purpose of all of these languages is to provide a gateway to the data contained
in the legacy systems as well as the newer database systems.

ODBC is considered to be the dominant means of standardized data access. It was
developed and is maintained by Microsoft, and most database vendors use it as an
interface method to allow an application to communicate with a database either
locally or remotely over a network. It is essentially an API that is used to
provide a connection between applications and databases. It was designed so that
databases could connect without having to use specific database commands and
features. It acts as the middle component that facilitates access between
applications and databases.
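
A hedged sketch of what an ODBC call looks like from application code, assuming the
third-party pyodbc package and an already-configured data source name; the DSN,
credentials, and table are hypothetical:

# Sketch only: assumes the third-party pyodbc package is installed and that an
# ODBC driver and a data source named "HRSystem" have already been configured.
import pyodbc

conn = pyodbc.connect("DSN=HRSystem;UID=app_user;PWD=example")  # hypothetical DSN
cur = conn.cursor()

# The application issues ODBC calls; the driver translates them into the
# commands required by whichever DBMS sits behind the DSN.
cur.execute("SELECT name FROM employees WHERE dept = ?", ("audit",))
for row in cur.fetchall():
    print(row.name)

conn.close()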

ODBC commands are used in application programs, which then translate them into
the commands required by the specific database system. This allows programs to be
linked to any DBMS with a minimum of code changes. It allows users to
specify which database is being used and can be easily updated as new database
technologies enter the market. ODBC is considered to be a very powerful tool.
However, because it needs to operate as a system entity, it has vulnerabilities that
can be exploited. The following is a discussion of some of the ODBC security issues.

ODBC Security Issues

• The username and password for the database are stored in plaintext. To prevent
disclosure of this information, the files need to be protected. For example, if an
HTML document was calling an ODBC data source, the HTML source must be
protected to ensure that the username and password in plaintext cannot be read.

• The HTML should call a common gateway interface (CGI) that has the
authentication details because HTML can be viewed in a browser.

• The returned data is sent as clear text over the network.

• Verification of the access level of the user using the ODBC application may be
inadequate in some cases.

• Calling applications must be checked to ensure they do not attempt to combine
data from multiple data sources, thus allowing data aggregation that may lead to
unauthorized inference.

• Every calling application or API must be checked properly to ensure it does not
attempt to exploit the ODBC drivers and somehow gain elevated system access.
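
A small sketch of one mitigation for the first issues above: keep credentials out of
source files and pages by reading them from the environment at run time (the
variable names, DSN, and Encrypt attribute are hypothetical and driver-dependent):

import os

# Keep the database username and password out of source files, HTML pages,
# and DSN definitions: read them from the environment (or a secrets manager).
uid = os.environ.get("DB_USER")
pwd = os.environ.get("DB_PASSWORD")
if not uid or not pwd:
    raise RuntimeError("Database credentials are not configured")

# Hypothetical connection string handed to an ODBC bridge such as pyodbc.
# Whether returned data is encrypted on the wire depends on the driver and the
# network layer; the Encrypt attribute shown here is driver-dependent.
conn_str = f"DSN=HRSystem;UID={uid};PWD={pwd};Encrypt=yes"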

As we have seen above, ODBC is Microsoft’s answer to providing an interface
between applications and the database environment.
JDBC is Sun Microsystems’ technology. It is an API used to connect Java programs to
database environments. It is used to connect a
Java program to a database either directly or also by connecting through ODBC,
depending on whether the database vendor has
created the necessary drivers for Java.

Regardless of the interface used to connect the user to the database, there are
some very important security controls to consider in this environment. These
include how and where the user will be authenticated, controlling user access
properly, and auditing user actions to provide accountability. As security is very
important in these environments, Java has a number of capabilities driven toward
security, but these must be deliberately and properly implemented to secure the
database calls and applications.

XML is referred to as a markup language that is used to store and transport data
across networks. Much like HTML, it is widely used
across the internet to represent data structures used in web services. XML can also
be used to make database calls as it is used
to store and transport data, and as such, XML applications must be reviewed for
how authentication of users is established, access
controls are implemented, auditing of user actions is implemented and stored, and
confidentiality of sensitive data is maintained.
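
A minimal sketch using Python's standard xml.etree.ElementTree to parse a small,
invented data record; in practice the document should also be schema-validated and
the parser hardened against entity-expansion and external-entity attacks:

import xml.etree.ElementTree as ET

# A small, hypothetical data record being transported between systems.
payload = """
<employee id="42">
  <name>A. Khan</name>
  <clearance>confidential</clearance>
</employee>
"""

root = ET.fromstring(payload)
print(root.get("id"), root.findtext("name"), root.findtext("clearance"))

# Security note: in production, validate the document against an agreed schema,
# authenticate the sender, and consider a hardened parser such as defusedxml.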

Object Linking and Embedding (OLE) is a Microsoft technology that allows an
object, such as an Excel spreadsheet, to be embedded or linked to the inside of
another object, such as a Word document. This capability makes OLE very flexible in
making data calls. The Component Object Model (COM) is the protocol that allows
OLE to work properly. OLE allows users to
share a single source of data for a particular object. The document contains the
name of the file containing the data, along with a
picture of the data. The way OLE works is that when the source is updated, all the
documents using the data are also updated.
As part of the OLE technology, there is something called OLE DB, which is an
interface language designed by Microsoft to link data
across various DBMSs. It is an open specification that is designed to build on the
success of ODBC by providing an open standard for accessing all kinds of data across
different environments. It enables organizations to easily take advantage of
information contained not only in data within a database environment, but also
when accessing data from other types of data sources.
The OLE DB interfaces are based on the COM, and as such, they provide
applications with uniform access to data regardless of the
information source. The OLE DB separates the data into components that can run as
middleware on a client or server across a wide variety of applications. The OLE DB
architecture provides for components such as direct data access interfaces, query
engines, cursor engines, optimizers, business rules, and transaction managers.

As with any powerful interface language, when organizations are developing
databases and determining how data may be linked
through the applications accessing those databases, security must be addressed
during the development stage. If OLE DB is considered, there are optional OLE DB
interfaces that can be implemented to support the administration of security
information. OLE DB interfaces allow for authentication and authorization for access
to data among components and applications. The OLE DB can also provide a clear
view of the security mechanisms that are supported by the operating system and the
database components.

Reference https://www.guru99.com/dbms-concurrency-control.html

DBMS Concurrency Control: Two Phase, Timestamp, Lock-Based Protocol


What is Concurrency Control?
Concurrency control is the procedure in a DBMS for managing simultaneous
operations without having them conflict with one another. Concurrent access is quite
easy if all users are just reading data; there is no way they can interfere with one
another. Any practical database, however, has a mix of READ and WRITE operations,
and hence concurrency is a challenge.
Concurrency control is used to address such conflicts, which mostly occur in a
multi-user system. It helps you to make sure that database transactions are
performed concurrently without violating the data integrity of the respective
databases. Therefore, concurrency control is an essential element for the proper
functioning of a system where two or more database transactions that require
access to the same data are executed simultaneously.

Potential problems of Concurrency

Here are some issues that you are likely to face when concurrent transactions are
not properly controlled:
• Lost Updates occur when multiple transactions select the same row and update the
row based on the value originally selected.
• Uncommitted dependency issues occur when a second transaction selects a row
that is being updated by another transaction (a dirty read).
• Non-Repeatable Read occurs when a second transaction tries to access the same
row several times and reads different data each time.
• Incorrect Summary issues occur when one transaction takes a summary over the
values of all the instances of a repeated data item while a second transaction updates
a few instances of that specific data item. In that situation, the resulting summary
does not reflect a correct result.
Why use concurrency control methods?
Reasons for using concurrency control methods in a DBMS:
To apply Isolation through mutual exclusion between conflicting transactions
To resolve read-write and write-write conflict issues
To preserve database consistency through constantly preserving execution
obstructions
The system needs to control the interaction among the concurrent transactions. This
control is achieved using concurrent-control schemes.
Concurrency control helps to ensure serializability
Example
Assume that two people go to electronic kiosks at the same time to buy a movie
ticket for the same movie and the same show time.
However, there is only one seat left for that show in that particular theatre.
Without concurrency control, it is possible that both moviegoers will end up
purchasing a ticket. However, a concurrency control method does not allow this to
happen. Both moviegoers can still access information written in the movie seating
database. But concurrency control only provides a ticket to the buyer who has
completed the transaction process first.
Concurrency Control Protocols
Different concurrency control protocols offer different benefits between the amount
of concurrency they allow and the amount of overhead that they impose.
Lock-Based Protocols
Two Phase
Timestamp-Based Protocols
Validation-Based Protocols
Lock-based Protocols
A lock is a data variable which is associated with a data item. This lock signifies that
operations that can be performed on the data item. Locks help synchronize access to
the database items by concurrent transactions.

All lock requests are made to the concurrency-control manager. Transactions proceed
only once the lock request is granted.
Binary Locks: A binary lock on a data item can be in either a locked or an unlocked state.
Shared/exclusive: This type of locking mechanism separates the locks based on their
uses. If a lock is acquired on a data item to perform a write operation, it is called an
exclusive lock.
1. Shared Lock (S):
A shared lock is also called a Read-only lock. With the shared lock, the data item can
be shared between transactions. This is because you will never have permission to
update data on the data item.
For example, consider a case where two transactions are reading the account balance
of a person. The database will let them read by placing a shared lock. However, if
another transaction wants to update that account's balance, the shared lock prevents
it until the reading process is over.
2. Exclusive Lock (X):
With the Exclusive Lock, a data item can be read as well as written. This is exclusive
and can't be held concurrently on the same data item. X-lock is requested using lock-
x instruction. Transactions may unlock the data item after finishing the 'write'
operation.
For example, when a transaction needs to update the account balance of a person,
the database allows it by placing an exclusive (X) lock on that data item. Therefore,
when a second transaction wants to read or write, the exclusive lock prevents that
operation; a small sketch of this appears after the locking protocols below.
3. Simplistic Lock Protocol
This type of lock-based protocol allows transactions to obtain a lock on every object
before the operation begins. Transactions may unlock the data item after finishing the
'write' operation.
4. Pre-claiming Locking
Pre-claiming lock protocol helps to evaluate operations and create a list of required
data items which are needed to initiate an execution process. When all the requested
locks are granted, the transaction executes. After that, all locks are released when all
of its operations are over.
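
As a rough in-process analogy (not a DBMS lock manager), the following Python
sketch shows how an exclusive lock around the read-modify-write of a balance
prevents the lost-update problem described earlier:

import threading

balance = 100
x_lock = threading.Lock()   # plays the role of an exclusive (X) lock on the item

def withdraw(amount):
    global balance
    with x_lock:                 # acquire the exclusive lock before read/write
        current = balance        # read
        current -= amount        # compute
        balance = current        # write; no other writer can interleave here

threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)   # always 50; without the lock, interleaving could lose updates
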
Starvation
Starvation is the situation when a transaction needs to wait for an indefinite period
to acquire a lock.
Following are the reasons for Starvation:
When waiting scheme for locked items is not properly managed
In the case of resource leak
The same transaction is selected as a victim repeatedly
Deadlock
Deadlock refers to a specific situation where two or more processes are waiting for
each other to release a resource or more than two processes are waiting for the
resource in a circular chain.
Two Phase Locking (2PL) Protocol
The Two-Phase Locking protocol, also known as the 2PL protocol, requires that a
transaction must not acquire any new lock after it has released one of its locks.
This locking protocol divides the execution phase of a transaction into three different
parts.
In the first phase, when the transaction begins to execute, it requires permission for
the locks it needs.
The second part is where the transaction obtains all the locks. When a transaction
releases its first lock, the third phase starts.
In this third phase, the transaction cannot demand any new locks. Instead, it only
releases the acquired locks.
The Two-Phase Locking protocol allows each transaction to make a lock or unlock
request in two steps:
Growing Phase: In this phase transaction may obtain locks but may not release any
locks.
Shrinking Phase: In this phase, a transaction may release locks but not obtain any
new lock
It is true that the 2PL protocol offers serializability. However, it does not ensure that
deadlocks do not happen.
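As a rough illustration of the growing/shrinking rule, the following Python sketch wraps the toy LockTable from the earlier example and refuses any lock request made after the first release. It is a simplified sketch, not a full 2PL scheduler, and the class name is an assumption for the example.

# Minimal sketch (illustrative only): enforce the two-phase rule - once any
# lock is released, no new lock may be acquired. Assumes the LockTable class
# from the earlier sketch is available.
class TwoPhaseTransaction:
    def __init__(self, txn_id, lock_table):
        self.txn_id = txn_id
        self.table = lock_table
        self.shrinking = False           # False = growing phase

    def lock(self, item, exclusive=False):
        if self.shrinking:
            raise RuntimeError("2PL violation: cannot lock in shrinking phase")
        if exclusive:
            return self.table.acquire_exclusive(self.txn_id, item)
        return self.table.acquire_shared(self.txn_id, item)

    def unlock(self, item):
        self.shrinking = True            # first release starts the shrinking phase
        self.table.release(self.txn_id, item)

# Usage: locking after an unlock violates 2PL and raises an error.
t1 = TwoPhaseTransaction("T1", LockTable())
t1.lock("A", exclusive=True)   # growing phase
t1.unlock("A")                 # shrinking phase begins
# t1.lock("B")                 # would raise RuntimeError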
In distributed deployments, local and global deadlock detectors search for deadlocks and resolve them by rolling the affected transactions back to their initial states and restarting them.
Strict Two-Phase Locking Method
The Strict Two-Phase locking system is very similar to 2PL. The only difference is that Strict-2PL does not release any lock immediately after using it; it holds all the locks until the commit point and releases them all at once when the transaction finishes.
Centralized 2PL
In Centralized 2PL, a single site is responsible for the lock management process. It has
only one lock manager for the entire DBMS.
Primary copy 2PL
In the primary copy 2PL mechanism, many lock managers are distributed to different sites.
After that, a particular lock manager is responsible for managing the lock for a set of
data items. When the primary copy has been updated, the change is propagated to
the slaves.
Distributed 2PL
In this kind of two-phase locking mechanism, Lock managers are distributed to all
sites. They are responsible for managing locks for data at that site. If no data is
replicated, it is equivalent to primary copy 2PL. The communication costs of distributed 2PL are considerably higher than those of primary copy 2PL.
Timestamp-based Protocols
The timestamp-based algorithm uses a timestamp to serialize the execution of
concurrent transactions. This protocol ensures that conflicting read and write operations are executed in timestamp order. The protocol uses the System Time or
Logical Count as a Timestamp.
The older transaction is always given priority in this method. It uses system time to
determine the time stamp of the transaction. This is the most commonly used
concurrency protocol.
Lock-based protocols manage the order between conflicting transactions at the time they execute, whereas timestamp-based protocols resolve conflicts as soon as an operation is created.
Example:
Suppose there are three transactions T1, T2, and T3. T1 entered the system at time 0010, T2 entered at 0020, and T3 entered at 0030.
Priority will be given to transaction T1, then to transaction T2, and lastly to transaction T3.
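The following Python sketch illustrates, in simplified form, how basic timestamp ordering can be enforced for the example above: each data item remembers the largest read and write timestamps it has seen, and an operation from an older transaction that arrives too late is rejected (in a real DBMS the transaction would then be aborted and restarted with a new timestamp). The class and rules shown are an illustrative approximation, not a complete protocol.

# Minimal sketch (illustrative, simplified timestamp-ordering rules).
class TimestampedItem:
    def __init__(self):
        self.read_ts = 0    # largest timestamp that has read this item
        self.write_ts = 0   # largest timestamp that has written this item

    def read(self, txn_ts):
        if txn_ts < self.write_ts:
            return False                      # reject: already overwritten by a younger txn
        self.read_ts = max(self.read_ts, txn_ts)
        return True

    def write(self, txn_ts):
        if txn_ts < self.read_ts or txn_ts < self.write_ts:
            return False                      # reject: would violate timestamp order
        self.write_ts = txn_ts
        return True

# Usage with the example above: T1 (ts=10), T2 (ts=20), T3 (ts=30).
balance = TimestampedItem()
print(balance.read(10))    # True  - T1 reads first
print(balance.write(30))   # True  - T3 writes
print(balance.write(20))   # False - T2 arrives too late and is rejected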
Advantages:
Schedules are serializable just like 2PL protocols
No waiting for the transaction, which eliminates the possibility of deadlocks!
Disadvantages:
Starvation is possible if the same transaction is restarted and continually aborted
Characteristics of Good Concurrency Protocol
An ideal concurrency control DBMS mechanism has the following objectives:
Must be resilient to site and communication failures.
It allows the parallel execution of transactions to achieve maximum concurrency.
Its storage mechanisms and computational methods should be modest to minimize
overhead.
It must enforce some constraints on the structure of atomic actions of transactions.
Summary
Concurrency control is the procedure in a DBMS for managing simultaneous operations without them conflicting with one another.
Lost Updates, dirty read, Non-Repeatable Read, and Incorrect Summary Issue are
problems faced due to lack of concurrency control.
Lock-Based, Two-Phase, Timestamp-Based, Validation-Based are types of
Concurrency handling protocols
The lock could be Shared (S) or Exclusive (X)
The Two-Phase Locking protocol, also known as the 2PL protocol, requires that a transaction acquire no new lock after it has released one of its locks. It has two phases: growing and shrinking.
The timestamp-based algorithm uses a timestamp to serialize the execution of
concurrent transactions. The protocol uses the System Time or Logical Count as a
Timestamp.

Reference https://en.wikipedia.org/wiki/Online_transaction_processing

In online transaction processing (OLTP), information systems typically facilitate and manage transaction-oriented applications.
The term "transaction" can have two different meanings, both of which might
apply: in the realm of computers or database transactions it denotes an atomic
change of state, whereas in the realm of business or finance, the term typically
denotes an exchange of economic entities (as used by, e.g., the Transaction Processing Performance Council) or commercial transactions.[1] OLTP may use transactions of
the first type to record transactions of the second.
OLTP has also been used to refer to processing in which the system responds
immediately to user requests. An automated teller machine (ATM) for a bank is an
example of a commercial transaction processing application. Online transaction
processing applications have high throughput and are insert- or update-intensive in
database management. These applications are used concurrently by hundreds of
users. The key goals of OLTP applications are availability, speed, concurrency and
recoverability.[2] Reduced paper trails and the faster, more accurate forecast for
revenues and expenses are both examples of how OLTP makes things simpler for
businesses. However, like many modern online information technology solutions,
some systems require offline maintenance, which further affects the cost-benefit
analysis of an online transaction processing system.
OLTP is typically contrasted to OLAP (online analytical processing), which is generally
characterized by much more complex queries, in a smaller volume, for the purpose of
business intelligence or reporting rather than to process transactions. Whereas OLTP
systems process all kinds of queries (read, insert, update and delete), OLAP is
generally optimized for read only and might not even support other kinds of queries.
OLTP also operates differently from batch processing and grid computing.[1]:15
OLTP is contrasted to OLEP (online event processing), which is based on
distributed event logs to offer strong consistency in large-scale heterogeneous
systems.[3] Whereas OLTP is associated with short atomic transactions, OLEP allows
for more flexible distribution patterns and higher scalability, but with increased
latency and without guaranteed upper bound to the processing time.

Overview
An OLTP system is an accessible data processing system in today's enterprises. Some
examples of OLTP systems include order entry, retail sales, and financial transaction
systems.[4] Online transaction processing systems increasingly require support for
transactions that span a network and may include more than one company. For this
reason, modern online transaction processing software uses client or server
processing and brokering software that allows transactions to run on different
computer platforms in a network.
In large applications, efficient OLTP may depend on sophisticated transaction
management software (such as CICS) and/or database optimization tactics to
facilitate the processing of large numbers of concurrent updates to an OLTP-oriented
database.
For even more demanding decentralized database systems, OLTP brokering programs
can distribute transaction processing among multiple computers on a network. OLTP
is often integrated into service-oriented architecture (SOA) and Web services.
Online transaction processing (OLTP) involves gathering input information, processing
the data and updating existing data to reflect the collected and processed
information. As of today, most organizations use a database management system to
support OLTP. OLTP is typically carried out in a client-server system.
Online transaction processing is concerned with concurrency and atomicity. Concurrency controls guarantee that two users accessing the same data in the database system cannot change that data at the same time; one user has to wait until the other has finished processing before changing that piece of data. Atomicity controls guarantee that all the steps in a transaction are completed successfully as a group; that is, if any step in the transaction fails, all the other steps must fail (be rolled back) as well.[5]
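A minimal sketch of these two properties, using Python's built-in sqlite3 module purely as a convenient example DBMS (an assumption, not something prescribed by the text): either both steps of a funds transfer commit, or the whole transaction rolls back.

# Minimal sketch (illustrative only) of atomicity in an OLTP-style update.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE name = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")   # forces a rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except ValueError:
        pass  # the whole transaction was rolled back; no partial update remains

transfer(conn, "alice", "bob", 70)    # succeeds and commits
transfer(conn, "alice", "bob", 999)   # fails; both accounts are left unchanged
print(dict(conn.execute("SELECT name, balance FROM accounts")))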
Systems design

To build an OLTP system, a designer must ensure that a large number of concurrent users does not interfere with the system's performance. To increase the performance of an OLTP system, a designer must avoid excessive use of indexes and clusters.
The following elements are crucial for the performance of OLTP systems:[2]
Rollback segments: Rollback segments are the portions of the database that record the actions of transactions in the event that a transaction is rolled back. Rollback segments provide read consistency, rollback of transactions, and recovery of the database.[6]
Clusters: A cluster is a schema that contains one or more tables that have one or more columns in common. Clustering tables in a database improves the performance of join operations.[7]
Discrete transactions: A discrete transaction defers all changes to the data until the transaction is committed. It can improve the performance of short, non-distributed transactions.[8]
Block size: The data block size should be a multiple of the operating system's block size within the maximum limit to avoid unnecessary I/O.[9]
Buffer cache size: SQL statements should be tuned to use the database buffer cache to avoid unnecessary resource consumption.[10]
Dynamic allocation of space to tables and rollback segments
Transaction processing monitors and the multi-threaded server: A transaction processing monitor is used for coordination of services. It is like an operating system and does the coordination at a high level of granularity and can span multiple computing devices.[11]
Partition (database): Partition use increases performance for sites that have regular transactions while still maintaining availability and security.
Database tuning: With database tuning, an OLTP system can maximize its performance as efficiently and rapidly as possible.

Reference https://en.wikipedia.org/wiki/Knowledge_management

Knowledge management (KM) is the process of creating, sharing, using and managing the knowledge and information of an organisation.[1] It refers to a
multidisciplinary approach to achieving organisational objectives by making the
best use of knowledge.[2]
An established discipline since 1991, KM includes courses taught in the
fields of business administration, information systems, management, library,
and information sciences.[3][4] Other fields may contribute to KM research, including
information and media, computer science, public health and public policy.[5] Several
universities offer dedicated master's degrees in knowledge management.
Many large companies, public institutions and non-profit organisations have
resources dedicated to internal KM efforts, often as a part of their business
strategy, IT, or human resource management departments.[6] Several consulting
companies provide advice regarding KM to these organisations.[6]
Knowledge management efforts typically focus on organisational objectives such as
improved performance, competitive advantage, innovation, the sharing of lessons
learned, integration and continuous improvement of the organisation.[7] These
efforts overlap with organisational learning and may be distinguished from that by a
greater focus on the management of knowledge as a strategic asset and on
encouraging the sharing of knowledge.[2][8] KM is an enabler of organisational
learning.[9][10]

History
Knowledge management efforts have a long history, including on-the-job discussions,
formal apprenticeship, discussion forums, corporate libraries, professional training,
and mentoring programs.[2][10] With increased use of computers in the second half of
the 20th century, specific adaptations of technologies such as knowledge
bases, expert systems, information repositories, group decision support
systems, intranets, and computer-supported cooperative work have been introduced
to further enhance such efforts.[2]
In 1999, the term personal knowledge management was introduced; it refers to the
management of knowledge at the individual level.[11]
In the enterprise, early collections of case studies recognised the importance of
knowledge management dimensions of strategy, process and measurement.[12][13] Key lessons learned include the following: people, and the cultural norms which influence their behaviour, are the most critical resources for successful knowledge creation,
dissemination and application; cognitive, social and organisational learning processes
are essential to the success of a knowledge management strategy; and
measurement, benchmarking and incentives are essential to accelerate the learning
process and to drive cultural change.[13] In short, knowledge management programs
can yield impressive benefits to individuals and organisations if they are purposeful,
concrete and action-orientated.
Research
KM emerged as a scientific discipline in the early 1990s.[14] It was initially supported
by individual practitioners, when Skandia hired Leif Edvinsson of Sweden as the
world's first Chief Knowledge Officer (CKO).[15] Hubert Saint-Onge (formerly of CIBC,
Canada), started investigating KM long before that.[2] The objective of CKOs is to
manage and maximise the intangible assets of their organisations.[2] Gradually, CKOs
became interested in practical and theoretical aspects of KM, and the new research
field was formed.[16] The KM idea has been taken up by academics, such as Ikujiro
Nonaka (Hitotsubashi University), Hirotaka Takeuchi (Hitotsubashi
University), Thomas H. Davenport (Babson College) and Baruch Lev (New York
University).[3][17]
In 2001, Thomas A. Stewart, former editor at Fortune magazine and subsequently the
editor of Harvard Business Review, published a cover story highlighting the
importance of intellectual capital in organisations.[18] The KM discipline has been
gradually moving towards academic maturity.[2] First, there is a trend toward higher
cooperation among academics; single-author publications are less common. Second,
the role of practitioners has changed.[16] Their contribution to academic research
declined from 30% of overall contributions up to 2002, to only 10% by 2009.[19] Third,
the number of academic knowledge management journals has been steadily growing,
currently reaching 27 outlets.[20]
Multiple KM disciplines exist; approaches vary by author and school.[16][21] As the
discipline matured, academic debates increased regarding theory and practice,
including:
Techno-centric with a focus on technology, ideally those that enhance knowledge
sharing and creation.[22][23]
Organisational with a focus on how an organisation can be designed to facilitate
knowledge processes best.[6]
Ecological with a focus on the interaction of people, identity, knowledge, and
environmental factors as a complex adaptive system akin to a
natural ecosystem.[24][25]
Regardless of the school of thought, core components of KM roughly include
people/culture, processes/structure and technology. The details depend on
the perspective.[26] KM perspectives include:
community of practice[27]
social network analysis[28]
intellectual capital[29]
information theory[14][15]
complexity science[30]
constructivism[31][32]
The practical relevance of academic research in KM has been
questioned[33] with action research suggested as having more relevance[34] and the
need to translate the findings presented in academic journals to a practice.[12]
Dimensions
Different frameworks for distinguishing between different 'types of' knowledge
exist.[10] One proposed framework for categorising the dimensions of knowledge
distinguishes tacit knowledge and explicit knowledge.[30] Tacit knowledge represents
internalised knowledge that an individual may not be consciously aware of, such as to
accomplish particular tasks. At the opposite end of the spectrum, explicit knowledge
represents knowledge that the individual holds consciously in mental focus, in a form
that can easily be communicated to others.[16][35]
The Knowledge Spiral as described by Nonaka & Takeuchi.
Ikujiro Nonaka proposed a model (SECI, for Socialisation, Externalisation,
Combination, Internalisation) which considers a spiraling interaction between explicit
knowledge and tacit knowledge.[36] In this model, knowledge follows a cycle in which
implicit knowledge is 'extracted' to become explicit knowledge, and explicit
knowledge is 're-internalised' into implicit knowledge.[36]

Hayes and Walsham (2003) describe knowledge and knowledge management as two
different perspectives.[37] The content perspective suggests that knowledge is easily
stored; because it may be codified, while the relational perspective recognises the
contextual and relational aspects of knowledge which can make knowledge difficult
to share outside the specific context in which it is developed.[37]
Early research suggested that KM needs to convert internalised tacit knowledge into
explicit knowledge to share it, and the same effort must permit individuals to
internalise and make personally meaningful any codified knowledge retrieved from
the KM effort.[6][38]
Subsequent research suggested that a distinction between tacit knowledge and
explicit knowledge represented an oversimplification and that the notion of explicit
knowledge is self-contradictory.[11] Specifically, for knowledge to be made explicit, it
must be translated into information (i.e., symbols outside our heads).[11][39] More
recently, together with Georg von Krogh and Sven Voelpel, Nonaka returned to his
earlier work in an attempt to move the debate about knowledge conversion
forward.[4][40]
A second proposed framework for categorising knowledge dimensions distinguishes
embedded knowledge of a system outside a human individual (e.g., an information
system may have knowledge embedded into its design) from embodied
knowledge representing a learned capability of a human
body's nervous and endocrine systems.[41]
A third proposed framework distinguishes between the exploratory creation of "new
knowledge" (i.e., innovation) vs. the transfer or exploitation of "established
knowledge" within a group, organisation, or community.[37][42] Collaborative
environments such as communities of practice or the use of social computing tools
can be used for both knowledge creation and transfer.[42]
Strategies
Knowledge may be accessed at three stages: before, during, or after KM-related
activities.[29] Organisations have tried knowledge capture incentives, including making
content submission mandatory and incorporating rewards into performance
measurement plans.[43] Considerable controversy exists over whether such incentives
work and no consensus has emerged.[7]
One strategy to KM involves actively managing knowledge (push strategy).[7][44] In
such an instance, individuals strive to explicitly encode their knowledge into a shared
knowledge repository, such as a database, as well as retrieving knowledge they need
that other individuals have provided (codification).[44]
Another strategy involves individuals making knowledge requests of experts
associated with a particular subject on an ad hoc basis (pull strategy).[7][44] In such an
instance, expert individual(s) provide insights to requestor (personalisation).[30]
Hansen et al. defined the two strategies.[45] Codification focuses on collecting and
storing codified knowledge in electronic databases to make it
accessible.[46] Codification can therefore refer to both tacit and explicit
knowledge.[47] In contrast, personalisation encourages individuals to share their
knowledge directly.[46] Information technology plays a less important role, as it only facilitates communication and knowledge sharing.
Other knowledge management strategies and instruments for companies
include:[7][24][30]
Knowledge sharing (fostering a culture that encourages the sharing of information,
based on the concept that knowledge is not irrevocable and should be shared and
updated to remain relevant)
Make knowledge-sharing a key role in employees' job description
Inter-project knowledge transfer
Intra-organisational knowledge sharing
Inter-organisational knowledge sharing
Proximity & architecture (the physical situation of employees can be either conducive
or obstructive to knowledge sharing)
Storytelling (as a means of transferring tacit knowledge)
Cross-project learning
After-action reviews
Knowledge mapping (a map of knowledge repositories within a company accessible
by all)
Communities of practice
Expert directories (to enable knowledge seekers to reach the experts)
Expert systems (knowledge seeker responds to one or more specific questions to
reach knowledge in a repository)
Best practice transfer
Knowledge fairs
Competency-based management (systematic evaluation and planning of knowledge
related competences of individual organisation members)
Master–apprentice relationship, Mentor-mentee relationship, job shadowing
Collaborative software technologies (wikis, shared bookmarking, blogs, social
software, etc.)
Knowledge repositories (databases, bookmarking engines, etc.)
Measuring and reporting intellectual capital (a way of making explicit knowledge for
companies)
Knowledge brokers (some organisational members take on responsibility for a specific
"field" and act as first reference on a specific subject)
Motivations
Multiple motivations lead organisations to undertake KM.[35] Typical considerations
include:[30]
Making available increased knowledge content in the development and provision
of products and services
Achieving shorter development cycles
Facilitating and managing innovation and organisational learning
Leveraging expertise across the organisation
Increasing network connectivity between internal and external individuals
Managing business environments and allowing employees to obtain relevant insights
and ideas appropriate to their work
Solving intractable or wicked problems
Managing intellectual capital and assets in the workforce (such as the expertise
and know-how possessed by key individuals or stored in repositories)
KM technologies
Knowledge management (KM) technology can be categorised:
Groupware—Software that facilitates collaboration and sharing of organisational
information. Such applications provide tools for threaded discussions, document
sharing, organisation-wide uniform email, and other collaboration-related features.
Workflow systems—Systems that allow the representation of processes associated
with the creation, use and maintenance of organisational knowledge, such as the
process to create and utilise forms and documents.
Content management and document management systems—Software systems that
automate the process of creating web content and/or documents. Roles such as
editors, graphic designers, writers and producers can be explicitly modeled along
with the tasks in the process and validation criteria. Commercial vendors started
either to support documents or to support web content but as the Internet grew
these functions merged and vendors now perform both functions.
Enterprise portals—Software that aggregates information across the entire
organisation or for groups such as project teams.
eLearning—Software that enables organisations to create customised training and
education. This can include lesson plans, monitoring progress and online classes.
Planning and scheduling software—Software that automates schedule creation and
maintenance. The planning aspect can integrate with project management
software.[22]
Telepresence—Software that enables individuals to have virtual "face-to-face"
meetings without assembling at one location. Videoconferencing is the most obvious
example.
Ontological Approach—An ontology-based knowledge model for knowledge
management. This model can facilitate knowledge discovery that provides users with
insight for decision making.[48]
These categories overlap. Workflow, for example, is a significant aspect of a content
or document management systems, most of which have tools for developing
enterprise portals.[7][49]
Proprietary KM technology products such as Lotus Notes defined proprietary formats
for email, documents, forms, etc. The Internet drove most vendors to adopt Internet
formats. Open-source and freeware tools for the creation of blogs and wikis now
enable capabilities that used to require expensive commercial tools.[34][50]
KM is driving the adoption of tools that enable organisations to work at the semantic
level,[51] as part of the Semantic Web.[52] Some commentators have argued that after
many years the Semantic Web has failed to see widespread adoption,[53][54][55] while
other commentators have argued that it has been a success.[56]
Knowledge Barriers
Just like knowledge transfer and knowledge sharing, the term "knowledge barriers" is not uniformly defined and differs in its meaning depending on the author.[57] Knowledge barriers can be associated with high costs for both companies and individuals.[58][59][60]
The web application environment is where web applications run on a server and
hosts the interface that web users use to interact with organizations. As the web
application environment is accessible to everyone out on the web, it becomes really
important to protect the entire web application architecture and its components. If
the web server can be compromised in some way, it may offer the attacker a
platform from which to mount probes or other nefarious activities. Also, such
unauthorized access may provide the attacker with intelligence about the
organization such as corporate sales and projects and can also provide a way by
which the attacker may be able to gain access to the enterprise’s proprietary and
sensitive intellectual property. Current statistics indicate that most attacks are
conducted at the application level, either against the web server application itself,
in-house scripts, or the common front-end applications used for e-commerce
activities. There are many vulnerabilities and exploits that exist in the application
layer, especially the web application environment. Therefore, attacks on the
application software are much more likely to succeed than attacks on the
underlying platforms. Once the application has been breached, an attack on the operating system and other components of the architecture generally becomes possible.

Factors that Make Websites Vulnerable
• Websites are designed to be widely accessible and are usually heavily advertised
as well, therefore, a very large number of people will have information about the
web site and its architecture.
• Web server software does make provisions for logging of traffic, but many
administrators either turn off logging altogether or reduce the logging to minimal
levels.
• The standard security tools of firewalls and intrusion detection systems can be
applied but are not particularly well suited to protecting such public websites:
• In the case of firewalls, a website must have standard ports open for
specific traffic.
• Intrusion detection systems (IDSs) must be tuned properly and maintained
adequately to provide any useful information from the flood of data.
Websites will see all kinds of traffic, from different locations, requesting
connections, web pages, submitting form information, or even updating
search engine facts.

Web Application Threats and Protection


Specific protections that may be helpful include the following:
• Having a particular assurance sign-off process for web servers
• Hardening the operating system used on such servers, which would include at the
very least removing default configurations and accounts, configuring permissions
and privileges correctly, and keeping up to date with vendor patches
• Extending web and network vulnerability scans prior to deployment
• Deploying IDS and advanced intrusion prevention system (IPS) technology
• Using application proxy firewalls

• Disabling any unnecessary documentation and libraries
• Ensuring administrative interfaces are removed or secured appropriately
• Only allowing access from authorized hosts or networks, and then using strong (multi-factor) user authentication
• Not hard coding the authentication credentials into the application itself, and ensuring the security of the credentials using certificates or similar high-trust authentication mechanisms
• Using account lockout and extended logging and audit, and protecting all authentication traffic with encryption (a minimal lockout sketch follows this list)
• Ensuring the interface is at least as secure as the rest of the application, and most often securing it at a higher level
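The account-lockout item above can be illustrated with a minimal Python sketch. The thresholds, in-memory storage, and function names are assumptions for the example; a real administrative interface would persist this state, log every attempt, and combine it with multi-factor authentication.

# Minimal sketch (illustrative only) of an account-lockout check.
import time

MAX_FAILURES = 5
LOCKOUT_SECONDS = 15 * 60

failed_attempts = {}   # username -> (failure_count, first_failure_time)

def is_locked_out(username):
    count, first_ts = failed_attempts.get(username, (0, 0.0))
    if count >= MAX_FAILURES and time.time() - first_ts < LOCKOUT_SECONDS:
        return True
    if time.time() - first_ts >= LOCKOUT_SECONDS:
        failed_attempts.pop(username, None)   # lockout window expired
    return False

def record_failure(username):
    count, first_ts = failed_attempts.get(username, (0, time.time()))
    failed_attempts[username] = (count + 1, first_ts)

def attempt_login(username, credential_ok):
    if is_locked_out(username):
        return "locked out - try again later"
    if credential_ok:
        failed_attempts.pop(username, None)
        return "authenticated"
    record_failure(username)
    return "invalid credentials"

# Usage: five bad attempts lock the account, even for a later valid login.
for _ in range(5):
    attempt_login("admin", credential_ok=False)
print(attempt_login("admin", credential_ok=True))   # "locked out - try again later"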

Because of the accessibility of web systems and applications, and the vulnerabilities
and exploits available, input validation becomes essential to address as part of
securing this environment. Application proxy firewalls are very effective, but they
need to make
sure the proxies are able to deal with problems of known exploits such as buffer
overflows, authentication issues, scripting, the passing of commands to the
underlying platform (that includes issues related to database engines, such as SQL
commands), encoding issues (such as Unicode), and URL encoding and translation. In
particular, the application proxy firewalls may need to address issues of the passing
of input data to in-house and custom-developed software, ensuring validation of
input to those systems. In other words, the biggest challenge when data is being
passed from anything to anything else becomes adequate data validation.
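A minimal sketch of the input-validation point, in Python: user-supplied data is checked against an explicit whitelist and then passed to the database only as a bound parameter, never concatenated into the SQL string. The table and column names are assumptions for the example.

# Minimal sketch (illustrative only): whitelist validation plus a
# parameterized query to keep untrusted input out of the SQL text.
import re
import sqlite3

ACCOUNT_ID_PATTERN = re.compile(r"^[0-9]{1,10}$")   # whitelist: digits only

def get_balance(conn, account_id_raw):
    # 1. Validate the input against an explicit whitelist before using it.
    if not ACCOUNT_ID_PATTERN.match(account_id_raw):
        raise ValueError("invalid account id")
    # 2. Pass the value as a bound parameter, never via string formatting.
    cur = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                       (int(account_id_raw),))
    row = cur.fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (42, 1000)")

print(get_balance(conn, "42"))              # 1000
try:
    get_balance(conn, "42 OR 1=1")          # rejected by validation
except ValueError as exc:
    print("rejected:", exc)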

In regard to session management, we need to remember that Hypertext Transfer Protocol (HTTP) is a stateless technology, and therefore, periods of apparent
attachment to the server are controlled by other technologies, such as cookies or URL
data, that must be both protected and validated. If cookies are needed, or allowed,
they should always be encrypted. Also, time validation needs to be included as part
of session management, which typically means to disallow sequential, calculable, or
predictable cookies, session numbers, or URL data. Instead, always use random and
unique indicators.
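The following Python sketch illustrates the session-management advice above: session identifiers are generated with a cryptographically strong random generator (so they are not sequential or predictable), given a limited lifetime, and delivered in a cookie marked Secure and HttpOnly. The helper names and lifetime value are assumptions for the example; a real framework would set these attributes through its own session API.

# Minimal sketch (illustrative only): random session tokens with expiry.
import secrets
import time

sessions = {}   # token -> {"user": ..., "expires": ...}
SESSION_LIFETIME = 30 * 60   # 30 minutes

def create_session(user):
    token = secrets.token_urlsafe(32)          # unpredictable, not sequential
    sessions[token] = {"user": user, "expires": time.time() + SESSION_LIFETIME}
    # Secure: HTTPS only; HttpOnly: not readable by scripts; SameSite limits CSRF.
    cookie = f"session={token}; Secure; HttpOnly; SameSite=Strict; Max-Age={SESSION_LIFETIME}"
    return cookie

def validate_session(token):
    entry = sessions.get(token)
    if entry is None or time.time() > entry["expires"]:
        sessions.pop(token, None)              # expired or unknown: reject
        return None
    return entry["user"]

cookie = create_session("alice")
print(cookie.split(";")[0][:20] + "...")       # session=<random token>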

As usual, with any application-related environment, web application environments should always validate all input and output, fail secure (closed), and make your
application or system as simple as possible. Use secure network design and
penetration testing to validate secure designs and to identify potential vulnerabilities
and threats to be mitigated and use defense in depth. Some other specific security
controls to consider in a web system are not to cache secure pages, confirm that all
encryption used meets industry standards, monitor your code vendors for security
patches and alerts, log any and all critical transactions and milestones, handle
exceptions properly, do not trust any data from the client, and do not automatically
trust data from other servers, partners, or any other part of the application itself.

Reference https://www.owasp.org/index.php/Main_Page

Please review the above link properly.

Open Web Application Security Project (OWASP) Framework


One very helpful resource for the secure development of web environments,
including web applications, is the Open Web Application Security Project
(OWASP). OWASP provides a number of helpful frameworks focused on the secure
deployment
of web applications. OWASP has several guides and resources available for secure
web application development:

• Development Guide
• Code Review Guide
• Testing Guide
• Top Ten Web Application Security Vulnerabilities
• OWASP Mobile

Given the prevalence of web-based and cloud-based solutions that organizations can
standardize on, OWASP provides an easily accessible and complete framework with
processes for web application security that has become very valuable in current web
application environments. The security professional should be familiar with the “top
ten” web application vulnerabilities and also how to mitigate them. This knowledge
needs to be enforced in web development and deployment areas of the organization,
together with other valuable resources from OWASP and possibly other frameworks
used by professionals and stakeholders involved in web solution deployment.

Reference https://en.wikipedia.org/wiki/Malware

Malware (a portmanteau for malicious software) is any software intentionally designed to cause damage to a computer, server, client, or computer
network[1][2] (by contrast, software that causes unintentional harm due to some
deficiency is typically described as a software bug). A wide variety of types of
malware exist, including computer viruses, worms, Trojan
horses, ransomware, spyware, adware, and scareware.
Programs are also considered malware if they secretly act against the interests of
the computer user. For example, at one point Sony music Compact discs silently
installed a rootkit on purchasers' computers with the intention of preventing illicit
copying, but which also reported on users' listening habits, and unintentionally
created extra security vulnerabilities.[3]
A range of antivirus software, firewalls and other strategies are used to help protect
against the introduction of malware, to help detect it if it is already present, and to
recover from malware-associated malicious activity and attacks.[4]

Purposes
Many early infectious programs, including the first Internet Worm, were written as
experiments or pranks. Today, malware is used by both black hat hackers and
governments, to steal personal, financial, or business information.[5][6]
Malware is sometimes used broadly against government or corporate websites to
gather guarded information,[7] or to disrupt their operation in general. However,
malware can be used against individuals to gain information such as personal
identification numbers or details, bank or credit card numbers, and passwords.
Since the rise of widespread broadband Internet access, malicious software has more
frequently been designed for profit. Since 2003, the majority of
widespread viruses and worms have been designed to take control of users'
computers for illicit purposes.[8] Infected "zombie computers" can be used to
send email spam, to host contraband data such as child pornography,[9] or to engage
in distributed denial-of-service attacks as a form of extortion.[10]
Programs designed to monitor users' web browsing, display unsolicited
advertisements, or redirect affiliate marketing revenues are called spyware. Spyware
programs do not spread like viruses; instead they are generally installed by exploiting
security holes. They can also be hidden and packaged together with unrelated user-
installed software.[11] The Sony BMG rootkit was intended to prevent illicit copying, but it also reported on users' listening habits and unintentionally created extra security vulnerabilities.[3]
Ransomware affects an infected computer system in some way, and demands
payment to bring it back to its normal state. There are two variations of ransomware,
namely crypto ransomware and locker ransomware.[12] Locker ransomware just locks down a computer system without encrypting its contents, whereas crypto ransomware locks down the system and also encrypts its contents. For example, programs such as CryptoLocker encrypt files securely, and
only decrypt them on payment of a substantial sum of money.[13]
Some malware is used to generate money by click fraud, making it appear that the
computer user has clicked an advertising link on a site, generating a payment from
the advertiser. It was estimated in 2012 that about 60 to 70% of all active malware
used some kind of click fraud, and 22% of all ad-clicks were fraudulent.[14]
In addition to criminal money-making, malware can be used for sabotage, often for
political motives. Stuxnet, for example, was designed to disrupt very specific
industrial equipment. There have been politically motivated attacks that have spread
over and shut down large computer networks, including massive deletion of files and
corruption of master boot records, described as "computer killing." Such attacks were
made on Sony Pictures Entertainment (25 November 2014, using malware known
as Shamoon or W32.Disttrack) and Saudi Aramco (August 2012).[15][16]
Infectious malware
Main articles: Computer virus and Computer worm

The best-known types of malware, viruses and worms, are known for the manner in
which they spread, rather than any specific types of behavior. A computer virus is
software that embeds itself in some other executable software (including the
operating system itself) on the target system without the user's knowledge and
consent and when it is run, the virus is spread to other executables. On the other
hand, a worm is a stand-alone malware software that actively transmits itself over
a network to infect other computers. These definitions lead to the observation that a
virus requires the user to run an infected software or operating system for the virus
to spread, whereas a worm spreads itself.[17]
Concealment
These categories are not mutually exclusive, so malware may use multiple
techniques.[18] This section only applies to malware designed to operate undetected,
not sabotage and ransomware.
See also: Polymorphic packer
Viruses
Main article: Computer virus
A computer virus is software usually hidden within another seemingly innocuous
program that can produce copies of itself and insert them into other programs or
files, and that usually performs a harmful action (such as destroying data).[19] An
example of this is a PE infection, a technique, usually used to spread malware, that
inserts extra data or executable code into PE files.[20]
Screen-locking ransomware
Main article: Ransomware
'Lock-screens', or screen lockers, are a type of "cyber police" ransomware that blocks the screen on Windows or Android devices with a false accusation of harvesting illegal content, trying to scare the victims into paying a fee.[21] Jisut and SLocker impact
Android devices more than other lock-screens, with Jisut making up nearly 60
percent of all Android ransomware detections.[22]
Trojan horses
Main article: Trojan horse (computing)
A Trojan horse is a harmful program that misrepresents itself to masquerade as a
regular, benign program or utility in order to persuade a victim to install it. A Trojan
horse usually carries a hidden destructive function that is activated when the
application is started. The term is derived from the Ancient Greek story of the Trojan
horse used to invade the city of Troy by stealth.[23][24][25][26][27]
Trojan horses are generally spread by some form of social engineering, for example,
where a user is duped into executing an e-mail attachment disguised to be
unsuspicious, (e.g., a routine form to be filled in), or by drive-by download. Although
their payload can be anything, many modern forms act as a backdoor, contacting a
controller which can then have unauthorized access to the affected
computer.[28] While Trojan horses and backdoors are not easily detectable by
themselves, computers may appear to run slower due to heavy processor or network
usage.
Unlike computer viruses and worms, Trojan horses generally do not attempt to inject
themselves into other files or otherwise propagate themselves.[29]
In spring 2017 Mac users were hit by the new version of Proton Remote Access Trojan
(RAT)[30] trained to extract password data from various sources, such as browser auto-
fill data, the Mac-OS keychain, and password vaults.[31]
Rootkits
Main article: Rootkit
Once malicious software is installed on a system, it is essential that it stays concealed,
to avoid detection. Software packages known as rootkits allow this concealment, by
modifying the host's operating system so that the malware is hidden from the user.
Rootkits can prevent a harmful process from being visible in the system's list
of processes, or keep its files from being read.[32]
Some types of harmful software contain routines to evade identification and/or
removal attempts, not merely to hide themselves. An early example of this behavior
is recorded in the Jargon File tale of a pair of programs infesting a Xerox CP-V time
sharing system:
Each ghost-job would detect the fact that the other had been killed, and would start a new
copy of the recently stopped program within a few milliseconds. The only way to kill both
ghosts was to kill them simultaneously (very difficult) or to deliberately crash the
system.[33]
Backdoors
Main article: Backdoor (computing)
A backdoor is a method of bypassing normal authentication procedures, usually over
a connection to a network such as the Internet. Once a system has been
compromised, one or more backdoors may be installed in order to allow access in the
future,[34] invisibly to the user.
The idea has often been suggested that computer manufacturers preinstall backdoors
on their systems to provide technical support for customers, but this has never been
reliably verified. It was reported in 2014 that US government agencies had been
diverting computers purchased by those considered "targets" to secret workshops
where software or hardware permitting remote access by the agency was installed,
considered to be among the most productive operations to obtain access to networks
around the world.[35] Backdoors may be installed by Trojan horses, worms, implants,
or other methods.[36][37]
Evasion
Since the beginning of 2015, a sizable portion of malware utilizes a combination of
many techniques designed to avoid detection and analysis.[38] From the more
common, to the least common:
evasion of analysis and detection by fingerprinting the environment when
executed.[39]

confusing automated tools' detection methods. This allows malware to avoid
detection by technologies such as signature-based antivirus software by changing the
server used by the malware.[40]
timing-based evasion. This is when malware runs at certain times or following certain
actions taken by the user, so it executes during certain vulnerable periods, such as
during the boot process, while remaining dormant the rest of the time.
obfuscating internal data so that automated tools do not detect the malware.[41]
An increasingly common technique (2015) is adware that uses stolen certificates to
disable anti-malware and virus protection; technical remedies are available to deal
with the adware.[42]
Nowadays, one of the most sophisticated and stealthy ways of evasion is to use
information hiding techniques, namely stegomalware. A survey on stegomalware was
published by Cabaj et al. in 2018.[43]
Vulnerability
Main article: Vulnerability (computing)
In this context, and throughout, what is called the "system" under attack may be
anything from a single application, through a complete computer and operating
system, to a large network.
Various factors make a system more vulnerable to malware:
Security defects in software
Malware exploits security defects (security bugs or vulnerabilities) in the design of
the operating system, in applications (such as browsers, e.g. older versions of
Microsoft Internet Explorer supported by Windows XP[44]), or in vulnerable versions
of browser plugins such as Adobe Flash Player, Adobe Acrobat or Reader, or Java
SE.[45][46] Sometimes even installing new versions of such plugins does not
automatically uninstall old versions. Security advisories from plug-in providers
announce security-related updates.[47] Common vulnerabilities are assigned CVE
IDs and listed in the US National Vulnerability Database. Secunia PSI[48] is an example
of software, free for personal use, that will check a PC for vulnerable out-of-date
software, and attempt to update it.
Malware authors target bugs, or loopholes, to exploit. A common method is
exploitation of a buffer overrun vulnerability, where software designed to store data
in a specified region of memory does not prevent more data than the buffer can
accommodate being supplied. Malware may provide data that overflows the buffer,
with malicious executable code or data after the end; when this payload is accessed
it does what the attacker, not the legitimate software, determines.
Insecure design or user error
Early PCs had to be booted from floppy disks. When built-in hard drives became
common, the operating system was normally started from them, but it was possible
to boot from another boot device if available, such as a floppy disk, CD-ROM, DVD-
ROM, USB flash drive or network. It was common to configure the computer to boot
from one of these devices when available. Normally none would be available; the
user would intentionally insert, say, a CD into the optical drive to boot the computer
in some special way, for example, to install an operating system. Even without
booting, computers can be configured to execute software on some media as soon as
they become available, e.g. to autorun a CD or USB device when inserted.
Malware distributors would trick the user into booting or running from an infected
device or medium. For example, a virus could make an infected computer add
autorunnable code to any USB stick plugged into it. Anyone who then attached the
stick to another computer set to autorun from USB would in turn become infected,
and also pass on the infection in the same way.[49] More generally, any device that
plugs into a USB port - even lights, fans, speakers, toys, or peripherals such as a
digital microscope - can be used to spread malware. Devices can be infected during
manufacturing or supply if quality control is inadequate.[49]
This form of infection can largely be avoided by setting up computers by default to
boot from the internal hard drive, if available, and not to autorun from
devices.[49] Intentional booting from another device is always possible by pressing
certain keys during boot.
Older email software would automatically open HTML email containing potentially
malicious JavaScript code. Users may also execute disguised malicious email
attachments. The 2018 Data Breach Investigations Report by Verizon, cited by CSO
Online, states that emails are the primary method of malware delivery, accounting
for 92% of malware delivery around the world.[50][51]
Over-privileged users and over-privileged code
Main article: principle of least privilege
In computing, privilege refers to how much a user or program is allowed to modify a
system. In poorly designed computer systems, both users and programs can be
assigned more privileges than they should have, and malware can take advantage of
this. The two ways that malware does this is through overprivileged users and
overprivileged code.
Some systems allow all users to modify their internal structures, and such users today
would be considered over-privileged users. This was the standard operating
procedure for early microcomputer and home computer systems, where there was
no distinction between an administrator or root, and a regular user of the system. In
some systems, non-administrator users are over-privileged by design, in the sense
that they are allowed to modify internal structures of the system. In some
environments, users are over-privileged because they have been inappropriately
granted administrator or equivalent status.
Some systems allow code executed by a user to access all rights of that user, which is
known as over-privileged code. This was also standard operating procedure for early
microcomputer and home computer systems. Malware, running as over-privileged
code, can use this privilege to subvert the system. Almost all currently popular
operating systems, and also many scripting applications, grant code too many
privileges, usually in the sense that when a user executes code, the system allows
that code all rights of that user. This makes users vulnerable to malware in the form
of e-mail attachments, which may or may not be disguised.
Use of the same operating system
Homogeneity can be a vulnerability. For example, when all computers in
a network run the same operating system, upon exploiting one, one worm can exploit
them all:[52] In particular, Microsoft Windows or Mac OS X have such a large share of
the market that an exploited vulnerability concentrating on either operating system
could subvert a large number of systems. Introducing diversity purely for the sake of
robustness, such as adding Linux computers, could increase short-term costs for
training and maintenance. However, as long as all the nodes are not part of the
same directory service for authentication, having a few diverse nodes could deter
total shutdown of the network and allow those nodes to help with recovery of the
infected nodes. Such separate, functional redundancy could avoid the cost of a total
shutdown, at the cost of increased complexity and reduced usability in terms
of single sign-on authentication.
Anti-malware strategies
Main article: Antivirus software
As malware attacks become more frequent, attention has begun to shift
from viruses and spyware protection, to malware protection, and programs that have
been specifically developed to combat malware. (Other preventive and recovery
measures, such as backup and recovery methods, are mentioned in the computer
virus article).
Anti-virus and anti-malware software
A specific component of anti-virus and anti-malware software, commonly referred to
as an on-access or real-time scanner, hooks deep into the operating system's core
or kernel and functions in a manner similar to how certain malware itself would
attempt to operate, though with the user's informed permission for protecting the
system. Any time the operating system accesses a file, the on-access scanner checks
if the file is a 'legitimate' file or not. If the file is identified as malware by the scanner,
the access operation will be stopped, the file will be dealt with by the scanner in a
pre-defined way (how the anti-virus program was configured during/post
installation), and the user will be notified. This may have a considerable
performance impact on the operating system, though the degree of impact is
dependent on how well the scanner was programmed. The goal is to stop any
operations the malware may attempt on the system before they occur, including
activities which might exploit bugs or trigger unexpected operating system behavior.
Anti-malware programs can combat malware in two ways:
They can provide real time protection against the installation of malware software on
a computer. This type of malware protection works the same way as that of antivirus
protection in that the anti-malware software scans all incoming network data for
malware and blocks any threats it comes across.
Anti-malware software programs can be used solely for detection and removal of
malware software that has already been installed onto a computer. This type of anti-
malware software scans the contents of the Windows registry, operating system files,
and installed programs on a computer and will provide a list of any threats found,
allowing the user to choose which files to delete or keep, or to compare this list to a
list of known malware components, removing files that match.[53]
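As a minimal illustration of the "detect and remove" mode described above, the Python sketch below hashes files and compares them against a set of known-malware signatures. The signature set and scan path are placeholders; real anti-malware products use far richer signatures, heuristics, and real-time hooks rather than a simple hash list.

# Minimal sketch (illustrative only): signature-based file scanning.
import hashlib
from pathlib import Path

# Placeholder signature set; a real product's signature feed supplies these,
# so with this placeholder entry nothing will normally match.
KNOWN_MALWARE_HASHES = {"0" * 64}

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root):
    findings = []
    for path in Path(root).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_MALWARE_HASHES:
            findings.append(path)    # flag for quarantine or removal
    return findings

# Usage (assumed scan path): report any file matching a known signature.
for hit in scan_directory("."):
    print("possible malware:", hit)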
Real-time protection from malware works identically to real-time antivirus
protection: the software scans disk files at download time, and blocks the activity of
components known to represent malware. In some cases, it may also intercept
attempts to install start-up items or to modify browser settings. Because many
malware components are installed as a result of browser exploits or user error, using
security software (some of which are anti-malware, though many are not) to
"sandbox" browsers (essentially isolate the browser from the computer and hence
any malware induced change) can also be effective in helping to restrict any damage
done.
Examples of Microsoft Windows antivirus and anti-malware software include the
optional Microsoft Security Essentials[54] (for Windows XP, Vista, and Windows 7) for
real-time protection, the Windows Malicious Software Removal Tool[55] (now included
with Windows (Security) Updates on "Patch Tuesday", the second Tuesday of each
month), and Windows Defender (an optional download in the case of Windows XP,
incorporating MSE functionality in the case of Windows 8 and later).[56] Additionally,
several capable antivirus software programs are available for free download from the
Internet (usually restricted to non-commercial use).[57] Tests found some free
programs to be competitive with commercial ones.[57][58][59] Microsoft's System File
Checker can be used to check for and repair corrupted system files.
Some viruses disable System Restore and other important Windows tools such
as Task Manager and Command Prompt. Many such viruses can be removed
by rebooting the computer, entering Windows safe mode with networking,[60] and
then using system tools or Microsoft Safety Scanner.[61]
Hardware implants can be of any type, so there can be no general way to detect
them.
Website security scans
As malware also harms the compromised websites (by breaking reputation,
blacklisting in search engines, etc.), some websites offer vulnerability
scanning.[62][63][64][65] Such scans check the website, detect malware, may note
outdated software, and may report known security issues.
"Air gap" isolation or "parallel network"
As a last resort, computers can be protected from malware, and infected computers
can be prevented from disseminating trusted information, by imposing an "air
gap" (i.e. completely disconnecting them from all other networks). However, malware
can still cross the air gap in some situations. For example, removable media can carry
malware across the gap.
"AirHopper",[66] "BitWhisper",[67] "GSMem" [68] and "Fansmitter" [69] are four
techniques introduced by researchers that can leak data from air-gapped computers
using electromagnetic, thermal and acoustic emissions.
Grayware
See also: Privacy-invasive software and Potentially unwanted program
Grayware is a term applied to unwanted applications or files that are not classified as
malware, but can worsen the performance of computers and may cause security
risks.[70]
It describes applications that behave in an annoying or undesirable manner, and yet
are less serious or troublesome than malware. Grayware
encompasses spyware, adware, fraudulent dialers, joke programs, remote access
tools and other unwanted programs that may harm the performance of computers or
cause inconvenience. The term came into use around 2004.[71]
Another term, potentially unwanted program (PUP) or potentially unwanted
application (PUA),[72] refers to applications that would be considered unwanted
despite often having been downloaded by the user, possibly after failing to read a
download agreement. PUPs include spyware, adware, and fraudulent dialers. Many
security products classify unauthorised key generators as grayware, although they
frequently carry true malware in addition to their ostensible purpose.
Software maker Malwarebytes lists several criteria for classifying a program as a
PUP.[73] Some types of adware (using stolen certificates) turn off anti-malware and
virus protection; technical remedies are available.[42]
History of viruses and worms
Before Internet access became widespread, viruses spread on personal computers by
infecting executable programs or boot sectors of floppy disks. By inserting a copy of
itself into the machine code instructions in these programs or boot sectors, a virus
causes itself to be run whenever the program is run or the disk is booted. Early
computer viruses were written for the Apple II and Macintosh, but they became more
widespread with the dominance of the IBM PC and MS-DOS system. The first IBM PC
virus in the "wild" was a boot sector virus dubbed (c)Brain,[74] created in 1986 by the
Farooq Alvi brothers in Pakistan.[75] Executable-infecting viruses are dependent on
users exchanging software or boot-able floppies and thumb drives so they spread
rapidly in computer hobbyist circles.
The first worms, network-borne infectious programs, originated not on personal
computers, but on multitasking Unix systems. The first well-known worm was
the Internet Worm of 1988, which infected SunOS and VAX BSD systems. Unlike a
virus, this worm did not insert itself into other programs. Instead, it exploited security
holes (vulnerabilities) in network server programs and started itself running as a
separate process.[76] This same behavior is used by today's worms as well.[77][78]
With the rise of the Microsoft Windows platform in the 1990s, and the
flexible macros of its applications, it became possible to write infectious code in the
macro language of Microsoft Word and similar programs. These macro viruses infect
documents and templates rather than applications (executables), but rely on the fact
that macros in a Word document are a form of executable code.[79]

As part of good security management, it’s very important to ensure the safety of
application code while it is being developed, as well as during usage and while at
rest in the enterprise. Code is typically stored in what are called code repositories.
In today’s environments and trends, the security of code repositories can pose a
challenge for several reasons. With the move to offshoring application development,
the code being developed may not be available to the enterprise directly, and likewise,
the development environment may be unavailable for management and inspection.
The protection of code repositories needs to be handled just like any other valuable
asset, through a combination of logical and physical access controls and mechanisms,
as well as by protecting the integrity and availability of the content of code repositories.
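As one small, hypothetical illustration of monitoring repository integrity (complementing, not replacing, access controls), a periodic job could hash the repository contents and compare them against a previously recorded baseline. The path and function names below are illustrative only:

import hashlib
from pathlib import Path

def snapshot(repo_path: str) -> dict:
    # Map each file in the repository to a SHA-256 digest of its contents
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(repo_path).rglob("*") if p.is_file()
    }

def changed_files(baseline: dict, current: dict) -> list:
    # Files added, removed, or modified since the baseline was recorded
    return [f for f in set(baseline) | set(current)
            if baseline.get(f) != current.get(f)]

# Usage (path is a placeholder): record a baseline, then re-check it later
# baseline = snapshot("/srv/code-repo")
# print(changed_files(baseline, snapshot("/srv/code-repo")))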

Configuration Management (CM)
For software and applications, configuration management (CM) refers to monitoring
and managing changes to a program or its documentation. The goal is to guarantee
the integrity and availability of the code and the use of the correct version of all
system components, such as the software code, design documents, documentation,
and control files.

CM, therefore, involves reviewing every change made to a system.


This includes identifying, controlling, accounting for, and auditing all changes. The
process would include the following (a simple sketch follows the list):
• Identification – the first step is to identify any changes that are made.
• Controlling – every change is subject to some type of documentation that must be reviewed and approved by an authorized individual.
• Accounting – recording and reporting on the configuration of the software or hardware throughout any change procedures.
• Auditing – allows the completed change to be verified, especially ensuring that any changes did not affect the security policy or protection mechanisms that are implemented.
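A minimal sketch of these four activities as a simple tracking record; the class and field names are hypothetical and not taken from any standard:

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ChangeRecord:
    # Identification: what is changing, why, and who requested it
    item: str
    description: str
    requested_by: str
    # Controlling: documented review and approval by an authorized individual
    approved_by: str = ""
    approved: bool = False
    # Accounting: record of what happened to the item over time
    history: list = field(default_factory=list)

    def approve(self, approver: str):
        self.approved_by = approver
        self.approved = True
        self.history.append((datetime.utcnow().isoformat(), "approved by " + approver))

    def audit(self) -> bool:
        # Auditing: verify the change was reviewed and approved before release
        return self.approved and bool(self.approved_by)

change = ChangeRecord("payroll-app v2.3", "patch input validation", "dev team")
change.approve("change advisory board")
assert change.audit()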

Successful CM requires a well-defined and understood set of policies and standards
that clearly define the following:
• The set of artifacts (configuration items) under the jurisdiction of CM
• How artifacts are named
• How artifacts enter and leave the controlled set
• How an artifact under CM is allowed to change
• How different versions of an artifact under CM are made available and under what conditions each one can be used
• How CM tools are used to enable and enforce CM

Effectiveness of Software Security
As we have seen, application software has become an integral component of every
organization over the last few decades, so building better applications, with the
proper security controls built in based on requirements, is very important.
Accordingly, organizations need to evaluate the effectiveness of the application
development process, including how security is involved, and ultimately confirm that
the security designed into the application is effective against the organization’s
requirements.

The best way to evaluate the effectiveness of application development and software
security is through an efficient and secure process itself and through testing and
assurance mechanisms. Meaningful metrics that are evaluated and reported to
stakeholders give organizations assurance that software security is at the levels
required by their goals and objectives. Metrics that reflect use cases give
organizations a more comprehensive view of how secure applications actually are.

Use cases are tangible outcomes of a program and can be very useful in application
security testing. They are essentially scores for how well the security functions in
certain test situations. By measuring the quality of each use case, organizations can
have a clear understanding of how well their applications provide security.

https://en.wikipedia.org/wiki/Certification_and_Accreditation

Certification and accreditation (C&A or CnA) is a process for implementing any


formal process. It is a systematic procedure for evaluating, describing, testing,
and authorizing systems or activities prior to or after a system is in operation. The
process is used extensively across the world.

Certification is a comprehensive evaluation of a process, system, product, event, or
skill, typically measured against some existing norm or standard. Industry
and/or trade associations will often create certification programs to test and
evaluate the skills of those performing services within the interest area of that
association. Testing laboratories may also certify that certain products meet pre-
established standards, or governmental agencies may certify that a company is
meeting existing regulations (e.g., emission limits).

Accreditation is the formal declaration by a neutral third party that the certification
program is administered in a way that meets the relevant norms or standards of the
certification program (e.g., ISO/IEC 17024).

National bodies
Many nations have established specific bodies.
United Kingdom
In the United Kingdom, for example, an organization known as United Kingdom
Accreditation Service (UKAS) has been established as the nation's official
accreditation body. Most European nations have similar organizations established to
provide accreditation services within their borders.
United States
There is no such "approved" accreditation body within the United States, however. As
a result, multiple accreditation bodies have become established over the years to
address the accreditation needs of specific industries or market segments. Some of
these accreditation services are for-profit entities, but the majority are not-for-profit
bodies that provide accreditation services as part of their mission.
Information security
Certification and accreditation is a two-step process that
ensures security of information systems.[1] Certification is the process of evaluating,
testing, and examining security controls that have been pre-determined based on the
data type in an information system. The evaluation compares the current systems’
security posture with specific standards. The certification process ensures that
security weaknesses are identified and plans for mitigation strategies are in place. On
the other hand, accreditation is the process of accepting the residual risks associated
with the continued operation of a system and granting approval to operate for a
specified period of time.
In IT governance, the primary reason the certification and accreditation (C&A)
process is performed on critical systems is to ensure that security compliance has
been technically evaluated. Certified and accredited systems are systems whose
security compliance has been technically evaluated for optimal performance in a
specific environment and configuration; they are evaluated to run in a specific
working environment.

Please download and read this document, as it is a key document.

https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-37r2.pdf

Simply read it at least once.

The U.S. National Institute of Standards and Technology (NIST) has developed and
published a document, SP 800-37 Revision 1: Guide for Applying the Risk
Management Framework to Federal Information Systems, which recommends a
security authorization process and procedures to ensure that risk management is
applied to application development and that security is involved to ensure the
effectiveness of software and its security capabilities. As we have seen above, the
process of certification and accreditation can be very useful, but the NIST SP 800-37
guidance changes the traditional thought process surrounding certification and
accreditation and extends it. The revised process emphasizes the following:
• Building information security capabilities into information systems through the
application of state-of-the-practice management, operational, and technical
security controls
• Maintaining awareness of the security state of information systems on an ongoing
basis through enhanced monitoring processes
• Providing essential information to senior leaders to facilitate decisions regarding
the acceptance of risk to organizational operations and assets, individuals, and
other organizations, arising from the operation and use of information systems

Reference https://csrc.nist.gov/projects/risk-management/risk-management-framework-
(RMF)-Overview

Reference https://en.wikipedia.org/wiki/Risk_management_framework

Please read this document at least once so that you understand it; it will help you
tremendously during your cybersecurity career.

The Risk Management Framework is a set of United States federal government
policies and standards, developed by the National Institute of Standards and
Technology (NIST), that help secure information systems (computers and networks).
The two main publications that cover the details of RMF are NIST Special
Publication 800-37, "Guide for Applying the Risk Management Framework to
Federal Information Systems", and NIST Special Publication 800-53, "Security and
Privacy Controls for Federal Information Systems and Organizations".
NIST Special Publication 800-37, "Guide for Applying the Risk Management
Framework to Federal Information Systems", developed by the Joint Task Force
Transformation Initiative Working Group, transforms the traditional Certification

and Accreditation (C&A) process into the six-step Risk Management Framework
(RMF).
The Risk Management Framework (RMF) provides a disciplined and structured
process that integrates information security and risk management activities into the
system development life cycle.[1]

The RMF steps include:

• Categorize – the information system and the information processed, stored, and transmitted by that system are categorized based on an impact analysis, and vested parties are identified.
• Select – an initial set of baseline security controls is selected for the information system based on the security categorization; the baseline is tailored and supplemented as needed based on an organizational assessment of risk and local conditions. Any overlays that apply to the system are added in this step.
• Implement – the security controls identified in the Select step are applied.
• Assess – a third-party entity assesses the controls and verifies that they are properly applied to the system.
• Authorize – the information system is granted or denied an Authority to Operate (ATO); in some cases the decision may be postponed while certain items are fixed. The ATO is based on the report from the Assess phase.
• Monitor – the security controls in the information system are monitored in a pre-planned fashion documented earlier in the process. An ATO is good for 3 years, and the process must be repeated every 3 years.

Risks
During its lifecycle, an information system will encounter many types of risk that
affect the overall security posture of the system and the security controls that must
be implemented. The RMF process supports early detection and resolution of risks.
Risk can be categorized at a high level as infrastructure risks, project risks, application
risks, information asset risks, business continuity risks, outsourcing risks, external
risks, and strategic risks. Infrastructure risks focus on the reliability of computers and
networking equipment. Project risks focus on budget, timeline, and system quality.
Application risks focus on performance and overall system capacity. Information asset
risks focus on the damage, loss, or disclosure of information assets to an unauthorized
party. Business continuity risks focus on maintaining a reliable system with
maximum up-time. Outsourcing risks focus on the impact of a third-party supplier
failing to meet its requirements.[2] External risks are items outside the information
system's control that impact the security of the system. Strategic risks focus on the
need for information system functions to align with the business strategy that the
system supports.[3]

Reporting from systems, applications, architectures, and network devices is important
to the overall health and security of systems. Every network device, operating system,
and application, and indeed every component of the architecture, should provide
some form of logging capability.

Reference https://logz.io/blog/monitoring-logging-compliance/

A log is a record of security-relevant actions and events that have taken place on a
computer architecture. Logs:
• Provide a clear view of who owns a process, what action was initiated, when it was initiated, where the action occurred, and why the process ran
• Are the primary record keepers of system and network activity
• Are particularly helpful in capturing the pertinent information to explain what happened and why in the event that security controls experience failures
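As a simple illustration of capturing the who, what, when, where, and why of an event, a log entry can be written as a structured record. This is only a sketch; the field names are illustrative and not taken from any particular standard:

import json
from datetime import datetime, timezone

def audit_record(user, action, target, outcome, reason):
    # Capture who, what, when, where, and why for a single event
    return json.dumps({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": user,
        "what": action,
        "where": target,
        "outcome": outcome,
        "why": reason,
    })

print(audit_record("jdoe", "read", "payroll-db", "success", "month-end report"))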

Reference https://en.wikipedia.org/wiki/Log_file

In computing, a log file is a file that records either events that occur in an operating
system or other software,[1] or messages between different users of communication
software. Logging is the act of keeping a log. In the simplest case, messages are
written to a single log file.

A transaction log is a file (i.e., log) of the communications between a system and
the users of that system,[2] or a data collection method that automatically captures
the type, content, or time of transactions made by a person from a terminal with

that system.[3] For Web searching, a transaction log is an electronic record of
interactions that have occurred during a searching episode between a Web search
engine and users searching for information on that Web search engine.

Many operating systems, software frameworks, and programs include a logging
system. A widely used logging standard is syslog, defined in Internet Engineering Task
Force (IETF) RFC 5424. The syslog standard enables a dedicated, standardized
subsystem to generate, filter, record, and analyze log messages. This relieves software
developers of having to design and code their own ad hoc logging systems.[4][5][6]
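For example, Python's standard logging module can forward application events to a syslog collector. This is a minimal sketch that assumes a syslog daemon is listening on the default UDP port 514 on localhost:

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("payments")
logger.setLevel(logging.INFO)

# Send records to a syslog daemon (the host and port are assumptions for the example)
handler = SysLogHandler(address=("localhost", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("user=jdoe action=login outcome=success")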

Event logs
Event logs record events taking place in the execution of a system in order to provide
an audit trail that can be used to understand the activity of the system and to
diagnose problems. They are essential to understand the activities of complex
systems, particularly in the case of applications with little user interaction (such
as server applications).
It can also be useful to combine log file entries from multiple sources. This approach,
in combination with statistical analysis, may yield correlations between seemingly
unrelated events on different servers. Other solutions employ network-wide querying
and reporting.[7][8]
Transaction logs
Main article: Transaction log
Most database systems maintain some kind of transaction log, which is not mainly
intended as an audit trail for later analysis and is not intended to be human-readable.
These logs record changes to the stored data to allow the database to recover from
crashes or other data errors and maintain the stored data in a consistent state. Thus,
database systems usually have both general event logs and transaction logs.

Message logs
Internet Relay Chat (IRC), instant messaging (IM) programs, peer-to-peer file sharing
clients with chat functions, and multiplayer games (especially MMORPGs) commonly
have the ability to automatically log or save textual communication, both public (IRC
channel/IM conference/MMO public/party chat messages) and private chat
messages between users.[13] Message logs are almost universally plain text files, but
IM and VoIP clients that support textual chat (e.g. Skype) might save them
in HTML files or in a custom format to ease reading and encryption.

Internet Relay Chat (IRC)


In the case of IRC software, message logs often include system/server messages and
entries related to channel and user changes (e.g. topic change, user

joins/exits/kicks/bans, nickname changes, user status changes), making them more
like a combined message/event log of the channel in question, but such a log isn't
comparable to a true IRC server event log, because it only records user-visible events
for the time frame the user spent being connected to a certain channel.

Instant messaging
Instant messaging and VoIP clients often offer the chance to store encrypted logs to
enhance the user's privacy. These logs require a password to be decrypted and
viewed, and they are often handled by their respective writing application.

Transaction log analysis


The use of data stored in transaction logs of Web search engines, Intranets, and Web
sites can provide valuable insight into understanding the information-searching
process of online searchers.[14] This understanding can enlighten information system
design, interface development, and devising the information architecture for content
collections.

Please read this article and we will cover more during our PCI DSS and ISO 27001
courses. Reference https://logz.io/blog/monitoring-logging-compliance/

Addressing compliance requirements for monitoring and logging can be a challenge
for any organization no matter how experienced or skilled the people responsible are.
Compliance requirements are often not well understood by technical teams and
there is not much instruction on how to comply with a compliance program. In this
article, we’ll discuss what some of these new compliance programs mean, why they
are important, and how you can comply with your logging and monitoring system.

What is a compliance program?


The goal of compliance is to provide stronger security in a verifiable manner. The way
they do this is to create standards or regulations that stipulate where an organization
must meet a minimum level of strong security practices. To meet these, it may
include technical controls; such as logging/ monitoring software, strong configuration
control, or administrative controls; such as policy, procedure, and training.
While compliance sets a minimum level of requirements, it is up to the organization
to determine the level of control needed. Most commonly, this is done via an
assessment of risk or threat in context with the requirement. If it can be shown that
a compliance control requirement is sufficient, then the control should be at least
what is required but can vary based on the assessment.

Who is affected by these regulations?


This next section includes a table of some common compliance programs – often

referred to by the regulation or security framework associated with it. A regulation
may be very broad and impact any business when it focuses on specific types of data
(just as a state privacy law protects any citizen’s data regardless of location) or it may
focus on a specific sector of the economy (like healthcare or energy utilities).
Many businesses may find they are within the scope of a regulation to a limited
degree, such as when a business self-insures its employees, it falls into some HIPAA
regulation requirements even though the business has no health-industry
orientation. If you’re wondering if you might be impacted by a regulation, the
question is typically determined by your legal or security department.

What do the compliance programs say?


The regulation may be how a program is identified, but often it is also a framework or
associated document that provides instruction on how a compliance effort is to be
implemented and assessed. It is wise to distribute these framework requirements to
staff (sometimes in addition to the regulation) in order to explain what is required.

Why and when do you have to comply?


The penalty mechanism helps to provide the answers to the “why and when do we
have to comply” question. In many instances, a compliance program costs more in
fines and fees than if the company were to just comply.
For instance, PCI fines range between $5,000–$100,000 per month[XVIII]. For US state
privacy breach laws (in 48 states) the average fine is $50–$90 per person included in
the breach. This does not exclude any other civil litigation that often follows a breach
event. HIPAA fines reach up to $1.5M per incident[XIX], and GDPR fines[XX] are up to
20 million Euros or 4 percent of annual global turnover, whichever is highest.
These high costs make companies pay attention to key aspects of the compliance
program – such as adhering to a compliance schedule. The compliance program, once
engaged, typically starts a strict timing for audits, and in many cases, the timeframe
for fixing or rectifying non-compliance findings.
The compliance framework is where compliance programs get challenging, and
where technical staff may get involved. You’ll want to start by reading the actual text
of the framework. Most compliance frameworks are typically publicly
available[XXI] so you can read about the requirements for the organization to follow.
Where is the compliance program applied?
The challenge with a framework is that it is usually somewhat “high level” for most
technical staff. This can be a point of frustration – most frameworks are not very
prescriptive.

There are a few aspects to keep in mind: first, there is often a question of scope,
defining where the compliance program is to be applied. While this may be obvious,
it can also be strategic when a company can segregate a high-risk function to limit the
costs of security.
Second, it is most common to find that “objectives” or security control requirements
are 1) categorized into common business operations and key functions (such as
Human Resources, Access Control, Physical Security, Computer Operations,
Encryption). Then 2) language is used to describe the outcome or components of a
“compliant” environment, but without defining the specific control.
Finally, it is becoming more common for objectives to be couched in language that
assumes a critical risk determination to be part of the solution to affect the features
of the control. Therefore, these are all areas that can be part of a company
compliance approach. Some examples of compliance language may help provide
some context for the topic of logging or monitoring.
Examples of compliance language for logging/event monitoring
NERC CIP-007-5 Table R4 – Security Event Monitoring
R4. Each Responsible Entity shall implement, in a manner that identifies, assesses,
and corrects deficiencies, one or more documented processes that collectively
include each of the applicable requirement parts in CIP-007-5 Table R4 – Security
Event Monitoring. [Violation Risk Factor: Medium] [Time Horizon: Same Day
Operations and Operations Assessment].
M4. Evidence must include each of the documented processes that collectively
include each of the applicable requirement parts in CIP-007-5 Table R4 – Security
Event Monitoring and additional evidence to demonstrate implementation as
described in the Measures column of the table.
ISO 27001 – A.12.4 – Logging and Monitoring
Objective: To record events and generate evidence.
Control 12.4.1 A.12.4.1 Event logging – Event logs recording user activities,
exceptions, faults, and information security events shall be produced, kept and
regularly reviewed.
Control A.12.4.2 Protection of log information – Logging facilities and log information
shall be protected against tampering and unauthorized access.
Control A.12.4.3 Administrator and operator logs – System administrator and system
operator activities shall be logged, and the logs protected and regularly reviewed.
Control A.12.4.4 Clock synchronization –The clocks of all relevant information
processing systems within an organization or security domain shall be synchronized
to a single reference time source.
PCI DSS (Requirement 10): Track and monitor all access to network resources and
cardholder data. Logging mechanisms and the ability to track user activities are critical
for effective forensics and vulnerability management. The presence of logs in all
environments allows for thorough tracking and analysis if something goes wrong
(a small tamper-evidence sketch follows this list).

Determining the cause of a compromise is very difficult without system activity logs.
10.1 Establish a process for linking all access to system components to each individual
user – especially access done with administrative privileges.
10.2 Implement automated audit trails for all system components for reconstructing
these events: all individual user accesses to cardholder data; all actions taken by any
individual with root or administrative privileges; access to all audit trails; invalid
logical access attempts; use of identification and authentication mechanisms;
initialization of the audit logs; creation and deletion of system-level objects.
10.3 Record audit trail entries for all system components for each event, including at
a minimum: user identification, type of event, date and time, success or failure
indication, the origin of the event, and identity or name of affected data, system
component or resource.
10.4 Using time synchronization technology, synchronize all critical system clocks and
times and implement controls for acquiring, distributing, and storing time.
10.5 Secure audit trails so they cannot be altered.
10.6 Review logs for all system components related to security functions at least
daily.
10.7 Retain audit trail history for at least one year; at least three months of history
must be immediately available for analysis.
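One way to work toward requirements such as 10.5 (audit trails that cannot be altered undetected) is to chain each log entry to the previous one with a cryptographic hash, in addition to forwarding logs to a separate, write-restricted collector. This is only an illustrative sketch, not a mechanism mandated by PCI DSS:

import hashlib

def append_entry(chain, entry):
    # Each record stores a hash over the previous record's hash plus its own text,
    # so any later modification breaks every hash that follows it.
    prev_hash = chain[-1][1] if chain else "0" * 64
    digest = hashlib.sha256((prev_hash + entry).encode()).hexdigest()
    chain.append((entry, digest))

def verify(chain):
    prev_hash = "0" * 64
    for entry, digest in chain:
        if hashlib.sha256((prev_hash + entry).encode()).hexdigest() != digest:
            return False
        prev_hash = digest
    return True

trail = []
append_entry(trail, "2019-01-01T10:00Z jdoe login success")
append_entry(trail, "2019-01-01T10:05Z jdoe read cardholder-db")
assert verify(trail)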
Takeaways
As you can see from the examples above, the compliance requirements do not
specify a specific product or solution – it is up to the organization to choose how to
address the requirements to achieve compliance. Such solutions should ideally bring
auditable outcomes so that an assessor or auditor could verify that the control in
place meets the expectations of the compliance program.
A second takeaway is that the topic – logging and monitoring – is fairly similar across
compliance programs. Many controls would be the same between compliance
programs because the underlying systems and technology risks are so similar.
Finally, there are typically assessment guides provided with many compliance
frameworks. With a little research, you will find multiple guidelines and checklists for
deploying or assessing compliance with your program. Many are likely to be
instructions to auditors and can give you “insight” as to how you might be assessed
and what sort of evidence is expected. Advance planning can save you time and
effort while illustrating to compliance auditors that your organization is staying ahead
of the goals for the program.
Final Recommendations
We have discussed the who, what, why, when, and where of compliance and want to
leave you with a couple final recommendations:
1) Consider the scope of the compliance program to ensure that your controls include
the system components, facilities, products, and business processes that are included
in the compliance program. Avoid focusing too narrowly on the scope of compliance.

2) Interpretation of control requirements can be challenging. One way to address
complex technical control configuration is to work towards standard security best
practices with all products, tools, and utilities at your enterprise, and consider
performing a risk assessment to help deploy the controls most applicable.

Read this article https://www.prosci.com/resources/articles/what-is-change-management

Organizations need to understand change and change management as integral
elements of any successful enterprise security architecture. They need to make sure
that changes to applications and other systems already in production are made in a
rigorous and controlled way to assure the quality of each change. As part of this,
organizations need to be able to plan for changes, manage them through a well-defined
lifecycle, approve them, document them, and roll them back if required. There are many
practices and guides available that organizations can use as frameworks for
change management and change control.

Information integrity means that organizations need to have procedures in place
that should be applied to compare or reconcile what was processed against what
was supposed to be processed. For example, controls can compare totals or check
sequence numbers to make sure the right operations were performed on the
correct data elements.

Another element of integrity is information accuracy. Because decisions are made
based on information, it is very important to ensure information remains accurate as
it is processed by applications. To check input accuracy, data validation and
verification checks should be incorporated into the appropriate applications. Other
controls that may be required are character checks, which compare input characters
against the expected type of characters, such as numbers or letters; this is sometimes
known as sanity checking by developers and others involved in applications. Range
checks verify input data against predetermined upper and lower limits to make sure
it fits within those ranges. Relationship checks compare input data with data on a
master record file elsewhere to ensure the correct relationships. Reasonableness
checks compare input data with an expected standard and are another form of sanity
checking. Transaction limits check input data against set ceilings on specified
transactions to make sure they do not exceed the specified upper limit.
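A minimal sketch of a few of these checks in application code; the field names and limits are hypothetical:

def validate_payment(amount, account_id, daily_total, master_accounts):
    errors = []
    # Character check: the account ID is expected to contain digits only
    if not account_id.isdigit():
        errors.append("account_id must be numeric")
    # Range check: the amount must fall within predetermined limits
    if not (0 < amount <= 10000):
        errors.append("amount outside allowed range")
    # Relationship check: the account must exist on the master record file
    if account_id not in master_accounts:
        errors.append("unknown account")
    # Transaction limit: the cumulative daily ceiling must not be exceeded
    if daily_total + amount > 25000:
        errors.append("daily transaction limit exceeded")
    return errors

print(validate_payment(500, "12345", 24900, {"12345", "67890"}))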

Information auditing is important because vulnerabilities may exist in the
development and software lifecycles, and as a result there is a likelihood that those
vulnerabilities will be attacked and exploited. Auditing procedures can assist in
detecting any abnormal activities that may indicate vulnerabilities are being
exploited. A secure information system must provide authorized personnel with the
ability to audit any action that can potentially cause unauthorized access to, damage
to, or in some
way affect the release of sensitive and valuable information. The level and type of
auditing depends on the auditing requirements of the installed software and the
sensitivity of data that is processed or stored on the system. The key point is that the
audit results provide information on what types of unauthorized activities have taken
place and who or what processes took the action to be able to drive the corrective
actions necessary at that point.

Risk is defined as an event or occurrence that has some probability of occurring and
of having an impact on an application project if it does occur. Being able to identify
risks and mitigate them is also a very important part of application security
effectiveness.

Risk management is an ongoing process that continues through the life of a project.
It includes processes for risk management planning, identification, analysis,
monitoring, and control. Many of these processes are updated throughout the
project lifecycle because new risks can be identified at any time and need to be
mitigated as they are identified and analyzed. The objective of risk management is to
mitigate or treat risk, and therefore reduce the probability and impact of events
adverse to the project.
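As a simple, hypothetical illustration of tracking probability and impact for identified risks (the 1-5 scales and the treatment threshold below are assumptions, not taken from any standard):

# Score each identified risk as probability x impact on a 1-5 scale,
# then flag the ones that exceed a chosen treatment threshold.
risks = [
    {"name": "unvalidated input in payment form", "probability": 4, "impact": 5},
    {"name": "offshore repository without access review", "probability": 3, "impact": 3},
    {"name": "missing log retention", "probability": 2, "impact": 2},
]

THRESHOLD = 9  # risks scoring above this need mitigation or treatment

for risk in risks:
    score = risk["probability"] * risk["impact"]
    action = "treat/mitigate" if score > THRESHOLD else "monitor"
    print(risk["name"] + ": score=" + str(score) + " -> " + action)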

When mitigations are implemented, they must be tested. In mature and efficient
SDLC environments, this is often done as part of the promotion between
development environments by the quality assurance and testing teams.

Reference https://www.plutora.com/blog/verification-vs-validation

Security findings should be addressed by the development team the same as any
other change request with the condition that the security assessor or another
independent entity verifies and validates the flaw has indeed been remediated.
These roles need to be distinct and separate. In large organizations, independent
verification and validation teams work to determine if security findings and flaws
are truly resolved. They do this by testing and using other assurance methods. This
process should also involve the audit group to independently verify that the
findings have been addressed. In other words, the developer or system owner does
not
authoritatively declare the risk mitigated without the concurrence of an
independent party that includes security and audit and possibly other stakeholders.
In addition to testing of mitigations, the developer should be encouraged to use
code signing as another means of integrity checking for the code they are
producing.

Reference https://en.wikipedia.org/wiki/Code_signing

Code signing is the process of digitally signing executables and scripts to confirm
the software author and guarantee that the code has not been altered or corrupted
since it was signed. The process employs the use of a cryptographic hash to validate
authenticity and integrity.[1]
Code signing can provide several valuable features. The most common use of code
signing is to provide security when deploying; in some programming languages, it
can also be used to help prevent namespace conflicts. Almost every code signing
implementation will provide some sort of digital signature mechanism to verify the
identity of the author or build system, and a checksum to verify that the object has
not been modified. It can also be used to provide versioning information about an
object or to store other meta data about an object.[2]
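As a small illustration of the underlying mechanism (a digital signature over the bytes of an artifact), the sketch below uses an Ed25519 key pair from the third-party Python cryptography package. This is an assumption made for illustration only and is not how any particular vendor's code-signing tool works:

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the artifact bytes (a placeholder here) with the private key
private_key = ed25519.Ed25519PrivateKey.generate()
artifact = b"example build output"  # in practice, the bytes of the executable
signature = private_key.sign(artifact)

# User side: verify with the publisher's public key before trusting the code
public_key = private_key.public_key()
try:
    public_key.verify(signature, artifact)
    print("signature valid: artifact unmodified since signing")
except InvalidSignature:
    print("signature invalid: do not trust this artifact")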

The efficacy of code signing as an authentication mechanism for software depends
on the security of underpinning signing keys. As with other public key infrastructure
(PKI) technologies, the integrity of the system relies on publishers securing their
private keys against unauthorized access. Keys stored in software on general-

purpose computers are susceptible to compromise. Therefore, it is more secure, and
best practice, to store keys in secure, tamper-proof, cryptographic hardware devices
known as hardware security modules or HSMs.[3]

Providing security
Many code signing implementations will provide a way to sign the code using a
system involving a pair of keys, one public and one private, similar to the process
employed by TLS or SSH. For example, in the case of .NET, the developer uses a
private key to sign their libraries or executables each time they build. This key will be
unique to a developer or group or sometimes per application or object. The
developer can either generate this key on their own or obtain one from a
trusted certificate authority (CA).[4]
Code signing is particularly valuable in distributed environments, where the source of
a given piece of code may not be immediately evident - for example Java
applets, ActiveX controls and other active web and browser scripting code. Another
important usage is to safely provide updates and patches to existing
software.[5] Windows, Mac OS X, and most Linux distributions provide updates using
code signing to ensure that it is not possible for others to maliciously distribute code
via the patch system. It allows the receiving operating system to verify that the
update is legitimate, even if the update was delivered by third parties or physical
media (disks).
Code signing is used on Windows and Mac OS X to authenticate software on first run,
ensuring that the software has not been maliciously tampered with by a third-party
distributor or download site. This form of code signing is not used on Linux because
of that platform's decentralized nature, the package manager being the predominant
mode of distribution for all forms of software (not just updates and patches), as well
as the open-source model allowing direct inspection of the source code if
desired. Debian-based Linux distributions (among others) validate downloaded
packages using public key cryptography.[6]
Trusted identification using a certificate authority (CA)
The public key used to authenticate the code signature should be traceable back to a
trusted root authority CA, preferably using a secure public key infrastructure (PKI).
This does not ensure that the code itself can be trusted, only that it comes from the
stated source (or more explicitly, from a particular private key).[7] A CA provides a root
trust level and is able to assign trust to others by proxy. If a user trusts a CA, then the
user can presumably trust the legitimacy of code that is signed with a key generated
by that CA or one of its proxies. Many operating systems and frameworks contain
built-in trust for one or more existing CAs (such as Entrust
Datacard, VeriSign/Symantec, DigiCert, Comodo, GoDaddy and GlobalSign). It is also
commonplace for large organizations to implement a private CA, internal to the
organization, which provides the same features as public CAs, but it is only trusted

within the organization.

Extended Validation (EV) Code Signing


Extended validation (EV) code signing certificates are subject to additional validation
and technical requirements, which are summarized in the CA/Browser Forum's EV
Code Signing Certificate Guidelines. These guidelines are based on the CA/B
Forum's Baseline Requirements and Extended Validation Guidelines. In addition to
validation requirements specific to EV, the EV code signing guidelines stipulate that
"the Subscriber’s private key is generated, stored and used in a crypto module
that meets or exceeds the requirements of FIPS 140-2 level 2."[8]
Certain applications, such as signing Windows 10 kernel-mode drivers, require an EV
code signing certificate.[9] Additionally, Microsoft's IEBlog states that Windows
programs "signed by an EV code signing certificate can immediately establish
reputation with SmartScreen reputation services even if no prior reputation exists for
that file or publisher." [10]
As of December 4, 2019, Microsoft listed the following providers of EV code signing
certificates:[11]
SSL.com
Symantec
Certum
Entrust
GlobalSign
Sectigo (formerly Comodo)
DigiCert
Alternative to CAs
The other model is where developers can choose to provide their own self-generated
key. In this scenario, the user would normally have to obtain the public key in some
fashion directly from the developer to verify the object is from them for the first
time. Many code signing systems will store the public key inside the signature. Some
software frameworks and OSs that check the code's signature before executing will
allow you to choose to trust that developer from that point on after the first run. An
application developer can provide a similar system by including the public keys with
the installer. The key can then be used to ensure that any subsequent objects that
need to run, such as upgrades, plugins, or another application, are all verified as
coming from that same developer.

Time-stamping
Time-stamping was designed to circumvent the trust warning that will appear in the
case of an expired certificate. In effect, time-stamping extends the code trust beyond
the validity period of a certificate.[12]
In the event that a certificate has to be revoked due to a compromise, a specific date

and time of the compromising event will become part of the revocation record. In
this case, time-stamping helps establish whether the code was signed before or after
the certificate was compromised.[12]

Code-Signing in Xcode
Developers need to sign their iOS and tvOS apps before running them on any real
device and before uploading them to the App Store. This is needed to prove that the
developer owns a valid Apple Developer ID. An application needs a valid profile or
certificate so that it can run on the devices.

Problems
Like any security measure, code signing can be defeated. Users can be tricked into
running unsigned code, or even into running code that refuses to validate, and the
system only remains secure as long as the private key remains private.[13][14]
It is also important to note that code signing does not protect the end user from any
malicious activity or unintentional software bugs by the software author — it merely
ensures that the software has not been modified by anyone other than the author.
Sometimes, sandbox systems do not accept certificates, because of a false time-
stamp or because of an excess usage of RAM.

Implementations
IBM's Lotus Notes has had PKI signing of code from Release 1, and both client and
server software have execution control lists to control what levels of access to data,
environment and file system are permitted for given users. Individual design
elements, including active items such as scripts, actions and agents, are always
signed using the editor's ID file, which includes both the editor's and the domain's
public keys. Core templates such as the mail template are signed with a dedicated ID
held by the Lotus template development team.

Microsoft implements a form of code signing (based on Authenticode) provided for
Microsoft tested drivers. Since drivers run in the kernel, they can destabilize the
system or open the system to security holes. For this reason, Microsoft tests drivers
submitted to its WHQL program. After the driver has passed, Microsoft signs that
version of the driver as being safe. On 32-bit systems only, installing drivers that are
not validated with Microsoft is possible after accepting to allow the installation in a
prompt warning the user that the code is unsigned. For .NET (managed) code, there
is an additional mechanism called Strong Name Signing that uses Public/Private keys
and SHA-1 hash as opposed to certificates. However, Microsoft discourages reliance
on Strong Name
Signing as a replacement for Authenticode.[15]

Unsigned code in gaming and consumer devices
In the context of consumer devices such as games consoles, unsigned code is often
used to refer to an application which has not been signed with the cryptographic
key normally required for software to be accepted and executed. Most console
games have to be signed with a secret key designed by the console maker or the
game will not load on the console. There are several methods to get unsigned code
to execute which include software exploits, the use of a modchip, a technique known
as the swap trick or running a softmod.

It may not initially seem obvious why simply copying a signed application onto
another DVD does not allow it to boot. On the Xbox, the reason for this is that the
Xbox executable file (XBE) contains a media-type flag, which specifies the type of
media that the XBE is bootable from. On nearly all Xbox software, this is set such that
the executable will only boot from factory produced discs so simply copying the
executable to burnable media is enough to stop the execution of the software.
However, since the executable is signed, simply changing the value of the flag is not
possible as this alters the signature of the executable causing it to fail validation
when checked.

Proliferation of connected devices

The proliferation of connected devices requires the ability to identify and
authenticate those devices. This is especially true in the Internet of Things (IoT),
where devices must be trusted. In the IoT, code signing in the software release process
ensures the integrity of IoT device software and firmware updates, and defends
against the risks associated with tampering with the device or the code embedded in it.
Device credentialing enables control over the manufacturing process of high
technology products and protects against unauthorized production of counterfeits.
Together with code signing, the technology ensures physical authenticity and the
authenticity and integrity of the code they possess at the time of manufacture
through the use of a digital birth certificate, or during subsequent upgrades through
code validation any time during the product lifecycle. This is creating a new
dimension for code signing, and elevating the security awareness and need to
maintain private signing keys secured within a dedicated protected environment to
establish a root of trust for the entire system. Given the prevalence of malware and
Advanced Persistent Threats (APTs), many software vendors, providers of online
services, enterprise IT organizations and manufacturers of high-technology IoT
devices are under pressure to increase the security of their high technology
manufacturing and code signing process.[16]

Reference https://en.wikipedia.org/wiki/Regression_testing

Regression testing (rarely non-regression testing[1]) is re-running functional and
non-functional tests to ensure that previously developed
and tested software still performs after a change.[2] If not, that would be called
a regression. Changes that may require regression testing include bug fixes,
software enhancements, configuration changes, and even substitution of electronic
components.[3] As regression test suites tend to grow with each found defect, test
automation is frequently involved. Sometimes a change impact analysis is
performed to determine an appropriate subset of tests (non-regression analysis[4]).
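A minimal sketch of a recorded regression test using Python's built-in unittest module; the function under test and the original bug are hypothetical:

import unittest

def apply_discount(price, percent):
    # Earlier bug: discounts over 100% produced negative prices; now clamped
    percent = min(max(percent, 0), 100)
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_bug_report_negative_price(self):
        # Regression test recorded when the bug was first fixed;
        # it is re-run after every change to catch re-emergence.
        self.assertEqual(apply_discount(100.0, 150), 0.0)

if __name__ == "__main__":
    unittest.main()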

Background
As software is updated or changed, or reused on a modified target, emergence of
new faults and/or re-emergence of old faults is quite common. Sometimes re-
emergence occurs because a fix gets lost through poor revision control practices (or
simple human error in revision control). Often, a fix for a problem will be "fragile" in
that it fixes the problem in the narrow case where it was first observed but not in
more general cases which may arise over the lifetime of the software. Frequently, a

fix for a problem in one area inadvertently causes a software bug in another area.
Finally, it may happen that, when some feature is redesigned, some of the same
mistakes that were made in the original implementation of the feature are made in
the redesign.
Therefore, in most software development situations, it is considered good coding
practice, when a bug is located and fixed, to record a test that exposes the bug and
re-run that test regularly after subsequent changes to the program.[5] Although this
may be done through manual testing procedures using programming techniques, it is
often done using automated testing tools.[6] Such a test suite contains software tools
that allow the testing environment to execute all the regression test
cases automatically; some projects even set up automated systems to re-run all
regression tests at specified intervals and report any failures (which could imply a
regression or an out-of-date test).[7] Common strategies are to run such a system
after every successful compile (for small projects), every night, or once a week. Those
strategies can be automated by an external tool.
Regression testing is an integral part of the extreme programming software
development method. In this method, design documents are replaced by extensive,
repeatable, and automated testing of the entire software package throughout each
stage of the software development process. Regression testing is done after
functional testing has concluded, to verify that the other functionalities are working.
In the corporate world, regression testing has traditionally been performed by
a software quality assurance team after the development team has completed work.
However, defects found at this stage are the most costly to fix. This problem is being
addressed by the rise of unit testing. Although developers have always written test
cases as part of the development cycle, these test cases have generally been
either functional tests or unit tests that verify only intended outcomes. Developer
testing compels a developer to focus on unit testing and to include both positive and
negative test cases.[8]
Techniques
The various regression testing techniques are:
Retest all
This technique checks all the test cases on the current program to check its integrity.
Though it is expensive as it needs to re-run all the cases, it ensures that there are no
errors because of the modified code.[9]
Regression test selection
Unlike retest all, this technique runs a part of the test suite (owing to the cost of
retest all) if the cost of selecting that part of the test suite is less than the cost of the
retest all technique.[9]
Test case prioritization
Prioritize the test cases so as to increase a test suite's rate of fault detection. Test
case prioritization techniques schedule test cases so that the test cases that are
higher in priority are executed before the test cases that have a lower priority.[9]
Types of test case prioritization
General prioritization – Prioritize test cases that will be beneficial on subsequent
versions.
Version-specific prioritization – Prioritize test cases with respect to a particular
version of the software.
Hybrid
This technique is a hybrid of regression test selection and test case prioritization.[9]
Benefits and drawbacks
Regression testing is performed when changes are made to the existing functionality
of the software or if there is a bug fix in the software. Regression testing can be
achieved through multiple approaches, if a test all approach is followed, it provides
certainty that the changes made to the software have not affected the existing
functionalities, which are unaltered.[10]
In agile software development—where the software development life cycles are very
short, resources are scarce, and changes to the software are very frequent—
regression testing might introduce a lot of unnecessary overhead.[10]
In a software development environment which tends to use black box components
from a third party, performing regression testing can be tricky, as any change in the
third-party component may interfere with the rest of the system (and performing
regression testing on a third-party component is difficult, because it is an unknown
entity).[10]
Uses
Regression testing can be used not only for testing the correctness of a program but
often also for tracking the quality of its output.[11] For instance, in the design of
a compiler, regression testing could track the code size and the time it takes to
compile and execute the test suite cases.
Also as a consequence of the introduction of new bugs, program maintenance requires far
more system testing per statement written than any other programming. Theoretically, after
each fix, one must run the entire batch of test cases previously run against the system to
ensure that it has not been damaged in an obscure way. In practice, such regression
testing must indeed approximate this theoretical idea, and it is very costly.
— Fred Brooks, The Mythical Man Month, p. 122
Regression tests can be broadly categorized as functional tests or unit tests.
Functional tests exercise the complete program with various inputs. Unit tests
exercise individual functions, subroutines, or object methods. Both functional testing
tools and unit-testing tools tend to be automated and are often third-party products
that are not part of the compiler suite. A functional test may be a scripted series of
program inputs, possibly even involving an automated mechanism for controlling
mouse movements and clicks. A unit test may be a set of separate functions within
the code itself or a driver layer that links to the code without altering the code being

tested.

Reference http://softwaretestingfundamentals.com/acceptance-testing/

ACCEPTANCE TESTING is a level of software testing where a system is tested for
acceptability. The purpose of this test is to evaluate the system’s compliance with
the business requirements and assess whether it is acceptable for delivery.
Definition by ISTQB
acceptance testing: Formal testing with respect to user needs, requirements, and
business processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customers or other authorized entity to
determine whether or not to accept the system.
Analogy
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and
clip, the ink cartridge and the ballpoint are produced separately and unit tested
separately. When two or more units are ready, they are assembled and Integration
Testing is performed. When the complete pen is integrated, System Testing is
performed. Once System Testing is complete, Acceptance Testing is performed so as
to confirm that the ballpoint pen is ready to be made available to the end-users.
Method

Usually, the Black Box Testing method is used in Acceptance Testing. Testing does not
normally follow a strict procedure; it is not scripted but rather ad hoc.
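Even when acceptance testing is largely ad hoc, individual acceptance criteria can be captured as simple, repeatable black-box checks. A hypothetical sketch (the requirement and the function names are illustrative only):

from datetime import date, timedelta

# Acceptance criterion (hypothetical): an order confirmation must include an
# order number and a delivery date no more than 7 days out.
def meets_confirmation_criterion(confirmation):
    has_order_number = bool(confirmation.get("order_number"))
    delivery_date = confirmation.get("delivery_date", date.max)
    return has_order_number and delivery_date <= date.today() + timedelta(days=7)

# Black-box check against the system's actual output (stubbed here)
sample = {"order_number": "A-1001", "delivery_date": date.today() + timedelta(days=5)}
print("acceptance criterion met:", meets_confirmation_criterion(sample))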
Tasks
• Acceptance Test Plan – Prepare, Review, Rework, Baseline
• Acceptance Test Cases/Checklist – Prepare, Review, Rework, Baseline
• Acceptance Test – Perform
When is it performed?
Acceptance Testing is the fourth and last level of software testing performed
after System Testing and before making the system available for actual use.
Who performs it?
Internal Acceptance Testing (Also known as Alpha Testing) is performed by members
of the organization that developed the software but who are not directly involved in
the project (Development or Testing). Usually, it is the members of Product
Management, Sales and/or Customer Support.
External Acceptance Testing is performed by people who are not employees of the
organization that developed the software.
Customer Acceptance Testing is performed by the customers of the
organization that developed the software. They are the ones who asked the
organization to develop the software. [This is in the case of the software not
being owned by the organization that developed it.]
User Acceptance Testing (Also known as Beta Testing) is performed by the end
users of the software. They can be the customers themselves or the
customers’ customers.
