
CHAPTER-1 T-4

Cyber Security
Fundamentals
COURSE
Pengantar Teknologi Informasi dan Komunikasi (Introduction to Information and Communication Technology)

INAS RAHMA ALIFIA (11230940000025)


Point of
Discussion
1.1 Network and Security Concept

Information Assurance Fundamentals
Basic Cryptography
Symmetric Encryption
Public Key Encryption
The Domain Name System
Firewalls
Virtualization
Radio-Frequency Identification
Information Assurance
Fundamentals
AUTHENTICATION

Authentication is important to any secure system, as it is the
key to verifying the source of a message or that an individual is
who he or she claims to be. The NIAG defines authentication as a
“security measure designed to establish the validity of a
transmission, message, or originator, or a means of verifying
an individual’s authorization to receive specific categories of
information.”
Information Assurance
Fundamentals
AUTHORIZATION

Authorization focuses on determining what a user has


permission to do. The NIAG defines authorization as
“access privileges granted to a user, program, or process.”
Information Assurance
Fundamentals
NONREPUDIATION

The NIAG defines nonrepudiation as “assurance the sender of data is provided


with proof of delivery and the recipient is provided with
proof of the sender’s identity, so neither can later deny
having processed the data.”
Information Assurance
Fundamentals
CONFIDENTIALITY

The NIAG defines confidentiality as “assurance that


information is not disclosed to unauthorized
individuals, processes, or devices.”
Information Assurance
Fundamentals
INTEGRITY

Integrity normally refers to data integrity, or ensuring


that stored data are accurate and contain no
unauthorized modifications.
Information Assurance
Fundamentals
AVAILABILITY

The NIAG defines availability as “timely, reliable


access to data and information services for
authorized users.”
Basic Cryptography

This section provides information on basic


cryptography to explain the history and basics of
ciphers and cryptanalysis. Later sections will explain
modern cryptography applied to digital systems.
Basic Cryptography

FREQUENCY OF LETTERS IN THE ENGLISH LANGUAGE
Short messages can be difficult to decrypt because there is little for the analyst to study, but long messages encrypted with substitution ciphers are vulnerable to frequency analysis. For instance, in the English language, some letters appear in more words than others do. Each letter has a characteristic frequency.

A LOTTERY CAGE RANDOMIZES THE NUMBER SELECTION
The one-time pad is a cryptographic cipher that, with a properly randomized key, produces unbreakable ciphertext. A one-time pad is similar to a substitution cipher, in which another letter based on a key replaces a letter, but rather than using the same key for the entire message, a new key is used for each letter. This key must be at least as long as the message and must not contain any patterns a cryptanalyst could use to break the code. If, for example, the first number of the key is fifteen, the letter a is rotated fifteen spaces, resulting in the letter p. The recipient can decrypt the text by reversing the function, rotating the alphabet left by the number specified in the key rather than right. A frequency analysis will fail against this cipher because the same character in the ciphertext can be the result of different inputs from the cleartext. The key to the one-time pad is using it only one time. If the cryptographer uses the numbers in a repeating pattern or uses the same numbers to encode a second message, a pattern may appear in the ciphertext that would help a cryptanalyst break the code.

THE GERMAN ENIGMA CODING MACHINE
One coding machine of this type is the Enigma, invented by the German engineer Arthur Scherbius at the end of World War I. The Enigma used a series of rotors to encrypt each letter typed into it with a different key. Another user with an Enigma machine could decode the message because their system had the same combination of encoded rotors. The Enigma could not perfectly replicate a one-time pad because any system that does not begin with random input will eventually reveal a pattern.
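The per-letter rotation scheme described above can be sketched in a few lines of Python. This is a minimal illustration assuming a lowercase a–z alphabet: each letter is rotated right by its own key number to encrypt, and left by the same number to decrypt.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def encrypt(plaintext, key):
    # Rotate each letter right by its own key number (one key number per letter).
    return "".join(
        ALPHABET[(ALPHABET.index(c) + k) % 26] for c, k in zip(plaintext, key)
    )

def decrypt(ciphertext, key):
    # Reverse the function: rotate each letter left by the same key number.
    return "".join(
        ALPHABET[(ALPHABET.index(c) - k) % 26] for c, k in zip(ciphertext, key)
    )

# The example from the text: the letter a rotated fifteen spaces yields p.
assert encrypt("a", [15]) == "p"

# A fresh random key, at least as long as the message, with one number per letter.
key = [random.randrange(26) for _ in range(6)]
ciphertext = encrypt("secret", key)
assert decrypt(ciphertext, key) == "secret"
```

Because every letter gets an independent random shift, the same ciphertext character can come from different cleartext letters, which is exactly why frequency analysis fails here.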
Symmetric Encryption

Symmetric encryption is a class of reversible encryption algorithms


that use the same key for both encrypting and decrypting messages.
Symmetric encryption, by definition, requires both communication
endpoints to know the same key in order to send and receive encrypted
messages. Symmetric encryption depends upon the secrecy of a key.
Key exchanges or preshared keys present a challenge to keeping the
encrypted text’s confidentiality and are usually performed out of band
using different protocols. Algorithms in this category are usually fast
because their operations use cryptographic primitives.
• Example of Simple Symmetric Encryption with Exclusive OR
(XOR)
• Improving upon Stream Ciphers with Block Cipher
Public Key
Encryption
This section discusses asymmetric encryption, commonly known
as public key encryption. It explains the key principles of public
key encryption, emphasizing the use of two linked keys: a
public key for encryption and a private key for decryption. The
primary advantage over symmetric key encryption is
highlighted: the secure communication recipient publishes their
public key, allowing anyone to encrypt messages, while only the
recipient, with the private key, can decrypt them. The process is
likened to a lock box analogy, where the public key is the lock
and the private key is the key to unlock it.
Public Key
Encryption

The private key has a mathematical relationship to


the public key, but this relationship does not
provide an easy way for an attacker to derive the
private key from the public key. Visually, the
process of encrypting and decrypting a message
using the public key method is similar to the
process of using symmetric encryption with the
notable exception that the keys used in the process
are not the same. Exhibit 1-8 illustrates this
disconnect.
Public Key
Encryption

If the sender were to encrypt the message 88 (which is between 0 and 186)
using the RSA method, the sender would calculate 88^7 mod 187, which equals 11.
Therefore, the sender would transmit the number 11 as the ciphertext to the
recipient. To recover the original message, the recipient would then need to
transform 11 into the original value by calculating 11^23 mod 187, which equals
88. Exhibit 1-9 depicts this process.
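The arithmetic in this example can be checked directly with Python's built-in three-argument `pow` (modular exponentiation). The factors p = 11 and q = 17 and the exponents e = 7 and d = 23 are the parameters implied by the example's modulus of 187.

```python
p, q = 11, 17
n = p * q                    # 187, the modulus from the example
phi = (p - 1) * (q - 1)      # 160
e, d = 7, 23                 # e * d = 161, and 161 mod 160 = 1
assert (e * d) % phi == 1

message = 88
ciphertext = pow(message, e, n)    # 88^7 mod 187
assert ciphertext == 11

recovered = pow(ciphertext, d, n)  # 11^23 mod 187
assert recovered == 88
```

These numbers are of course a toy: real RSA moduli are thousands of bits long, which is what makes deriving the private key from the public key infeasible.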
The Domain Name
System

This section explains the fundamentals of the domain name system


(DNS), which is an often overlooked component of the Web’s
infrastructure, yet is crucial for nearly every networked application.
DNS is a fundamental piece of the Internet architecture. Knowledge
of how the DNS works is necessary to understand how attacks on the
system can affect the Internet as a whole and how criminal infrastructure
can take advantage of it.
The DNS uses computers known as name servers to map domain
names to the corresponding IP addresses using a database of records.
Exhibit 1-10 shows how the hierarchical
nature of the DNS leads to a tree-like
structure consisting of domains and
subdomains.

Exhibit 1-11 depicts the most common
way for systems to resolve domain
names: by contacting a recursive DNS
server and allowing it to do the work.
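The hierarchical lookup can be sketched as a toy resolver walking a made-up in-memory zone table from the root down, following one delegation per label. The server names and the address below are illustrative only (192.0.2.0/24 is a reserved documentation range), not real DNS data.

```python
# Hypothetical zone data: each level of the tree delegates to the next server.
ZONES = {
    ".": {"com.": "ns-com"},
    "ns-com": {"example.com.": "ns-example"},
    "ns-example": {"www.example.com.": "192.0.2.10"},  # leaf: an A record
}

def resolve(name: str) -> str:
    """Walk the DNS tree from the root, following delegations label by label."""
    server = "."
    labels = name.rstrip(".").split(".")          # e.g. ["www", "example", "com"]
    for i in range(len(labels) - 1, -1, -1):
        zone = ".".join(labels[i:]) + "."
        answer = ZONES[server][zone]
        if i == 0:
            return answer                          # full name reached: the address
        server = answer                            # otherwise, follow the delegation

assert resolve("www.example.com") == "192.0.2.10"
```

A recursive DNS server performs this walk on the client's behalf (and caches the answers), which is why most end systems only ever talk to their recursive resolver.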
Security and The
DNS

As a fundamental part of the modern Internet, the security of the DNS is


important to all Internet users. In the previous discussion of how the DNS
system works, it is important to note that no authentication of results ever
occurred. This makes the system vulnerable to an attack known as DNS
cache poisoning, wherein an attacker tricks a DNS server into accepting
data from a nonauthoritative server and returning them to other resolvers.
Extensions to the DNS protocol, known as DNSSEC, solve this problem
using cryptographic keys to sign RRs.
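As a rough illustration of what "signing RRs" means, the sketch below reuses the toy RSA numbers from the earlier example to sign a record's hash. This is only a sketch of the concept: real DNSSEC uses standardized algorithms such as RSA/SHA-256 or ECDSA with far larger keys, and the record text here is hypothetical.

```python
import hashlib

# Toy RSA key from the earlier example; real DNSSEC keys are vastly larger.
n, e, d = 187, 7, 23

def rr_digest(record: str) -> int:
    # Hash the record, reduced into the toy key's range.
    return int.from_bytes(hashlib.sha256(record.encode()).digest(), "big") % n

def sign_rr(record: str) -> int:
    # The zone owner signs with the private exponent d.
    return pow(rr_digest(record), d, n)

def verify_rr(record: str, signature: int) -> bool:
    # Any resolver can verify with the public exponent e.
    return pow(signature, e, n) == rr_digest(record)

rr = "www.example.com. A 192.0.2.10"
sig = sign_rr(rr)
assert verify_rr(rr, sig)
```

A resolver that checks such signatures will reject poisoned answers, since an attacker without the private key cannot produce a valid signature for forged data.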
Firewalls

Devices and software that fall under the banner of “firewall”
have become a necessity for any and all computers connected
to the Internet that the user wants to remain safe. While the
term firewall may conjure different images for different
people, the basic concept of a firewall is: “Keep the bad
people out of our computer.”

Firewalls
History Lesson

1980s
At the time, network administrators used network routers to prevent traffic from
one network from interfering with the traffic of a neighboring network.
1990s
Enhanced routers introduced in the 1990s included filtering rules. The designers of
these routers designated the devices as security firewalls.
early 1990s
The next generation of security firewalls improved on these filter-enabled routers.
During the early 1990s, companies such as DEC, Check Point, and Bell Labs
developed new features for firewalls.
Firewalls
What’s in a Name?

The question remains: what exactly is a firewall? Firewalls are


network devices or software that separate one trusted network
from an untrusted network (e.g., the Internet) by means of rule-
based filtering of network traffic as depicted in Exhibit 1-13.
Despite the broad definition of a firewall, the specifics of what
makes up a firewall depend on the type of firewall. There are
three basic types of firewall: packet-filtering firewalls, stateful
firewalls, and application gateway firewalls.

While Exhibit 1-13 identifies the firewall as a separate physical


device at the boundary between an untrusted and trusted
network, in reality a firewall is merely software.
Firewalls
Packet-Filtering Firewalls

Packet-filtering firewalls work at the IP level of the network. Most routers


integrate this type of firewall to perform basic filtering of packets based on
an IP address. The principle behind packet-filtering firewalls is that the
firewall bases the decision to allow a packet from one network into another
network solely on the IP address of the source and destination of the packet.
Packet-filtering firewalls can also expand on the basic principle of IP-
address-only filtering by looking at the Transmission Control Protocol (TCP)
or User Datagram Protocol (UDP) source and destination ports.

In this example, the firewall administrator has placed an ALLOW rule after the
DENY ALL rule. Because rules are processed in order, the last ALLOW rule would
never be reached.
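The rule-shadowing problem can be demonstrated with a toy first-match rule evaluator; the rule set, addresses, and field names below are hypothetical. Because rules are checked top to bottom, the DENY ALL entry guarantees the final ALLOW rule can never match.

```python
import ipaddress

# Hypothetical rule set, evaluated top to bottom; the first match wins.
RULES = [
    {"action": "ALLOW", "src": "10.0.0.0/8", "dst_port": 22},
    {"action": "DENY",  "src": "any",        "dst_port": "any"},  # DENY ALL
    {"action": "ALLOW", "src": "any",        "dst_port": 80},     # misplaced: never reached
]

def matches(rule, src, dst_port):
    # Check the source address against the rule's network, then the port.
    if rule["src"] != "any" and ipaddress.ip_address(src) not in ipaddress.ip_network(rule["src"]):
        return False
    return rule["dst_port"] in ("any", dst_port)

def filter_packet(src, dst_port):
    for rule in RULES:
        if matches(rule, src, dst_port):
            return rule["action"]
    return "DENY"  # implicit default deny

assert filter_packet("10.1.2.3", 22) == "ALLOW"
assert filter_packet("198.51.100.7", 80) == "DENY"  # DENY ALL shadows the last rule
```

Moving the port-80 ALLOW rule above the DENY ALL entry is the fix; real firewall configuration languages behave the same way.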
Firewalls
Stateful Firewalls

A stateful firewall is a kind of firewall that keeps track of and monitors the
state of active network connections while analyzing incoming traffic and
looking for potential risks in the traffic and data.
Stateful firewalls facilitate secure and efficient communication by allowing
valid sessions, blocking invalid attempts, managing memory effectively,
and reevaluating connections that have become inactive. Their ability to
quickly determine the state of communication contributes to the overall
security and performance of network traffic.
This section provides insights into how stateful firewalls handle session
establishment, invalid session attempts, and the efficient management of
connection states to ensure effective and secure network communication.
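The connection-tracking idea can be sketched with a minimal state table; the class and method names below are illustrative, and real stateful firewalls track far more (TCP flags, sequence numbers, timeouts). The key behavior is that inbound traffic is only allowed when it matches a session that the trusted side opened.

```python
class StatefulFirewall:
    def __init__(self):
        self.connections = set()  # tracked (client, server, port) sessions

    def outbound(self, client, server, port):
        # An outbound packet from the trusted side opens a tracked session.
        self.connections.add((client, server, port))
        return "ALLOW"

    def inbound(self, server, client, port):
        # Inbound traffic is allowed only if it matches an existing session.
        if (client, server, port) in self.connections:
            return "ALLOW"
        return "DENY"  # unsolicited inbound traffic is dropped

fw = StatefulFirewall()
fw.outbound("10.0.0.5", "203.0.113.9", 443)
assert fw.inbound("203.0.113.9", "10.0.0.5", 443) == "ALLOW"  # reply to our request
assert fw.inbound("203.0.113.9", "10.0.0.5", 22) == "DENY"    # unsolicited attempt
```

Expiring entries for inactive sessions (omitted here) is what keeps the state table's memory use bounded.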
Firewalls
Application gateway firewalls

Application gateway firewalls, also known as proxies, are the most


recent addition to the firewall family. These firewalls work in a similar
manner to the stateful firewalls, but instead of only understanding the
state of a TCP connection, these firewalls understand the protocol
associated with a particular application or set of applications. A classic
example of an application gateway firewall is a Web proxy or e-mail-
filtering proxy.
Firewalls
Summary

From this section we can summarize that:


• Diverse Forms of Firewalls: Firewalls exist in various forms, ranging from simple packet
filtering to more sophisticated proxy solutions. This diversity allows organizations to
choose the type of firewall that best suits their security needs.
• Complexity and Documentation: The topic of firewalls is acknowledged as complex, with
extensive documentation available. Authors in the IT security community have dedicated
entire books to designing, administering, and implementing firewalls, highlighting the
depth of knowledge and expertise associated with firewall technology.
• High-Level Conceptual Understanding: While the minute details of firewall operation can
be complex, it is emphasized that a high-level conceptual understanding is crucial.
Knowing how firewalls process traffic and prevent unwanted intrusions is key to grasping
their security implications.
• Single Layer of Defense: Firewalls are positioned as one layer within the broader defense-
in-depth strategy. They contribute by reducing the attack surface of a server, blocking
unnecessary ports from the broader Internet. However, it's underscored that firewalls alone
cannot provide comprehensive protection.
• Limitations of Firewalls: As with antivirus solutions, the idea that firewalls
can stop all threats from the Internet is overstated. Firewalls have limitations,
particularly in protecting against specific vulnerabilities like buffer overflows and privilege
escalation attacks. They cannot safeguard resources that are susceptible to these targeted
exploits.
Virtualization
In the Beginning, There Was Blue ...

This section discusses the challenges and expenses associated with managing
infrastructure resources such as servers and introduces the concept of
virtualization as a solution to alleviate some of these costs.
• Introduction of Virtualization: To address the high operational costs and
administrative burdens, organizations are turning to virtualization.
Virtualization, at its core, involves the simulation or emulation of a real
product (in this case, servers) within a virtual environment.
• Historical Context of Virtualization: The concept dates back to the M44/44X
Project in the 1960s at the IBM Thomas J. Watson Research Center. This
project involved the simulation of multiple IBM 7044 mainframes (44X)
within a single 7044 (M44) mainframe. The term "virtual machine (VM)"
was coined to describe this concept of simulating or emulating a computer
inside another computer using both hardware and software.
• Mainframe Virtualization: The use of virtual machines inside mainframes has
been a common practice for decades. Mainframes with virtualization
capabilities can operate not as a single machine but as multiple machines
simultaneously. Each virtual machine can run its operating system
independently on the same physical machine, effectively turning one
machine into multiple machines.
Virtualization
The Virtualization Menu

Virtualization comes in many forms such as platform and


application virtualization. The most predominant platform
virtualization techniques include full virtualization, hardware-
assisted virtualization, paravirtualization, and operating
system virtualization. Each of these techniques accomplishes
the task of virtualization in different ways, but each results in
a single machine performing the function of multiple
machines working at the same time. The distinction between
the different virtualization techniques exists due to the way
the virtual machine application, commonly referred to as the
virtual machine monitor (VMM) or hypervisor, partitions the
physical hardware and presents this hardware to virtual
machines.
Virtualization
The Virtualization Menu

Exhibit 1-15 illustrates the relationship of these


components. The key component, the component that
makes virtualization possible, is the VMM.
Virtualization systems consist of several key
components: a VMM, physical hardware, virtual
hardware, virtual operating systems, and a host (or
real) operating system. The VMM is the application
layer between the various virtual machines and the
underlying physical hardware. The VMM provides the
framework for the virtual machine by creating the
necessary virtual components.
Virtualization
Getting a Helping Hand from the Processor

This section outlines how hardware-assisted virtualization, facilitated by


processor extensions in modern x86-based processors, introduces a new
privilege level for the VMM, handles instructions efficiently, and
minimizes interference between the virtual machine and host operating
systems, ultimately reducing overhead. And it’s related to the maturation of
virtualization technology and the introduction of hardware-assisted
virtualization:
• Evolution of Virtualization Technology: Virtualization technology has
matured, prompting hardware manufacturers like Intel and AMD to
become involved. This evolution has led to the development of
hardware-assisted virtualization.
• Processor Extensions: Intel's Virtualization Technology (VT) and
AMD's AMD-V are features incorporated into newer x86-based
processors. These processor extensions address challenges related to
privileged x86 instructions that Virtual Machine Monitors (VMM)
cannot virtualize effectively.
• New Privilege Level for VMM: With hardware-assisted virtualization,
the VMM operates in a new root mode privilege level below ring-0.
This level is more privileged than ring-0, where the traditional VMM
resides.
Virtualization
If All Else Fails, Break It to Fix It

This section discusses paravirtualization, a virtualization approach


developed before the introduction of hardware-assisted virtualization
technologies in the x86 architecture. Paravirtualization addresses the
nonvirtualizable instruction problem in x86 processors by allowing the
virtual machine's operating system to run in ring-0 after modifying the
system to restrict dangerous x86 instructions. Unlike full virtualization,
paravirtualization breaks instructions causing instability and replaces
them with calls to the Virtual Machine Monitor (VMM) for proper
handling. However, the drawback is that it requires modification of the
virtual machine's operating system kernel. Paravirtualization is
challenging for closed-source operating systems, with most
implementations running modified Linux operating systems.
Commercial applications like VMware support paravirtualization, but
the choice of the virtual machine's operating system can limit its
usefulness.
Virtualization
Use What You Have

This section discusses operating system-assisted virtualization,
highlighting its fundamental differences from full virtualization,
paravirtualization, and hardware-assisted virtualization. Operating
system-assisted virtualization does not aim to provide a complete
virtualized machine with dedicated I/O, memory, and processors;
instead, the emphasis is on providing the illusion of a dedicated
operating system to applications. This technique is commonly seen in
Linux- and Unix-based systems through tools like chroot, FreeVPS, and
FreeBSD Jail. Unlike other virtualization techniques that support
ring-0 instructions, operating system-assisted virtualization only
provides user mode resources, preventing the execution of privileged
instructions requiring ring-0. This form of virtualization allows a
single operating system instance to run multiple applications in
isolation, each having access to necessary operating system resources
such as disk and network access. Exhibit 1-16 depicts this form of
virtualization.
Virtualization
Doing It the Hard Way

This section discusses emulators and how they operate on similar principles
as virtualization systems but with distinct differences. Unlike virtualization
systems, emulators are not restricted by the requirement that the host
machine must match the same basic architecture as the virtual machine.
Emulators emulate all aspects of the virtual machine's hardware, allowing
them to run on host machines with different architectures. Emulators
translate virtual machine instructions into instructions that can run on the
host machine, and they are capable of running virtual machines with
radically different architectures than the host machine. However, this
flexibility comes with a performance cost, as each CPU instruction
executed by the virtual machine must be translated into a series of
instructions for the host machine's CPU, resulting in overhead and
performance penalties. Emulators are not limited to dissimilar architectures;
they can also run virtual machines with the same architecture as the host
machine, providing a more realistic virtual environment without relying on
the translation of certain ring-0 instructions.
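The per-instruction translation cost described above can be sketched with a toy emulator for a made-up two-register instruction set. The instruction names and registers are invented for illustration: the point is that every guest instruction becomes several host operations, which is where the emulation overhead comes from.

```python
# Toy emulator: each "guest" instruction is dispatched to host-side Python
# operations, illustrating the per-instruction translation overhead.
def emulate(program, registers=None):
    regs = registers or {"r0": 0, "r1": 0}
    for op, *args in program:
        if op == "mov":    # mov reg, immediate-value
            regs[args[0]] = args[1]
        elif op == "add":  # add dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        else:
            raise ValueError(f"unknown instruction: {op}")
    return regs

guest_program = [("mov", "r0", 40), ("mov", "r1", 2), ("add", "r0", "r1")]
assert emulate(guest_program)["r0"] == 42
```

Because the guest instruction set is interpreted rather than executed natively, the guest architecture need not match the host's, exactly as the section describes.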
Virtualization
Biting the Hand That Feeds

This section begins by acknowledging the benefits of virtualization in


reducing the number of physical servers but quickly shifts focus to the
associated risks. It points out that despite efforts to establish boundaries
between host systems and virtual machines, malicious actors may exploit
vulnerabilities in virtualization systems. Examples from 2009 illustrate the
potential for attackers to breach these boundaries, gaining access to the
host machine's memory from within a virtual machine. The complexity of
virtualization systems makes them prone to vulnerabilities, and
compromises in a single server's operating system or application can have
widespread effects on all virtual machines within the same physical
machine. The mention of cloud computing highlights the broader impact
of such vulnerabilities across enterprises sharing the same virtual
infrastructure. To mitigate risks, the paragraph suggests separating
sensitive virtual machines from public ones, thereby reducing the potential
impact of vulnerabilities at the virtual machine boundary.
Virtualization
Conclusion

Virtualization has many advantages ranging from server


consolidation to program isolation. While the technology
has been available in some form for decades,
advancements in modern computing hardware have led to
a more widespread adoption of the technology.
Virtualization is a key component of the recent influx of
new cloud-computing technologies currently on the
market. The growth of the virtualization market is far
from reaching its peak.
Radio-Frequency
Identification
IDENTIFY WHAT?
This section discusses the data contained in RFID tags, particularly focusing on
Electronic Product Code (EPC) tags. EPCs are the RFID equivalent of barcodes and
serve to replace them. They are commonly passive RFID tags integrated into stickers,
containing information similar to Universal Product Codes (UPC) but with the capacity
for much more data. The data in a typical EPC is a 96-bit number, representing the
product's manufacturer, type, and serial number. Unlike UPC codes, EPC codes allow
for the use of over 600 billion unique serial numbers, enabling precise product
identification. The paragraph also introduces the use of RFID tags beyond products,
such as RFID-equipped ID cards for building and system access. Access cards, unlike
EPC tags, need to be more secure to prevent cloning or tampering. Contactless smart
cards (CSCs) are introduced as a more complex form of RFID tag, incorporating
cryptography to secure information and verify the identity of the reader. Examples of
CSC products include contactless credit cards, access control badges, and electronic
passports, emphasizing the critical importance of their security to prevent theft of money
or identity without direct contact with the owner.
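To illustrate how a single 96-bit number can carry manufacturer, product type, and serial fields, here is a sketch that splits a value with assumed field widths. The actual EPC tag data standard defines several partitioned layouts with different widths, so treat the widths below as hypothetical.

```python
# Illustrative only: split a 96-bit value into manufacturer / product / serial
# fields. The field widths here are assumptions, not the actual EPC layout.
def split_epc(epc: int) -> dict:
    serial = epc & ((1 << 38) - 1)            # low 38 bits: serial number
    product = (epc >> 38) & ((1 << 24) - 1)   # next 24 bits: product type
    manufacturer = epc >> 62                  # remaining high bits: manufacturer
    return {"manufacturer": manufacturer, "product": product, "serial": serial}

# Pack some sample field values into one 96-bit-style number, then split it.
epc = (5 << 62) | (7 << 38) | 12345
assert split_epc(epc) == {"manufacturer": 5, "product": 7, "serial": 12345}
```

The large serial field is what distinguishes an EPC from a UPC barcode: every individual item can carry a unique number, not just every product line.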
Radio-Frequency
Identification
SECURITY AND PRIVACY
CONCERNS
This section discusses controversies surrounding RFID security measures and privacy concerns. It
cites a 2005 incident where researchers at Johns Hopkins University, led by Dr. Avi
Rubin, successfully broke the 64-bit encryption used by RFID-enhanced car keys and
Exxon's Speedpass RFID payment system. The encryption, initially considered sufficient
in 1993, was found inadequate, raising security concerns. This section highlights the
broader privacy implications of RFID tags, emphasizing that even tags without
identifying information can be used in conjunction with other data to reveal personal
details. It illustrates a scenario where RFID-tagged items, when linked to a buyer's credit
card information, could enable tracking, targeted advertisements, and location
monitoring, reminiscent of the film Minority Report. The section concludes by urging
individuals and organizations deploying or carrying RFID-enabled devices to carefully
consider these privacy concerns. It suggests the use of RFID wallets, made of metallic
material to block signals, as a countermeasure against unauthorized RFID readers.
Despite the advantages of RFID technology, the section stresses the need for
implementing countermeasures to protect against cloning and modification for
identification and authentication purposes.
Thank you
