Authors:
Carmine Rizzo (ETSI)
Charles Brookson (Zeata Security and Azenby)
Carmine Rizzo has worked in the ETSI Secretariat in France since November 2007. He is the point of
reference for security standardization activities and responsible for the supervision, coordination and
promotion of ETSI security standardization work within and across various Technical Committees and
Working Groups.
His professional background in the United Kingdom includes experience in the private sector for Nortel
Networks as Data Communications Network Engineer, and over five years' experience in the
international organization ECMWF (European Centre for Medium-range Weather Forecasts), working in
an operational environment for the management of IT projects, services and security.
He has gained, and actively maintains, several professional certifications covering broad aspects of
technical security and security management, as well as project management, change management, IT
audit, control, and service management.
Charles Brookson worked in the UK Department for Business, Innovation & Skills (BIS) for 12 years and is
a Professional Electronic Engineer. He was previously Head of Security at one2one (now T-Mobile UK)
for four years, and worked within British Telecom for twenty years before moving to one2one.
He is chairman of ETSI OCG Security, which is responsible for security standardization within ETSI. He
chaired the GSM Algorithm Group in 1986, has been Chairman of the GSM Association Security
Group (representing nearly 800 operators in 220 countries) for over 25 years, and has been involved in
GSM and 3GPP security standards throughout this time. He was also on the Permanent Stakeholders
Group of ENISA, the European Network and Information Security Agency.
He is now a Director of Zeata Security Ltd and Azenby Ltd, which offer consultancy in the mobile sector.
In 2015 Charles Brookson was awarded an OBE (Officer of the Order of the British Empire) for his
services to telecommunication security.
Each section introduces a specific technology and outlines ETSI's involvement in the standardization of
security in that area. Some of our major achievements are then highlighted and ongoing activities are
described. At the end of the paper, all ETSI's specifications and standards for security are listed. Listed
documents referenced in the text are indicated by a number in [ ].
Each ETSI document number in the list of publications at the end of this paper links to the ETSI
deliverable available online, from where the latest published version at the time of your search can be
downloaded, as well as any previous versions.
This seventh edition of the ETSI Security White Paper updates all areas as necessary. New publications
have been added, while a large number of previously referenced publications have undergone revision
and now have updated versions.
In the past few years threats to ICT in general have expanded and security has moved on apace, with
areas such as cybersecurity, cloud, mobile and the Internet of Things producing more requirements. We
have seen new demands created by the growing threat of criminal activity and by risks to critical
infrastructure. Sensitivity towards the privacy of citizens and their data is increasing with media
exposure of insecure governments and businesses, but it lags behind the rollout of new services and
applications which depend on a culture of openness and sharing.
As a response to such challenges, security standards are essential to ensure interoperability among
systems and networks, compliance with legislation and adequate levels of security. These standards
provide the means for protecting the user, creating a more secure and profitable environment for the
industrial sector, from SMEs to large global companies, and providing benefits for a diverse range of
interest groups that include government organizations, research bodies and universities.
ETSI is an independent, non-profit organization, with 25 years of experience of successfully pursuing its
mission to produce globally-applicable ICT standards. It has always maintained a strong focus on security
matters.
ETSI is committed to the establishment and continuous improvement of effective and interoperable
telecommunications systems for the benefit of the global community. Addressing security issues in all
ICT areas, protecting citizens in emergency scenarios, and combating global climate change by lowering
power consumption are examples which highlight some of ETSI's commitments. ETSI continues to
intensify its focus on matters related to security innovation by participating actively in EU security
research and innovation initiatives which aim to provide Europe, and the rest of the world, with the
tools necessary to create a secure environment for the global citizen.
Standardization activities carried out within various ETSI Technical Committees, Working Groups,
Industry Specification Groups and Partnership Projects cover a broad spectrum of security issues, of
which this White Paper provides an overview.
The main areas of work at ETSI related to security cover mobile/wireless communications, emergency
telecommunications, information technology infrastructure, smart cards, fixed communications and
security algorithms; underpinning all of these areas is important work on security design and analysis methods.
Some TCs work specifically within one of these areas, whereas the work of other TCs can overlap several
areas.
This complex and dynamic scenario creates a need for a reference group to both create security
standards and coordinate security matters across the ETSI work areas. For this reason ETSI has created
TC CYBER, with responsibilities for cyber security standardization, while at the same time inheriting the
former role of the Operational Coordination Group on Security (OCG SEC), which was then closed.
ETSI is intensively working in the Network Functions Virtualisation domain, for which security aspects
are of crucial importance.
This White Paper outlines ETSI's work in each of the security-related fields. A complete list of the
relevant publications for each field is included at the end of this document, followed by the reference
numbers of the documents added since the sixth edition of the ETSI Security White Paper published in
January 2014.
Key to ETSI Technical Committees (TCs) / Partnership Projects (PPs) / Industry Specification Groups
(ISGs)
3GPP (PP) Third Generation Partnership Project
MESA (closed PP) Mobility for Emergency and Safety Applications
AERO Aeronautics
ATTM Access, Terminals, Transmission and Multiplexing
BROADCAST Joint TC on broadcasting matters
CABLE Integrated broadband cable telecommunication networks
CYBER Cyber Security
DECT Digital Enhanced Cordless Telecommunications
E2NA (closed EP) End to End Network Architecture
EMTEL Special Committee on Emergency Telecommunications
ERM Electromagnetic Compatibility and Radio Spectrum Matters
ESI Electronic Signatures and Infrastructures
INS (closed ISG) Identity and Access Management for Networks and Services
ISI (ISG) Information Security Indicators
ITS Intelligent Transport Systems
LI Lawful Interception
MSG Mobile Standards Group
MTS Methods for Testing and Specification
ETSI Security Week is an ideal opportunity to learn about the full extent of ETSI’s security-related
standardization, as well as to foster debate and discussion with security experts from many other
organizations, helping to shape the future direction of ETSI’s security-related standardization.
www.etsi.org/SECURITYWEEK
The wireless infrastructure that terminals use to access the network makes these technologies
vulnerable to attack. Over the years, ETSI has developed a unique expertise in securing these forms of
communication, providing encryption techniques and fraud prevention mechanisms.
Security has been a major driver for the success of GSM. Specifications have been developed to prevent
terminal equipment theft, allow encryption and authentication, control payment for copyright material
downloading and respond to many other security threats. The general description of the security
functions can be found in [57].
Standardization work related to specific security aspects of GSM is currently being carried out within
ETSI's Railways Telecommunication committee (TC RT) and within the joint ETSI ERM/MSG group.
[Figure: GSM authentication and encryption. The network sends a RAND challenge; the SIM and the network both hold the subscriber key Ki. A3 computes the signed response SRES, which the network compares with its expected value; A8 derives the cipher key Kc; and A5 uses Kc to encrypt the data over the air interface.]
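The challenge-response flow in the figure above can be sketched in a few lines. This is only an illustration: the real A3 and A8 are operator-chosen algorithms (COMP128 historically, MILENAGE-based today), and A5 is a stream cipher; HMAC-SHA-256 is used here purely as a stand-in one-way function.

```python
import hmac, hashlib, os

def a3_sres(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for the operator-chosen A3: 32-bit signed response SRES."""
    return hmac.new(ki, b"A3" + rand, hashlib.sha256).digest()[:4]

def a8_kc(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A8: derives the 64-bit cipher key Kc."""
    return hmac.new(ki, b"A8" + rand, hashlib.sha256).digest()[:8]

# Network side: issue a challenge and compute the expected response.
ki = os.urandom(16)            # subscriber key, shared by SIM and network
rand = os.urandom(16)          # RAND challenge sent over the air
expected_sres = a3_sres(ki, rand)

# SIM side: compute SRES and Kc from the same Ki and RAND.
sres = a3_sres(ki, rand)
kc = a8_kc(ki, rand)

assert sres == expected_sres   # network admits the subscriber
# Kc now keys the A5 air-interface cipher on both sides.
```

Ki never leaves the SIM or the network's authentication centre; only RAND and SRES cross the air interface.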
IMEI - Mobile terminals are by their nature attractive objects, at great risk of theft, often described by
the acronym CRAVED (Concealable, Removable, Available, Valuable, Enjoyable and Disposable). ETSI
has created a set of standards (see [65] and [66]) which define a system to prevent handset theft,
based on a handset identity number called the International Mobile Equipment Identity (IMEI). This is
a unique number attributed during handset manufacturing, registered by the Mobile Network
Operator (MNO) and implemented into the mobile terminal. Using the IMEI, mobile equipment
declared as stolen can be blacklisted by the MNOs.
IMEI blacklisting is currently in operation, though not yet on a world-wide basis; stolen phones often
leave their original country for less developed countries where people cannot afford the price of a
new handset. Alternatively, to use a handset in the country where it was stolen, its IMEI can be
changed to an authorized value. To reduce handset theft, some countries have passed laws making
IMEI alteration illegal. In parallel, handset manufacturers are working to increase the IMEI's resistance to tampering.
The IMEI offers other benefits too: for example, certain handsets can be tracked by the network for
evaluation or other purposes. The IMEI is also useful for identifying makers of hoax emergency calls.
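A minimal sketch of an EIR-style (Equipment Identity Register) blacklist check follows. The blacklist contents and the helper names are illustrative only, not taken from the ETSI standards; the Luhn validation of the 15th check digit is a well-known property of the IMEI format.

```python
# Hypothetical EIR lookup: MNOs keep a blacklist of IMEIs reported
# stolen and refuse service to any equipment that matches.
STOLEN_IMEIS = {"490154203237518"}   # example IMEI, illustrative only

def luhn_check(imei: str) -> bool:
    """15-digit IMEIs carry a Luhn check digit; reject malformed values."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:          # double every second digit
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def admit_equipment(imei: str) -> bool:
    """Grant service only to well-formed, non-blacklisted equipment."""
    return luhn_check(imei) and imei not in STOLEN_IMEIS
```

A stolen handset with an unaltered IMEI is rejected at registration; this is exactly why criminals attempt IMEI alteration, and why some jurisdictions have criminalized it.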
FIGS - Fraud Information Gathering System (FIGS) is a method of monitoring a subscriber's activities to
limit the accumulation of large unpaid bills whilst roaming (see [1], [5], [14], [16] and [54]). FIGS allows
The UMTS security specifications developed in 3GPP build on the mechanisms used in the GSM
specifications. In addition, they offer numerous enhancements that include the following:
Authentication - To further enhance the security already present in GSM, 3GPP has adopted an
innovative authentication and key agreement protocol for UMTS. The protocol retains the framework
of the GSM authentication mechanism and provides additional features such as mutual
authentication, agreement on an integrity key between the user and the serving network, and
freshness assurance of agreed cipher key and integrity key. As in the GSM authentication mechanism,
the serving network authenticates the user by using authentication data (called authentication
vectors) transferred from the user's home network. In each authentication vector, a protected
sequence number is included, verified by the terminal's smart card (USIM) to achieve authentication
of the network by the user. There are also mechanisms for freshness assurance of agreed cipher and
integrity keys (see [28], [29], [30], [31], [34], [38], and [41]).
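The mutual-authentication flow described above can be sketched as follows. The functions f1/f2 here are HMAC-SHA-256 stand-ins for the MILENAGE functions, the anonymity-key concealment of SQN is omitted, and the field sizes are simplified; only the protocol shape (network proves itself via a MAC over a fresh sequence number, user answers the challenge) follows the text.

```python
import hmac, hashlib, os

def f(k: bytes, tag: bytes, *parts: bytes) -> bytes:
    """Stand-in for the MILENAGE f1..f5 functions (HMAC-SHA-256 here)."""
    return hmac.new(k, tag + b"".join(parts), hashlib.sha256).digest()

K = os.urandom(16)          # long-term key shared by USIM and home network
sqn_home, sqn_usim = 5, 4   # sequence numbers tracked on each side

# Home network builds one authentication vector.
rand = os.urandom(16)
sqn = sqn_home.to_bytes(6, "big")
mac  = f(K, b"f1", sqn, rand)[:8]     # network authentication code
xres = f(K, b"f2", rand)[:8]          # expected user response
autn = sqn + mac                      # (SQN would be concealed with AK)

# USIM verifies the network first, then answers the challenge.
sqn_rx, mac_rx = autn[:6], autn[6:]
assert hmac.compare_digest(mac_rx, f(K, b"f1", sqn_rx, rand)[:8])  # network is genuine
assert int.from_bytes(sqn_rx, "big") > sqn_usim                    # vector is fresh

res = f(K, b"f2", rand)[:8]
# Serving network compares the user's response with XRES from the vector.
assert res == xres
```

The sequence-number check is what GSM lacked: a replayed old vector fails the freshness test, so a false base station cannot impersonate the network indefinitely.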
Public Safety - 3GPP has invested significant effort in ensuring that emergency calls in UMTS are
always connected and has introduced various public safety functionalities.
Location services are also an important feature (see [70] to [73]). Several techniques have been
specified to improve the accuracy of the positioning, from the simple retrieval of the radio cell where
the mobile is located to the more advanced, assisted GPS positioning. In the specification work,
several ancillary aspects of location services have been addressed, such as privacy protection
for users and cases where public authorities need to trace mobile phones.
3GPP has also been working to enhance the capabilities of cell broadcast services to introduce the
MBMS (Multicast Broadcast Multimedia Service) (see [33]). This enables MNOs to transmit multimedia
content to a selected area of the mobile network, a feature of particular value when the need arises to
issue public warnings.
In 2012, SAGE announced its intention to develop a new 3G authentication and key agreement
algorithm set, as an alternative to MILENAGE. This new algorithm set, named TUAK (see the Algorithms
section), can be used by any operator for any USIM (Universal Subscriber Identity Module) or ISIM (IMS
(IP Multimedia Subsystem) Subscriber Identity Module). This provides options to operators as well as
resilience against future cryptanalysis of either algorithm set.
At the end of 2013 3GPP SA3 agreed to adopt this second algorithm set for the 3GPP authentication and
key generation functions, which include the algorithm specification itself, the implementers’ test data
and the design conformance test data.
LTE forms the basis of 3GPP Release 8, functionally frozen in December 2008. The security architecture
was defined by the 3GPP Services and System Aspects Working Group on Security, SA3.
Authentication and key agreement are based on UMTS AKA (Authentication and Key Agreement) which
is re-used for Evolved Packet Systems (EPS). Subscriber Identity Module (SIM, as used in GSM) access to
LTE is explicitly excluded and only Release 99 or later Universal Subscriber Identity Modules (USIMs) are
allowed.
As far as signalling protection is concerned, integrity and confidentiality protection of core network
signalling (Non-Access Stratum, NAS) terminates in the Mobility Management Entity (MME). Integrity
and confidentiality protection of the radio network signalling (Radio Resource Control, RRC)
is maintained over the radio path, i.e. between the UE (User Equipment), such as a mobile terminal,
and the eNode B (the base station in LTE technology), as is the encryption for User plane
protection. Network domain security is used to protect the internal interfaces.
Two sets of security algorithms were developed for LTE: one set is based on AES (Advanced Encryption
Standard) and the other on SNOW 3G. The principle adopted is that the two should be as different
from each other as possible, to prevent similar attacks from compromising them both. The ETSI
Security Algorithms Group of Experts (SAGE) is responsible for specifying the algorithms. The key length
is 128 bits, with the possibility of introducing 256-bit keys in the future if necessary. In 2011 a third
algorithm, ZUC, was approved for use in LTE (see the Algorithms section). The encryption algorithm 128-
EEA3 and the integrity algorithm 128-EIA3, both based on ZUC, were finalized in 2012.
LTE enables efficient interworking with legacy and non-3GPP networks. In this scenario, trust models
become more complex and a deeper key hierarchy than that used in UMTS is needed for LTE. A (one-
way) Key Derivation Function (KDF) is used for LTE. The extended key hierarchy also enables faster intra-
LTE handovers.
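The one-way KDF and the deeper key hierarchy can be sketched as below. The real 3GPP KDF (TS 33.220) is also HMAC-SHA-256, but over a precisely encoded (FC, parameter, length) input string; the key names here follow the EPS hierarchy, while the parameter encodings and the "target-cell-id" label are illustrative.

```python
import hmac, hashlib, os

def kdf(key: bytes, *params: bytes) -> bytes:
    """One-way key derivation (HMAC-SHA-256); a child key cannot be
    inverted to recover its parent."""
    return hmac.new(key, b"".join(params), hashlib.sha256).digest()

# Illustrative slice of the EPS key hierarchy.
k_asme = os.urandom(32)                       # anchor key held by the MME
k_enb  = kdf(k_asme, b"KeNB", b"\x00\x01")    # key handed to the serving eNode B

# Horizontal derivation at handover: the target eNode B receives a fresh
# key chained from the current one, without another round trip to the
# MME, which is what makes intra-LTE handovers fast.
k_enb_star = kdf(k_enb, b"NH-chain", b"target-cell-id")

assert k_enb != k_enb_star                    # each hop gets a distinct key
```

Because the derivation is one-way, a base station compromised after a handover learns nothing about the keys used before it, limiting the damage in the more complex trust models the text describes.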
Interworking with non-3GPP networks is based on EAP-AKA and its revised version EAP-AKA’, where the
EAP (Extensible Authentication Protocol) server is the 3GPP AAA server residing in the EPC. The EPC
provides a core network suitable for a higher-data-rate, lower-latency, packet-optimized system that
supports multiple Radio Access Technologies (RATs). EAP-AKA' is a small revision of the EAP-AKA method:
it comprises a new key derivation function that binds the keys derived within the method to the
name of the access network, and it employs SHA-256 instead of SHA-1 (SHA stands for Secure Hash
Algorithm). In circumstances where the non-3GPP network is untrusted, an IPsec tunnel is used.
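The network-name binding that distinguishes EAP-AKA' can be sketched as below. The real construction is the PRF' of RFC 5448 deriving CK' and IK'; here a single HMAC-SHA-256 call stands in for it, and the network-name strings are invented for illustration.

```python
import hmac, hashlib

def derive_access_key(ck: bytes, ik: bytes, network_name: bytes) -> bytes:
    """Sketch of the EAP-AKA' idea: fold the access-network name into
    the key derivation (stand-in for the RFC 5448 PRF')."""
    return hmac.new(ck + ik, b"EAP-AKA'" + network_name, hashlib.sha256).digest()

ck, ik = b"\x01" * 16, b"\x02" * 16          # AKA session keys (fixed for demo)
k_wlan  = derive_access_key(ck, ik, b"WLAN:example-operator")
k_other = derive_access_key(ck, ik, b"WLAN:rogue-network")

# Keys are bound to the network name, so material derived for one access
# network is useless if replayed through another.
assert k_wlan != k_other
```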
Ongoing activities
Several studies were ongoing at the time of publication of this white paper:
Normative work on the Security Assurance Methodology for 3GPP Network Elements with the
objective to develop a Security Assurance Specification (SAS) for the MME network product class.
Study on subscriber privacy to identify privacy requirements and risks, privacy risk mitigation
approaches, and to provide privacy guidelines and/or best practices for classes of 3GPP functions
and privacy protection. The overall goal of this study is to develop privacy guidelines that help to
address privacy issues in future 3GPP specifications.
Study on aspects of integration of Single Sign On frameworks with 3GPP networks, which aims to
investigate the security aspects of interworking between the operator-centric identity
management with the user-centric Web services provided outside of an operator’s domain.
Definition of security requirements and related solutions for Mission Critical Push To Talk over LTE
(MCPTT), an essential functionality of public safety communication systems.
Study on security issues to support Proximity Services (ProSe) and an evaluation of possible
technical solutions needed to support such services.
This type of access may be provided by the Public Land Mobile Network (PLMN) by means of elements
referred to as Home Node B (HNB) and Home (e) Node B (H(e)NB). The H(e)NB provides services to a
Closed Subscriber Group (CSG). CSG membership, including temporary membership, is managed by both
the CSG manager and the network operator.
3GPP has standardized the Home Node B and Home eNode B technologies. From the security point of view, an
important difference compared to a traditional UMTS or LTE architecture is that, while the Node B is an
element traditionally owned and controlled by the operator, the Home (e)Node B resides on the
customer’s premises.
3GPP standardized the security architecture which specifies how networks using H(e)NBs can ensure an
adequate level of security, and comply with regulatory requirements (see [80] to [82]). The threats that
can occur from this change are collected in the 3GPP document TR 33.820, where the countermeasures
are proposed.
A new feasibility study related to Public Safety has recently been completed: the Feasibility Study on
Security Aspects of Isolated E-UTRAN Operation for Public Safety (IOPS). Ensuring that Public Safety
users can continue to communicate in mission-critical situations, including when the fixed
infrastructure is compromised, is of the utmost importance. 3GPP will provide the necessary security
mechanisms for this isolated E-UTRAN scenario.
In this perspective the security requirements coming from ETSI TC TISPAN, CableLabs and 3GPP2 were
inserted into the IMS specifications. More specifically, several new normative annexes have been added
(see [27]):
NASS (Network Access Subsystem) - IMS Bundled Authentication (NBA), which was a contribution
by ETSI TC TISPAN;
SIP Digest - based authentication, also a contribution by ETSI TC TISPAN;
Access security with TLS (Transport Layer Security), which was a contribution by CableLabs;
3GPP2 Access, a contribution by 3GPP2.
A further annex illustrates the co-existence of authentication schemes. This annex explains how the
disparate authentication mechanisms should be handled in IMS: Full IMS AKA, GIBA (GPRS-IMS Bundled
Authentication), NBA, and SIP Digest.
In addition, IMS media plane security has been specified for real-time media [85]. The media plane can
be protected end-to-access-edge using SDES (Session Description Protocol Security Descriptions for
Media Streams). End-to-end media plane protection is also specified, either using SDES or using a Key
Management Server (KMS). Rel-12 will extend the media plane security to cover IMS Messaging, IMS
Conferencing and Communications Diversion.
In 2014 a report was published [91] on Security for Web Real Time Communication (WebRTC) access to
IMS. A WebRTC IMS client is a WebRTC-capable browser running a JavaScript application that allows a
user to access IMS services. The study focuses on the security mechanisms for authentication control
and media plane.
Ongoing activities
Several studies were ongoing at the time of publication of this white paper:
A new study on enhancements to WebRTC interoperability; the aim is to provide end-to-end
support for specific WebRTC capabilities to reduce the need for protocol conversions between
WebRTC and IMS protocols on the data channel
New work on IMS call spoofing detection and prevention
Relay Node Security is addressed for Release 10 in Annex D of a specification [81], where a solution for
Relay Node Security architecture and authentication is provided.
The basic idea behind the presented solution for RN security is to achieve a one-to-one binding
between an RN and a USIM, called the USIM-RN.
The TETRA standards evolved to counter the interoperability problems that emergency response teams
faced when communicating with each other, owing to the lack of standardization in their mobile radio
equipment, and thus created a common framework for digital radio in the private and emergency services
domains. The mission-critical effectiveness and operational efficiency of TETRA as a wireless
communications technology were demonstrated in the incident management that followed a number of
terrorist incidents across Europe (notably the Madrid railway bombings in 2004 and the London
transport bombings in 2005), in the logistic support to the Games of the XXVIII Olympiad in Athens in
2004, and in the management of the Games of the XXX Olympiad in London in the summer of 2012, to
which TETRA was key.
Based on digital, trunked radio technology, TETRA was designed as the next-generation standard for the
PMR and PAMR markets taking lessons learned from mobile radio, digital cellular telephony, paging and
wireless data to offer a multi-user secure radio system for group-centric communication.
Fraud prevention and confidentiality are critical to the success of radio mobile systems such as TETRA
because the air interface is open to being overheard or attacked if not protected. The security-related
functions of the standard comprise the following features (see also [92], [93] and [94]):
Mutual authentication - With mutual authentication over the air interface, a mobile station can check
if a network can be trusted before entering, and the TETRA system can control the access of a mobile
station. This mechanism offers guarantees against an attacker penetrating the network, thus assisting
in the prevention of fraud, Denial of Service (DoS) situations, spoofing and other forms of attack, while
at the same time ensuring correct billing and access as well as a secure data distribution channel. In
addition TETRA offers a Direct Mode Operation (DMO) where although an explicit authentication
mechanism is not available, the use of strictly managed shared Static Cipher Keys can provide implicit
mutual authentication.
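The bidirectional challenge-response described above can be sketched as below. The real TETRA authentication algorithms (the TAA1 set, distributed under licence) are not public; HMAC-SHA-256 is a stand-in, and the role labels and field sizes are illustrative.

```python
import hmac, hashlib, os

def auth_response(k: bytes, challenge: bytes, role: bytes) -> bytes:
    """Stand-in for the TETRA authentication algorithms (TAA1 in practice)."""
    return hmac.new(k, role + challenge, hashlib.sha256).digest()[:4]

k = os.urandom(16)     # secret shared by the mobile station and the network

# The network challenges the mobile station...
rand1 = os.urandom(10)
ms_res = auth_response(k, rand1, b"MS")            # computed by the MS
assert ms_res == auth_response(k, rand1, b"MS")    # verified by the network

# ...and the mobile station challenges the network in return, so each
# side proves knowledge of k: mutual authentication over the air interface.
rand2 = os.urandom(10)
net_res = auth_response(k, rand2, b"NET")          # computed by the network
assert net_res == auth_response(k, rand2, b"NET")  # verified by the MS
```

An attacker without k can answer neither challenge, which is what blocks the spoofing and fraudulent-access scenarios listed above.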
Encryption - As the air interface is vulnerable to eavesdropping, encryption is crucial to maintain
confidentiality of communications. Air interface security is intended to secure the connection between
mobile stations and the network. In addition, end-to-end security can be provided to offer a higher
level of security and is enabled in the TETRA standards. The use of several encryption algorithms, both
standard and proprietary, is supported, with work started in 2012 to develop a migration to a new set
of standard algorithms.
TETRA end-to-end security service is achieved by protecting information transmitted from one mobile
station to another, not only over the air interface but also within the network. The technical solution
can be customized to address particular requirements. As TETRA is implemented by diverse user
groups for many purposes, this feature is essential.
Ongoing activities
The security requirements for the second release, high data rate TETRA, have been published and are
under review and maintenance as TETRA-2 systems are deployed. In addition, as part of the ongoing
maintenance required to keep TETRA at the leading edge of security provision as networks move from
being voice-centric to data-centric, close liaison with 3GPP has been established on the use of LTE as a
complementary technology for public safety. Members of TETRA and 3GPP-LTE are working together to
add group call capabilities and a group security extension to LTE, as well as a public-safety-capable
direct mode to 3GPP-LTE.
DECT standardization started in CEPT, and was transferred into ETSI at its creation in 1988. Work today
is the responsibility of ETSI's DECT Technical Committee.
The combination of Time Division Multiple Access/Time Division Duplex (TDMA/TDD) digital radio
technology and dynamic channel selection with additional encryption techniques, authentication and
identification procedures makes DECT radio transmissions extremely secure against unauthorized radio
eavesdropping by third parties.
Ongoing activities
TC DECT is currently working on DECT Ultra Low Energy (ULE), a very promising technology for the
wireless machine-to-machine market segment. ULE keeps all the traditional strengths of the DECT
technology: freedom from interference, licence-free operation, high security (thanks to the recent
introduction of stronger algorithms), secure authentication and long range.
Security in RFID technology must prevent illicit tracking and cloning of tags. In addition, as RFID tags
carry relatively few computational resources within the tag itself, the use of standard cryptographic
techniques is infeasible; lighter encryption algorithms need to be created for the RFID tags.
In 2002, ETSI's Electromagnetic Compatibility and Radio Spectrum Matters Technical Committee (TC
ERM) established a Task Group (ERM TG34) to produce deliverables for future RFID technologies and
products. Two European Standards were published ([105] and [106]) and are updated regularly as
necessary. A Technical Report providing guidelines for the installation and commissioning of RFID
equipment at UHF was also published [107].
A number of recommendations for future work related to RFID security and the promotion of "privacy by
design" were developed by the three ESOs (ETSI, CEN and CENELEC) in 2011 in a Technical Report [108]
published by TISPAN WG7. This work is being actively pursued in a number of ETSI committees (TC ERM
TG34 and TC Human Factors (HF), while the privacy-by-design guidance is being further developed in
NTECH), in collaboration with partners in CEN.
ERM published a report [109] in 2011 on RFID Evaluation Tests. The report covers RFID Evaluation Tests
that were carried out to evaluate the characteristics and performance of RFID equipment operating at
their three principal frequencies of use. The information derived from these tests is directly relevant to
the work on RFID security and privacy by design mentioned above [108].
Ongoing activities
Efforts are being made to obtain an extension of the UHF frequency band used by RFID technology. The
enhanced band will serve a broader range of applications than just RFID, posing additional security
issues which will need to be addressed within any future related standards.
TC RRS has produced a Technical Report which identifies and defines user requirements for RRS in the
Public Safety and Defence domains [110].
Ongoing activities
TC RRS is working on the following documents:
A Technical Report to identify security related use cases and threats in the deployment and
operation of RRS. The following will be addressed:
use cases and related security threats which may impact the services provision and availability of
RRS networks and equipment;
use cases and related security threats which may cause wireless interference to incumbent
services;
use cases and related security threats to the functionality of download and activation of software
modules on RRS
A Technical Specification to address security requirements for RRS, by identifying available
countermeasures to security threats, as well as possible lack of such countermeasures.
It is important that satellite networks are able to offer IP network services that remain comparable to
and competitive with terrestrial services. These objectives require the development of satellite
standards to keep pace with the rapid evolution of the terrestrial IP network standards.
TC SES has published two Technical Specifications and a Technical Report on network security ([111],
[112] and [113]) in the area of broadband satellite multimedia services.
In addition, the committee's working group on geo-mobile radio interfaces, which is responsible for
standards on radio interfaces for geostationary earth orbit satellite access to the core network of GSM,
has undertaken work on the security of the interface and the services delivered through it ([114] to
[116]). Similarly, another working group within TC SES has produced a Technical Specification for
security aspects of the satellite component of UMTS [117]. The work of these two working groups was
merged recently, and progress continues currently in TC SES.
TC ITS Working Group 5 addresses the security aspects in ITS. This group is developing and maintaining a
suite of standards that are intended to enable secure inter-vehicle communication. The security features
in ITS have been developed from a risk analysis following the method established in [335] and extended
to address aspects of Privacy Impact Assessment (PIA) under the guidance published in [108].
The communications architectures and the services of ITS cover the range from ad-hoc all informed
vehicle-to-vehicle scenarios, through lightweight vehicle-to-infrastructure communication, to fully
managed IP and Cellular networks. The security architecture overlaid on these communications
architectures provides support to credential and identity management, privacy enhancing technologies
and applications, integrity protection, authentication and authorization.
[Figure: structure of the ITS security deliverables — TR 102 893 (ITS TVRA), TS 102 940 (ITS Security Architecture), TS 102 942 (Access Control) and TS 102 943 (Confidentiality).]
The nature of ad-hoc, all-informed networks poses a number of difficulties for key distribution when
enforcing and validating assertions of identity or authority. The solution designed for ITS, in the absence
of an always-available public key infrastructure, has been to adopt and extend IEEE 1609.2 public key
certificates and to take advantage of elliptic curve cryptography, which allows both implicitly and
explicitly signed certificates, in order to generate long sequences of "trusted" pseudonymous signing
keys that prevent transmitted data sequences from revealing long-term private data. Thus PII
(Personally Identifiable Information) is protected over long durations, restricting the risk of
unauthorized tracking of vehicles while maintaining the viability of the core safety applications that
require short-term tracking of vehicles.
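The pseudonym idea can be illustrated with a toy derivation: one secret seed yields a long sequence of short-lived identifiers that an outside observer cannot link together. This is only a sketch of the privacy property; real ITS pseudonyms are IEEE 1609.2 certificates over fresh elliptic-curve keys, not hash outputs, and the labels below are invented.

```python
import hashlib, os

def pseudonym_chain(seed: bytes, n: int) -> list:
    """Derive n pseudonym identifiers from one secret seed via a hash
    chain; without the seed, consecutive identifiers look unrelated."""
    out, state = [], seed
    for _ in range(n):
        state = hashlib.sha256(state).digest()        # advance the chain
        out.append(hashlib.sha256(b"id" + state).digest()[:8])
    return out

seed = os.urandom(32)         # long-term secret, never transmitted
ids = pseudonym_chain(seed, 5)

# Each broadcast period uses the next identity, so an eavesdropper can
# follow a vehicle briefly (as safety applications require) but cannot
# track it across pseudonym changes.
assert len(set(ids)) == 5     # distinct short-lived identities
```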
ETSI has agreed, along with other SDOs working in cooperative ITS (C-ITS), to adopt the IEEE 1609.2 data
set and to profile it to the specific protocols developed at ETSI. This has been developed to date in a
number of published documents including [118] representing a harmonized package of four TSs
comprising ITS Security Architecture and Security Management, Trust and Privacy Management, Access
Control and Confidentiality services in ITS communications [119], [120], [121], [122] (see the picture
above showing how the ITS documents are structured and related to each other).
Throughout the development of this work close collaboration between IEEE and ETSI has been
maintained.
One of the challenges in ETSI’s work in this area is to ensure global interoperability of the C-ITS
capabilities with other ITS domains hosted in the same device (be it a vehicle, a smartphone, or
embedded in roadside furniture). One of the areas being explored is ensuring a global data dictionary
for the security capabilities; to this end ETSI is working closely with IEEE, and with regulators in Europe
and the US through the US-EU ITS Standards Harmonization Taskforce, to produce a single ASN.1 data
syntax specification for ITS Security, with further harmonization of the semantics and behaviour of
protocols and services for security and privacy protection in the core ITS security deliverables.
Acceptance of ITS in the real world requires significant levels of pre-launch and pre-finalization testing and verification of capabilities. The development of Conformance Test Specifications was therefore begun under the leadership of the ETSI CTI (Centre for Testing and Interoperability), and has progressively produced deliverables that assure the interoperability of devices developed in compliance with [126]. These Conformance Test Specifications have been integrated with the existing ITS Validation Framework [127] and have been published as a multipart TS, [123], [124] and [125].
Further work is being undertaken, and will continue, to demonstrate the viability of the standards through a series of ITS Plugtests™ events, one of which was successfully carried out in November 2013, and through ongoing industry-wide validation and verification events.
Achievements
TC M2M published its Release 1 deliverables in 2011. This specification was later contributed as a foundation for the oneM2M Partnership Project established jointly by ARIB (Association of Radio
Industries and Businesses, Japan), ATIS (Alliance for Telecommunications Industry Solutions, US), CCSA
(China Communications Standards Association), ETSI, TIA (Telecommunications Industry Association,
US), TTA (Telecommunications Technology Association, Korea) and TTC (Telecommunication Technology
Committee, Japan), subsequently joined by TSDSI (Telecommunications Standards Development Society,
India). Release 2 of the ETSI specification is being maintained by ETSI TC SmartM2M, while the
responsibility for the development of new features to support M2M and IoT was transferred to
oneM2M. The specifications provide a range of standard mechanisms for mutual authentication, key
agreement and optional secure connection establishment between the Service layers of the
Devices/Gateways and the supporting M2M network infrastructure [128], [129] and [130]. Mechanisms
are provided to distribute security credentials to M2M devices in a variety of manners (independent
provisioning, derivation from pre-existing Access Network credentials, or independent infrastructure
assisted bootstrapping), in order to adapt to the needs of very diverse M2M applications and underlying
telecommunication technologies. The security of the M2M Service Layer may rely on the underlying
Access Network provided mechanisms when trusted, on secure channel establishment at the M2M
Service Layer (e.g. using TLS), or on data security provided at the object level. Release 2 brings
enhancements of main security measures in underlying networks to enhance the protection of security
information and facilitate its remote administration.
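One of the provisioning options described above, derivation of Service Layer credentials from pre-existing Access Network credentials, can be sketched with a simple key-derivation function. This is a hedged illustration: the function name, label format and identifiers are invented for this example and do not reproduce the actual ETSI M2M key hierarchy.

```python
import hashlib
import hmac

def derive_service_layer_key(access_network_key: bytes,
                             device_id: str, service_id: str) -> bytes:
    """Derive an M2M Service Layer key from a pre-existing Access Network
    credential. The label binds the key to a specific device and service
    so that keys for different nodes are independent (illustrative only)."""
    label = f"m2m-service-layer|{device_id}|{service_id}".encode()
    return hmac.new(access_network_key, label, hashlib.sha256).digest()

root = bytes(32)  # stand-in for an Access Network root credential
k_dev = derive_service_layer_key(root, "device-42", "metering")
k_gw  = derive_service_layer_key(root, "gateway-7", "metering")
assert k_dev != k_gw  # each node obtains its own service-layer key
```

The attraction of this approach is that no new credential distribution is needed when the underlying Access Network is trusted; the other options (independent provisioning, infrastructure-assisted bootstrapping) exist precisely for when it is not.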
oneM2M published its Release 1 in January 2015. This includes a Security Solutions specification [131],
supported by a Threat Analysis Technical Report, [132].
Ongoing activities
The further development of horizontal M2M specifications is now being pursued by ETSI within the
oneM2M Partnership Project. While the scope of the security aspects of oneM2M Release 1 aims to
address client-server based "one to many" industrial M2M deployments (with solutions similar to those
addressed by ETSI SmartM2M), work is under way in the oneM2M Security Working Group to address,
in future Releases, the complex challenges arising from distributed "many to many" dynamic IoT security
scenarios.
TC SmartM2M also investigates the security aspects of European mandates related to Smart Energy,
especially M/441 on Smart Metering and M/490 on Smart Grids, aiming at a harmonized security
approach across European energy infrastructures.
The standardization of Lawful Interception is vital to provide an economically and technically feasible
solution that complies with national and international conventions and legislation. ETSI has played a
leading role in LI standardization since 1991; today the work is concentrated in Technical Committee
Lawful Interception (TC LI), which has the active participation of the major telecom manufacturers,
network operators and regulatory authorities of Europe and from around the world.
ETSI's LI work covers the whole spectrum of interception aspects, from a logical overview of the entire
architecture and the generic intercepted data flow, to the service-specific details for e-mail and Internet,
and the requirements of law enforcement agencies. In recent years TC LI has intensified its efforts on the standardization of the handover of retained data.
NOTE: Handover in the context of LI and RD (Retained Data) means the passing of data from the Communication Service Provider (CSP) to the Authorized Organization (typically a Law Enforcement Agency (LEA)) and should not be confused with the term handover as used in cellular telephony.
Achievements
A major achievement of ETSI's work in this area has been the publication of the specifications for the
handover procedure: TS 101 671 [138] and ES 201 671 [133]. These specifications illustrate the flow that
the intercepted data should follow in telecommunication networks and services. In this context, they
specify the network or service protocols necessary to provide handover of lawfully intercepted data and
traffic, as well as the physical or logical point at which the interception has to take place (the handover
interface) both for packet data and circuit-switched communications.
Other ETSI Technical Committees play a major part in Lawful Interception as they have to ensure that LI
is enabled in the core specifications and able to deliver material for handover. TC LI therefore works in
close collaboration with those committees, notably TC NTECH, the committee in charge of maintaining
the specifications for Next Generation Networks (NGN) developed in TISPAN, as well as TC TETRA (who
have passed the maintenance of the TETRA LI specification to TC LI), 3GPP and TC ATTM ([153] to [159]).
The LI handover specifications are already widely used in a number of countries, being first adopted in
2003. Other countries are in the process of implementation or have expressed an interest in adopting
them.
ETSI TC LI has standardized the general requirement placed on European network operators, service
providers and access providers [134] who are obliged under provisions of the Framework Directive to
make available results of interception to the law enforcement agencies. Complementing these
requirements, a Technical Specification [139] relating to handover interfaces for the interception
provides guidance for law enforcement agencies on the cooperation required by network
operators/service providers with the lawful interception of telecommunications.
Two other TRs cover Lawful Interception of public Wireless LAN Internet Access [136] and the Lawful
Interception domain architecture for IP networks [137].
TC LI has produced a TR ([149]), defining a security framework for securing Lawful Interception and
Retained Data environment (see below) of the CSP and the Handover of the information.
A further TR [152] was published to provide generic recommendations for requests for handover and
delivery of real-time or stored information referred to as an eWarrant Implementation Interface.
ETSI organized two Plugtests™ events for TC LI in order to test the interoperability of equipment from different vendors against a number of TC LI specifications. During the first LI Plugtests event in 2006, the following TSs were tested: [142], [143] and [144]. During the second LI Plugtests event in 2007, the following TSs were tested: [138], [142], [143], [146] and [147]. The outcome of both events highlighted issues which were subsequently addressed by TC LI through a number of updates to the relevant publications.
Retained Data
Retained Data (RD) is another vital subject for TC LI. The ability for Law Enforcement Agencies to
request, and for operators and service providers to deliver, retained data is crucial. TC LI has produced
a TS [135] which deals with the requirements from Law Enforcement Agencies for the handling of
Retained Data. This document gives guidance for the delivery and associated issues of retained data of
telecommunications and subscribers. It provides a set of requirements relating to handover interfaces
for the retained traffic data and subscriber data by law enforcement and state security agencies and
other authorized requesting authorities.
A report [151] on System Architecture and Internal Interfaces for Retained Data elaborates on RD
system architecture and assigns and describes internal interfaces to specific services and functional
entities on the CSP side. It provides guidance on implementation issues that CSPs have to deal with. This
work was coordinated with TISPAN.
Ongoing activities
TC LI continues to maintain the suite of Lawful Interception and Retained Data publications by updating
them regularly.
Work is ongoing on a TS, expected to be published in 2015, providing a standardized mechanism for the
dynamic triggering and revocation of the interception of communications content to take account of the
increasingly dynamic configuration of CSPs and networks. This involves important security aspects, as
the dynamic triggering functions need to be carried out with adequate levels of security to protect them
from misuse or eavesdropping of the related commands. It is also essential that the triggering interface
does not impact the underlying security of the network or services being intercepted.
TC LI continues its work on RD/LI for Cloud Computing with two TRs to provide recommendations on
requests for handover and delivery of stored information associated with cloud/virtual services. The
reports, expected to be published in 2015, are intended to identify any RD/LI work necessary to ensure
that there are no technical obstacles in the converged cloud/virtual service environment to this aspect
of regulation, thus ensuring that RD/LI obligations can be maintained while allowing businesses to utilise
the advantages and innovations of Cloud Services.
TC LI started a new specification in 2012 to define an electronic interface between two systems for the
exchange of information relating to the establishment and management of Lawful Interception.
Typically this interface would be used between: on one side, a Communications Service Provider; and,
on the other side, a Government or Law Enforcement Agency who is entitled to request Lawful
Interception. While the published report [152] defines a generic interface for electronic Warrants, this document provides a specific and detailed example of one particular Warrant interface.
TC LI started a new specification in 2013 on the internal network interface X1 for LI (to be extended to
X2 and X3 at a later stage), which covers wide area connections between LI systems and (depending on
the network) several network elements from different vendors. Nearly every network element has its
own interface with different transport protocols, authentication (if any), encryption (if any), commands
etc. This makes every new connection highly complicated and costly, as the interfaces between
Administration and Mediation Functions are usually proprietary within an LI product. This work intends
to help solve such issues.
TC LI started a new specification in 2013 on security for LI and DR systems. The need for security (physical, hardware, software, personnel, cyber) is a fundamental requirement for all LI and DR systems. As networks become increasingly IP service centric, globally distributed and frequently software based, the need for good basic security becomes ever more important. The new TS will build on work already undertaken in other TC LI specifications and reports, and will use a threat-based model.
In 2014 TC LI began work on a new specification to create a dictionary of common parameters. This
specification aims to provide an easy way to introduce commonly used parameters in other
specifications, in order to reduce the amount of work involved in defining such parameters.
In 2014 TC LI began to revise the specification for a Lawful Interception interface originally developed
for ETSI’s TETRA (Terrestrial Trunked Radio) standard. This will be updated to address new capabilities of
TETRA related to LI and to bring the document in line with LI activity as a whole.
The committee began work on two new ETSI Special Reports in 2014. One will provide a guide to Lawful
Interception and Retained Data standards and concepts (including an evolutionary overview) and the
other will offer guidance on LI for LTE™ in the form of Frequently Asked Questions.
Liaisons and Cooperation - As noted earlier, effective cooperation between organizations and
committees working on Lawful Interception is imperative. TC LI works closely with other committees
outside and within ETSI, including:
ETSI’s Industry Specification Group on Network Functions Virtualisation (ISG NFV) to ensure the LI
requirements on NFV are met in an appropriate way
ETSI TC CYBER for cyber security matters related to the security of LI/RD interfaces and
information flow
The International Organization for Standardization (ISO) TC 204 on building an open dialogue on LI
matters
ETSI's Operational Coordination Group on Electronic Communications Networks and Services
Directives (OCG ECN&S) on an assessment of the consequences of the European Union ECN&S
regulatory viewpoint on the standardization of Next Generation Networks (NGN).
TC LI also collaborates closely with the LI group in the Third Generation Partnership Project (3GPP™)
(SA3-LI) on LI for the Universal Mobile Telecommunications System (UMTS) and the Global System for
Mobile Communication (GSM). By monitoring each other's activities, the groups ensure that their
respective LI specifications are aligned.
A digital signature is one form of electronic signature that uses a cryptographic transformation of the
data to allow the recipient of the data to prove the origin and integrity of the data, to protect against
forgery of the data by the recipient and to support signer non-repudiation.
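The definition above can be illustrated with a toy sketch: textbook RSA signing with tiny demonstration primes (far too small for real use, and without the padding a real scheme requires) combined with SHA-256. It shows the two properties the text names: only the private-key holder can produce the signature, and anyone holding the public key can check origin and integrity.

```python
import hashlib

# Textbook RSA with toy primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (kept secret)

def sign(message: bytes) -> int:
    """Hash the data, then apply the private key: only the signer can do this."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with the public key (n, e) can check origin and integrity."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"contract v1"
sig = sign(msg)
assert verify(msg, sig)                 # genuine signature is accepted
assert not verify(msg, (sig + 1) % n)   # a forged signature is rejected
```

Because the verifier recomputes the hash itself, any change to the data after signing breaks the check, which is what supports non-repudiation by the signer.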
Standards to support the use of digital signatures and public key certificates are a key driver in enabling
the successful evolution of electronic transactions. ETSI's Electronic Signatures and Infrastructures
Technical Committee (TC ESI) is responsible for standardization in the areas of digital signatures and
Public Key Infrastructure (PKI) to support electronic transactions in open environments.
ETSI's involvement in this area began in September 1996 with the provision, in cooperation with CEN, of specifications related to electronic signatures. Activities in this area intensified with the release of
the European Directive 1999/93/EC on a Community framework for electronic signatures. In December
2009 the European Commission issued a standardization mandate (M/460) aiming at achieving the
interoperability of electronic signatures throughout Europe, by providing a rationalized European
electronic signature standardization framework which will allow mutual recognition and cross-border
interoperability. In July 2014, Regulation (EU) No 910/2014 on electronic identification and trust services
for electronic transactions in the internal market was adopted, repealing the Directive 1999/93/EC. TC
ESI mainly works on standards, globally applicable, in support of this new Regulation.
Achievements
In 2012, CEN and ETSI worked on the definition of a rationalized framework for electronic signatures standardization, which was published as a Special Report (SR) [213]. Based on this SR and on its successor that TC ESI is working on (to be published as TR 119 000), the standards published in the past are being, and will continue to be, reviewed and step by step migrated to the new framework. This framework comprises the use of a common numbering for standards related to digital signatures in both CEN and ETSI. The number range is x19 yyy, with x being 0, 1, 2 or 3 for ETSI documents, and x being 4 for CEN documents. This rationalized framework is structured in six domains.
ETSI has organized several XAdES and CAdES Plugtests™ events since 2003. The first events were held face-to-face; since 2008 they have been run remotely. All the outcomes of the events were used as input to update the related deliverables as necessary.
In 2009 TC ESI published a set of profiles for PDF Advanced Electronic Signatures (PAdES) as a multipart TS. A TR [206] was also published, providing guidelines for the use and implementation of PAdES, including the possible techniques that may be used for printable representations of advanced electronic signatures.
In 2011 TC ESI published a TS [212] on Associated Signature containers, a package format for associating
advanced electronic signatures with one or more files to which the signature applies. A first ASiC
Remote Plugtests™ was held in 2014.
Four AdES baseline profiles for XAdES [215], PAdES [216], CAdES [217] and ASiC (Associated Signature
Containers) [218] were published in 2012. They are to be used for the interoperability of AdES signatures used
in electronic documents issued by competent authorities to be interchanged across borders in the
context of the EU Services Directive 2006/123/EC.
Conformance testing was carried out for the XAdES baseline profile [220], and test suites were produced for PAdES and ASiC interoperability [221] and [222]. In these areas, Plugtests™ events were organized by the ETSI Centre for Testing and Interoperability (CTI).
Cryptographic suites
ETSI TC ESI regularly updates a set of hash functions and asymmetric algorithms for Advanced Electronic
Signatures [230].
In 2014, TC ESI published a study on standardization requirements for electronic delivery applying
electronic signatures [232].
TC ESI published standards on the security of information preservation systems. The goal is to provide a common, objective and reliable basis both for preservation service providers to implement and manage secure Information Preservation Systems [210] and for assessors to measure whether these systems meet the quality requirements of the EU's Directive on services in the internal market (2006/123/EC) [211].
Interoperability activities
Recent Plugtests™ events and other interoperability related work carried out by the ETSI Centre for Testing and Interoperability (CTI) regarding TC ESI work include:
ASiC Plugtests™
Conformance Checker
ETSI CTI provides free online tools that perform numerous checks in order to verify the conformity of
XAdES baseline, CAdES and PAdES signatures, and ASiC containers.
Ongoing activities
ETSI ESI current work focuses on the execution of the European Commission (EC) Mandate on Electronic
Signature Standardization (M/460) for which CEN and ETSI are cooperating to ensure the alignment of
standards and avoid overlapping work. ETSI ESI is executing phase 2 of the mandate, i.e. implementing
the work programme as defined in [213]. The work covers:
Business Guidance documents on the use of electronic signature standards in each of the
Rationalized Structure areas
General Requirements on Policy and Conformity Assessment for Signature Creation and Validation
Signature creation & validation area: to align specifications with the rationalized framework,
addressing any gaps identified in the rationalized framework work plan, and progressing all the
relevant specifications to EN status. Work includes signature formats, signature creation and
validation procedures and signature policies
Requirements for Conformity Assessment Bodies (CAB) assessing Trust Service Providers, and revision of the related specification [231] into an EN
Requirements for Trust Service Providers issuing certificates
Testing compliance and interoperability: to develop a set of technical specifications and tools that act as catalysts for implementing essential standards of the framework, particularly for signature creation and verification. In addition, new Plugtests™ events are planned for CAdES, XAdES, PAdES and ASiC.
A stakeholders' mailing list has been set up to provide regular news and updates on the progress of the
execution of the mandate: Subscribe to the E-SIGNATURES_NEWS mailing list.
TC ESI also plans to initiate new work in order to complete the framework of standards. New activities
will address in particular e-delivery, a preliminary study addressing the future standardization of the
preservation of digital signatures, and work to be done in the context of issuance of certificates as trust
services and on related TSP activities.
Achievements
Accomplishments include algorithms for 3GPP [264], DECT, GSM and TETRA ([245] to [249], [259] and
[260]), audiovisual services ([236], [237]), GPRS and Universal Personal Telecommunications (UPT) [242].
SAGE also collaborates with other ETSI committees to produce encryption algorithms.
All of the standardized security algorithms for UMTS were developed by SAGE (for an overview of the overall algorithm mechanisms in UMTS, see [40]):
The initial set of algorithms for the UMTS radio interface (UTRA) - UEA1 and UIA1 - was developed
by SAGE in collaboration with the 3GPP Organizational Partners. UEA1 is the standard encryption
algorithm, and UIA1 is the standard integrity algorithm; both are based on the Kasumi block
cipher, also designed by SAGE (as a variation of Mitsubishi's MISTY1 algorithm). The specifications
for the algorithms (which are only for the development and operation of 3G mobile
communications and services) can be found in TS 135 201 ([45], see also [46] to [48]).
SAGE also developed a second set of algorithms, UEA2 for encryption and UIA2 for integrity, again
in collaboration with 3GPP. These algorithms are based on the SNOW 3G stream cipher, which
was in turn developed by SAGE as a variant of the public domain cipher SNOW 2.0 ([265] to
[269]). A strong motivation for the design is that the algorithms should be fundamentally different
in nature from UEA1 and UIA1, so that any new advances in the science of cryptanalysis are
unlikely to impact both sets of algorithms.
SAGE was also responsible for the specification of the Milenage algorithm set, an example
algorithm set for the UMTS authentication and key generation functions f1, f1*, f2, f3, f4, f5 and
f5* ([49] to [53]).
SAGE has completed the design and specification of a second example algorithm set for the UMTS
authentication and key generation functions, with a view particularly to the nascent embedded
UICC market where it may be desirable to have two algorithms built in as options. This second
algorithm is called TUAK, and is based on the public KECCAK hash function family (which will also
serve as the SHA-3 hash function standard). TUAK is currently being put forward for publication as
a 3GPP Technical Specification.
SAGE has also specified standardized cryptographic algorithms for the Long Term Evolution (LTE™)
mobile radio access architecture. Two sets of algorithms have been defined for the radio interface:
The encryption algorithm 128-EEA1 (EPS Encryption Algorithm 1) and the integrity algorithm 128-
EIA1 (EPS Integrity Algorithm 1) are identical to the UMTS algorithms UEA2 and UIA2, with a
defined mapping of LTE parameters onto UMTS parameters.
The encryption algorithm 128-EEA2 and the integrity algorithm 128-EIA2 are both modes of
operation of the Advanced Encryption Standard (AES), with SAGE specifying the precise details
and generating test vectors.
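The counter-mode construction underlying 128-EEA2 can be sketched generically. Note the hedge: Python's standard library has no AES, so HMAC-SHA256 stands in for the block cipher here purely to illustrate the mode of operation; the real algorithm applies AES to a counter block built from LTE-specific parameters, and SAGE's published test vectors define the exact details.

```python
import hashlib
import hmac

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter mode: encrypt successive counter values to build a keystream.
    HMAC-SHA256 is a stand-in PRF; 128-EEA2 uses AES for this step."""
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(4, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def ctr_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; the same call both encrypts
    and decrypts, since XOR is its own inverse."""
    ks = ctr_keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"\x01" * 16, b"\x02" * 8
ct = ctr_crypt(key, nonce, b"user plane payload")
assert ctr_crypt(key, nonce, ct) == b"user plane payload"
```

The companion integrity algorithm 128-EIA2 is not a keystream at all but a MAC (AES-CMAC) computed over the message, which is why the two algorithms together give both confidentiality and integrity.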
Other achievements include the design of encryption algorithms for GSM, EDGE and GPRS (A5/3 and A5/4 for GSM and EDGE; GEA3 and GEA4 for GPRS), which provide users of GSM mobile phones with a
higher level of protection against eavesdropping than previously available. A5/4 and GEA4 [64] use 128-
bit cipher keys, longer than is available in traditional GSM, so these rely on recently standardized
changes to other network nodes to deliver a 128-bit key. Again, all of these algorithms were developed
in collaboration with the 3GPP organizational partners ([59] to [63], [254], [255] and [262]), and are
closely based on the UMTS algorithm UEA1. A5/3 is now being adopted by some operators within their
networks.
Some of the earlier work of SAGE is not publicly available, although most algorithms produced in recent
years have been made public. Their implementation is generally subject to a license which restricts their
utilization to the equipment or service for which they have been designed. ETSI acts as a Custodian for
the algorithms and is responsible for the distribution and licensing of the confidential information and
documents.
The Internet has become a critical infrastructure for both businesses and individual users. Growing
dependence on networked digital systems has brought with it an increase in both the variety and
quantity of cyber-threats. The different methods governing secure communication in the various
Member States of the European Union, as well as beyond Europe, sometimes make it difficult to assess
the respective risks and to ensure adequate security. Building on its world-leading expertise in the
security of Information and Communications Technologies (ICT), ETSI set up a new cyber security
committee (TC CYBER) to meet the growing demand for standards to protect the Internet and the
communications and business it carries.
TC CYBER is working closely with relevant stakeholders to develop appropriate standards to increase
privacy and security for organizations and citizens across Europe. The committee is looking in particular
at the security of infrastructure, devices, services and protocols, as well as security tools and techniques
to ensure security. It offers security advice and guidance to users, manufacturers and network and
infrastructure operators.
Among related duties, TC CYBER is in charge of coordinating work with other groups within and outside ETSI, such as SDOs and other groups/platforms, and is also responsible for providing and coordinating answers to policy requests on cyber security and ICT security in a broad sense.
There was enormous support for the establishment of the new committee, with more than 50 members
from industry and academia attending the first meeting in May 2014 to draw up a plan of work.
TC CYBER began work on eight ETSI Technical Reports (TRs) and an ETSI Guide (EG).
Achievements
TC CYBER published a report in May 2015 to provide guidance related to the provision of Security
Assurance by Default by means of Critical Security Controls for Effective Cyber Defence [233].
Ongoing activities
Listed below are the documents TC CYBER is working on at the time of publication of this paper. Several are expected to be published by summer/autumn 2015, and new work items are expected to be created.
To be published as TR 103 303: "Protection measures for ICT in the context of Critical
Infrastructure" to provide guidance for the deployment of security technologies and security
management to deliver and maintain effective Critical Infrastructures that are reliant on ICT
technology
To be published as TR 103 304: "PII Protection and Retention" as guidance for the protection and
retention of PII (Personally Identifiable Information)
To be published as TR 103 306: "Global Cyber Security Ecosystem" to provide an overview of
cyber security work being undertaken in multiple forums worldwide
To be published as TR 103 307: "Security Aspects for LI and RD interfaces" to provide guidance to protect information flows and interfaces from a security perspective (confidentiality, integrity and availability)
ETSI's ISG NFV-SEC WG addresses the security aspects of the overall mission of ISG NFV. The security challenges of moving to a virtualised, software-based network cannot be overstated, and it is the role of NFV-SEC to quantify and qualify the risks of NFV and to identify strategies and models for giving assured security to the future networks that will be built from NFV and SDN (Software Defined Network) capabilities.
Addressing the security issues of NFV in the context of the ISG has been undertaken through an analysis
of the problems that NFV introduces when compared to conventional networking architectures. The
output of this has been captured in the specification [234]. The continuing work programme of NFV
security has taken many of the key problems identified in [234] and expanded them to identify key
system requirements. This has initially led to a review of trust in the NFV environment published as
specification [235]. In turn this has led to the current work programme of the ISG that is expanding
discussion on multi-level authorization and authentication that ensures separation of functions from a
conventional super-user/user type of operating system model to allow for functions to be enabled on
hardware that may be run by another body.
The working practices in the ISG encourage close co-operation with other bodies, with the ISG providing
a framework and guiding other bodies to develop solutions. In this way the ISG enables new work for
the platform without developing potentially competing standards. Thus the ISG has expanded the problem statement in collaboration with members of ETSI TC LI to address the issue of dealing with LI in an NFV environment. A similar expansion of problems and their resolution is being pursued alongside 3GPP SA3 and ETSI TC CYBER, covering such issues as root of trust, privacy protection, key management for transient platforms, accountability (including non-repudiation services) and system integrity assurance for a highly dynamic network configuration. The latter is one issue introduced by the
ability to virtualise functions that have required specialised hardware in previous technologies, but that
through virtualisation are able to run on general purpose hardware. Hence the concept of a fixed system
architecture is challenged by the possibility to have multiple concurrent system architectures running on
a common hardware platform. Such models present a new challenge to security as the conventional
thinking of function mapped to specific and specialised hardware has been broken and the assurances of
secure partitioning need to be reinvented.
One broad area of standards innovation addressed in the ISG is collaboration with the open-source
community. The intent of this is to drive both standards and open source closer together without losing
the strengths of either approach. Within the ISG this has led to close involvement with communities
across the IETF and W3C and the development of a programme of standards concept verification
through the Proof of Concept (PoC) programme that is managed through ETSI’s CTI (Centre for Testing
and Interoperability) group of experts. In due course many of the core concepts of protection of NFV will
be evaluated in such PoC programmes and the results fed back to further develop ETSI’s ISG NFV
specifications.
Quantum cryptography has great potential to become the key technology for securing confidentiality
and privacy of communication in the future ICT world and thus to become the driver for the success of a
series of services in the field of e-government, e-commerce, e-health, transmission of biometric data,
intelligent transport systems and many others. Its power stems from the fact that quantum
communication allows for a new primitive, which permits two parties to establish a secret key from a
short pre-shared secret and a public exchange, i.e. something which was never possible with classical,
non-quantum means.
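The new primitive described above, key establishment over a quantum channel plus public discussion, can be illustrated with a toy classical simulation of BB84-style basis sifting. This is a pedagogical sketch only (no eavesdropper, no channel noise, no error correction or privacy amplification), and all function names are invented for the example.

```python
import random

def bb84_sift(n_qubits: int, seed: int = 1) -> tuple[list[int], list[int]]:
    """Toy BB84 sifting: Alice sends random bits in random bases, Bob
    measures in random bases, and both keep only the positions where
    their bases happened to match (agreed over the public channel)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_qubits)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_qubits)]
    # Matching basis: Bob reads Alice's bit; otherwise his outcome is random.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    keep = [i for i in range(n_qubits) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

alice_key, bob_key = bb84_sift(64)
assert alice_key == bob_key  # identical sifted keys on a noiseless channel
```

What the simulation cannot show is the physical part of the guarantee: on a real quantum channel, an eavesdropper measuring in the wrong basis disturbs the qubits and reveals herself through an elevated error rate in the sifted key.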
ETSI's QKD Industry Specification Group develops ETSI Group Specifications (GSs) that describe quantum cryptography for ICT networks. Quantum Key Distribution is essential for the use of quantum cryptography on a broad basis. It is the main task of the ISG QKD to specify a system for Quantum Key Distribution and its environment.
Achievements
Below is the list of the publications of the ETSI ISG QKD.
A catalogue of security and other functional requirements for different user groups and different
fields of application [274];
A definition of properties of components and internal interfaces of QKD Systems [275];
An evaluation of application interfaces over which a QKD system is attached to a cryptographic
information and communication technology (ICT) system [276];
A study and systematization of existing security proofs to serve as a reference textbook for
assessing the capabilities of different QKD systems and constructing respective requirements and
evaluation criteria for practical security evaluation [277];
A generic security specification for QKD systems, based on a thorough security analysis in which
the steps foreseen in the Common Criteria (ISO/IEC 15408) have been carried out. This document
enables QKD system developers to assess their systems in terms of security evaluation and
accreditation for qualified use [278].
Ongoing activities
The current work of the ISG QKD is on the following four specifications which focus on:
A common ontology, vocabulary and terms of reference for the quantum cryptography domain
Aspects of the design, construction, characterization and operation of QKD systems that are
intended to protect against Trojan horse attacks
Characterization of optical components for use in QKD systems
Characteristics of QKD devices and of the required communication channels in the context of QKD
deployment on a point-to-point link.
The ISG QSC aims to make recommendations on core cryptographic primitives and develop ETSI Group
Specifications (GSs) for quantum-safe ICT applications highlighted by industry. It also aims to offer
practical advice and guidance to industry on real-world deployment issues, such as transition timescales,
generic requirements from operators or vendors, assessment of threats and risks.
Ongoing activities
Listed below are the first five Group Specifications that the ISG QSC started to work on following
its first meeting in March 2015:
Quantum safe algorithmic framework (to be published as GS QSC 001 in 2016)
Cryptographic primitive characterization (to be published as GS QSC 002 in 2017)
Cryptographic primitive suitability assessment (to be published as GS QSC 003 in 2016)
Quantum safe threat assessment (to be published as GS QSC 004 in 2016)
Quantum safe standards assessment (to be published as GS QSC 005 in 2016).
Having achieved its mission, ISG INS was closed in 2014.
Achievements
ISG INS published the following set of GSs to support interoperability and incorporate privacy into the
telecoms services and networks domain. The objectives of the specifications are:
On IdM interoperability between operators, or between ISPs and enterprises, to provide
mechanisms, interfaces and protocols allowing third-party providers to retrieve attributes through
the operator [279]
To provide requirements on the use and application of distributed policy management, decision
and enforcement in a hybrid environment (operator and services domains) [280]
To analyse the telecommunication operator's role acting as Identity Broker to facilitate the anchor
functionalities for the management of distributed user profile information. To also define the
protocol and data model required to access the user profile information via the Identity Broker
[281]
To analyse mechanisms, protocols and procedures to allow federation establishment based on
dynamic SLA negotiations. To identify gaps regarding the definition of formal SLA exchange,
attributes and privacy issues associated, and dynamic negotiation protocols [282]
To provide the requirements on the enforcement of policies in a distributed environment
supporting interoperability between different players [283]
To identify the need for a Global, Distributed Discovery Mechanism and to provide a gap analysis
for global distributed discovery of identifiers, providers and capabilities [284]
To create a generic architecture for distributed access control and enforcement. It includes the
identification of the necessary identities as well as a general description of their interface and
related interactions [285].
To provide security and privacy requirements for distributed network monitoring in order to
identify gaps regarding distributed processing and computation, protocols, and anonymized data
exchange [286].
To identify the requirements to develop a globally distributed discovery of identifiers, providers
and capabilities [287].
- Develop and build up a full set of Information Security Indicators (to become an ETSI Group
Specification) that will be the basis for further state-of-the-art figures;
- Select the relevant Priority One Indicators (with a detailed description in compliance with ISO 27004);
- Develop an underlying Security Event Classification Model (to become an ETSI Group Specification),
linked and consistent with the set of IS Indicators;
- Define a possible implementation of a subset of Indicators, with definition of the relevant monitoring
tools and/or methods (with the goal to become an ETSI Group Specification);
- Encourage innovation and pragmatism by inviting contributions from both user companies and
providers, towards developing a common reference draft.
ETSI ISG ISI, created in 2011, has been working on a first set of five GSs, four of which have been
published:
GS ISI 001, published in two parts [288] and [289], to provide ISI indicators as a way to assess the
level of effectiveness of security measures, together with an accompanying guide
GS ISI 002, published [290], closely linked to GS ISI 001: a comprehensive security event
classification model covering incidents, vulnerabilities and nonconformities (with detailed
taxonomy and representation)
GS ISI 003, published [291], to assess the maturity level regarding overall event detection through
dedicated KPIs (technology/people/process) and to weigh event detection results
GS ISI 004, published [292], to demonstrate through examples how to produce indicators and how
to detect the related events with various means and methods (with categories of use
cases/symptoms)
GS ISI 005 to propose a way to produce security events and to test the effectiveness of existing
detection means, expected to be published in 2015.
Smart cards are an important enabler in applications where a user's credentials (e.g. a private key in a
PKI scheme or a biometric template) are used for authentication and secure communication. A card may
perform security-related processing and it may be certified both from the platform and the application
standpoints. The card may require a user's Personal Identification Number (PIN) or biometric sample in
order to perform any tasks, thus minimising the risk of security breaches associated with sending
authentication data over computer networks. Smart cards are used in a wide range of applications in the
banking, ID and telecom worlds, among others. Access control, payments, network authentication,
electronic purses, storage of confidential information, loyalty and ticketing are further examples of
common smart card applications.
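The on-card PIN check mentioned above is typically triggered by an ISO/IEC 7816-4 VERIFY command APDU sent from the terminal to the card. The sketch below builds such an APDU; the P2 reference qualifier (0x01) and the 0xFF padding to eight bytes are illustrative assumptions following common telecom-card practice, and vary between card applications:

```python
def build_verify_apdu(pin: str, reference: int = 0x01) -> bytes:
    """Build an ISO/IEC 7816-4 VERIFY command APDU carrying a PIN.

    Illustrative sketch: the reference qualifier (P2) and the 0xFF padding
    of the PIN field to 8 bytes are assumptions that differ per application.
    """
    if not (4 <= len(pin) <= 8 and pin.isdigit()):
        raise ValueError("PIN must be 4-8 digits")
    data = pin.encode("ascii").ljust(8, b"\xff")  # pad PIN field to 8 bytes
    header = bytes([
        0x00,        # CLA: interindustry class
        0x20,        # INS: VERIFY
        0x00,        # P1: always 0 for VERIFY
        reference,   # P2: which PIN / reference data to check
        len(data),   # Lc: length of the command data field
    ])
    return header + data

apdu = build_verify_apdu("1234")
print(apdu.hex())  # 002000010831323334ffffffff
```

Because the comparison happens inside the card, the reference PIN never leaves the tamper-resistant chip; only the candidate value travels over the terminal interface.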
Like any other device, a smart card may be vulnerable to physical attacks. However, such attacks are
very unlikely to succeed without the use of very advanced and complex technology, as a range of strong
security features and countermeasures are usually implemented to prevent unauthorized access to
smart cards and tampering with the data they contain.
Achievements
ETSI standardized the Subscriber Identity Module (SIM) card for GSM, which is one of the most widely
deployed smart cards ever. The concepts developed in the GSM specifications have also been imported
into the 3GPP specifications to create the USIM (Universal SIM) card used in UMTS. The UICC platform
can also be used in TETRA and 3GPP2 systems.
An important milestone in the evolution of the Smart Card Platform was the completion in 2008 of
Release 7 of all specifications with the addition of two new interfaces to the card with respect to the
legacy ISO/IEC 7816-based interface, as well as new logical interfaces enabling IP and HTTP connectivity.
A Secure Channel was also specified to be used over either the legacy or the high-speed interface. Since
then TC SCP has further evolved and enhanced the Smart Card Platform specifications, with a special
focus on delivering test specifications for all the new features. Release 12 will also be a milestone with
the current work on embedded UICC.
Ongoing activities
In addition to the maintenance of the existing set of specifications, the committee is focusing on
delivering new specifications, notably for the embedded UICC.
The converged, access-independent network model popularly referred to as Next Generation
Networks (NGN) is based on the extensive use of IP, and is designed to accommodate the
diversity of applications inherent in emerging broadband technologies. ETSI is already heavily
committed to, and is well advanced in, developing the necessary standards to bridge disparate networks
and domains and enable them to interoperate. The work on NGN was addressed in the ETSI Project
on End-to-End Network Architectures (EP E2NA), which aimed to foster an environment in which the
technologies and standards necessary to achieve inclusive end-to-end communication are managed. In
addition, TC NTECH (Network Technologies) worked on NGN standardization.
EP E2NA aimed to bring together a wide set of standards and technologies to ensure they integrate to
deliver end-to-end services. The driver behind this is to ensure that technologies which enable any
part of the end-to-end chain are driven by a common set of requirements and are able to plug into a
common architecture. The scope of this includes security.
Achievements
The aim of EP E2NA and TC NTECH was to start from the platform established by TC TISPAN WG7 in
establishing the security requirements for the subsystems of Next Generation Networks [330] and the
related Security Design Guides ([324], [325] and [326]). This work references the guidelines on the use of
the Common Criteria for the evaluation of IT security (ISO/IEC 15408). The Common Criteria are a set of
drivers to be used as the basis for the evaluation of security properties of IT products and systems,
establishing the framework for an IT security evaluation that is meaningful to a wide audience. The
Common Criteria primarily address the protection of information from unauthorized disclosure,
modification or loss.
One of the Design Guides [324] provides guidelines for the preparation of Protection Profiles. A
Protection Profile defines an implementation-independent set of security requirements for a category of
communication equipment or system which is subject to evaluation under the Common Criteria. The
Protection Profile relevant to an ICT product could be used without modification to specify the security
requirements of a specific product or service. This ETSI Standard describes the steps necessary to create
such a Protection Profile.
Additional guidance on the preparation of Security Targets (STs) based upon ETSI communication
standards was also provided. The concept of Common Criteria evaluation involves the preparation of an
ST that specifies the security requirements for an identified IT product and describes the functional and
assurance security measures offered by that component to meet the stated requirements.
A TR [327] provides an analysis of the security provisions made in IPv6 and outlines how they may be
used to support the implementation of Public Key Infrastructure (PKI) solutions and the further
deployment of IPv6 and IP security (IPsec).
The challenge of security in Next Generation Networks was addressed with an analysis of risks and
threats [332] and by defining an extensible NGN security architecture [333].
Other publications include an ETSI Guide on the application of security countermeasures to service
capabilities [336], an analysis of security mechanisms for customer networks connected to TISPAN NGN
Release 2 [337], a feasibility study on Media Security in TISPAN NGN [338], a NAT traversal feasibility
study report [339], a feasibility study on the prevention of unsolicited communication in NGN [340], a
report on the security of identity in NGN [341], and a report on the application of ISO 15408-2
requirements to ETSI standards (method and application with examples) [342].
For the purposes of Lawful Interception and retained data, TC TISPAN identified appropriate interfaces,
reference points and entities in the NGN architecture (see TS [331]) and published a TR [344].
Publications include:
A report [345] on a feasibility study on IPTV security architecture, which details study models and
key management systems for service protection, a study of functional entities and mechanisms
for service protection and a study of a framework open to the integration of content protection
solutions;
A report [346] on prevention of unsolicited communication in the NGN, which addresses the
methodologies for preventing the terminating party from receiving Unsolicited Communication
(UC) and also addresses the legal implications. A UC detection & handling framework for entities
of the NGN is proposed;
A specification [347] on Identity Protection (Protection Profile) to provide countermeasures in
order to assure that users of the NGN have protection from abuse of identity. It covers Identity
protection, authentication and integrity, and defines credential management.
Ongoing activities
The work in TC NTECH is based on a new vision of future networks with greater autonomy,
virtualization, and changes to the ways in which services are both used and offered to customers that
modify the single vision of threat and countermeasures to a much more nuanced protection model for
users and providers.
EMTEL
Special Committee on Emergency Telecommunications (EMTEL) is the focal point in ETSI for the
coordination and collection of European requirements for emergency service communications. The
committee's scope includes issues related to user needs, network architectures, network resilience,
contingency planning, priority communications, priority access technologies and network management,
national security and Public Protection and Disaster Relief (PPDR).
EMTEL works mainly with ETSI TCs such as NTECH, previously HF and MSG, 3GPP and entities such as the
European Commission COCOM EGEA (Committee on Communications Expert Group on Emergency
Access), European Emergency Number Association (EENA), IETF Working Group on Emergency Context
Resolution with Internet Technologies (IETF-ECRIT), ITU-T and European projects.
Achievements
Public protection and emergency preparedness is a key topic for EMTEL, and the committee has
examined communications networks and the requirements for telecommunication and data
transmission to enable the efficient functioning of the emergency services in response to disasters.
These studies have resulted in the publication of a series of related TRs ([355], [356], [357] and [358]).
EMTEL has also contributed to the Public Warning System, as initially defined by 3GPP, with European
requirements [359].
Ongoing activities
Four main EMTEL deliverables are considered as key documents and therefore regularly revised: [351]
on emergency call handling, [352] on the European regulations covering communication during
emergency situations, [353] on communications between authorities and organizations and [354] on
communications from authorities/organizations to individuals, groups or the general public during
emergencies.
Having achieved its mission, Project MESA was closed at the end of 2010.
MESA-capable communications systems directly improve the effectiveness of law enforcement, disaster
response, fire fighting, peacekeeping and emergency medical services. Typical applications include the
sending of vital information about operators, the transmission of building maps and plans, video
monitoring, robotic control, suspect identification and the sensing of hazardous material. To provide a
speedier solution than the development of brand new technologies, Project MESA adopted a ‘System of
Systems' approach, which involves linking together a variety of existing and foreseen technologies and
systems. The key factor is interoperability.
Project MESA's Service Specification Group published the system technical requirements ([360] to
[362]), whilst the MESA Technical Specification Group published a system overview [363], a system and
network architecture document [364] and the system's functional requirements definition [365].
In this context safety aspects are taken into account in cooperation with the European Aviation Safety
Agency (EASA).
The SES interoperability regulation gives standards a central role in achieving the objectives of the SES
Programme, and requires the production and adoption of European Standards (ENs) to be referenced in
the Official Journal of the European Union as "Community Specifications", which are always drafted in
response to requests ("Mandates") from the European Commission.
Safety requirements are essential requirements of the interoperability regulation and therefore safety
aspects are extensively considered. TC AERO has already produced "Community Specifications" on
A-CDM (Airport Collaborative Decision Making), A-SMGCS (Advanced Surface Movement Guidance and
Control System) and DLS (Data Link Services). In addition, a task force between TC ERM and TC AERO is
responsible for harmonized standards for ground Air Traffic Management (ATM) equipment.
ETSI is performing security work in this area in its Joint Technical Committee (JTC) Broadcast, which
brings the Institute together with the European Broadcasting Union (EBU) and the European Committee
for Electrotechnical Standardization (CENELEC). JTC Broadcast coordinates the drafting of standards for
broadcasting and related fields.
Two areas in which JTC Broadcast is involved address specific security features: TV-Anytime and the
Digital Video Broadcasting (DVB) Project. TV-Anytime is a set of specifications for the controlled delivery
of multimedia content to a user's personal device (Personal Video Recorder). It seeks to exploit the
evolution in the convenient, high capacity storage of digital information to provide consumers with a
highly personalized TV experience. Users will have access to content from a wide variety of sources,
tailored to their needs and personal preferences. ETSI standards for TV-Anytime have been developed in
JTC Broadcast, based on proposals from the TV-Anytime Forum, which has now closed after publishing
the TV-Anytime specifications. The documents are regularly updated by JTC Broadcast.
The DVB Project is an industry-led consortium of over 200 broadcasters, manufacturers, network
operators, software developers, regulatory bodies and others from around the world, committed to
designing global specifications for the delivery of digital television and data services. ETSI standards for
DVB systems are developed in JTC Broadcast, based on proposals from the DVB Project.
Achievements
A major achievement of the DVB Project has been the release of the DVB Common Scrambling
Algorithm. The Common Scrambling Algorithm is composed of the Common Descrambling System and
the Scrambling Technology. The specification for each is distributed separately under arrangements with
ETSI, which acts as Custodian for the companies which developed the Common Scrambling Algorithm.
The last version, known as CSA3, is also available from ETSI. The various agreements, together with the
licensing conditions, are available on the ETSI website at:
www.etsi.org/about/what-we-do/security-algorithms-and-codes/csa3-licences
The TV-Anytime specifications were developed in two phases. Phase 1 was published by ETSI in
2003 as the multi-part TS 102 822. Part 7 of this standard [371] specifies how the TLS (Transport Layer
Security) Protocol is used in TV-Anytime to protect the delivery of data: the primary goal of the protocol
is to provide privacy and data integrity between two communicating applications. TLS also provides
choices of cipher suites where data encryption may be disabled. It can thus be used to ensure the data
integrity of metadata conveyed between service provider (server) and user (client).
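A metadata client establishes such a TLS channel with any standard library. The sketch below uses Python's ssl module, with a hypothetical metadata host name; the default context enforces server certificate verification and hostname checking, which underpin the privacy and integrity guarantees described above:

```python
import socket
import ssl

# Default context: certificate verification and hostname checking are on,
# giving the server-authenticated, integrity-protected channel TLS provides.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

def fetch_metadata(host: str, port: int = 443, request: bytes = b"") -> bytes:
    """Open a TLS-protected connection to a metadata server (hypothetical
    host name) and return the first response chunk."""
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(request)
            return tls.recv(4096)

# e.g. fetch_metadata("metadata.example.com", request=b"GET / HTTP/1.0\r\n\r\n")
```

Even when a negotiated cipher suite disables encryption, the TLS record MAC still protects the integrity of the metadata exchanged between server and client.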
At the request of the TV-Anytime Forum, JTC Broadcast has worked on the second phase, incorporating
an enhanced feature set. These Phase 2 specifications have now also been published by ETSI within the
same TS 102 822 series.
Since then a five-part standard for a Second Generation DVB Interactive Satellite System (commonly
named DVB-RCS2) has been published. Part 1 is a specification for an Overview and System Level
specification [388], part 2 is an EN on Lower Layers for Satellite [389], part 3 is a Higher Layers for
Satellite specification [390], part 4 [391] provides the Guidelines for Implementation and use of part 2
[389] and part 5 [392] provides the Guidelines for Implementation and use of part 3 [390].
An ETSI TS [393] has been published related to production scrambling algorithms and IP or higher layer
security mechanisms, to specify the new common DVB Conditional Access elements (DVB-CSA3);
specifically those aspects which are required for co-existence of multiple Conditional Access Systems in
a single data stream.
A specification on Content Scrambling Algorithms for DVB-IPTV Services using MPEG2 Transport
Streams [394] has also been published.
The DVB Project has published a multipart collection of TR and TS deliverables (14 parts, [374] to [387])
on CPCM (Content Protection and Copy Management).
DVB has defined a specification for CI (Common Interface) Plus, published by the CI Plus LLP (Limited
Liability Partnership) as CI Plus v1.4, in which security aspects are taken into account [395].
EP E2NA addressed MCD security aspects with an analysis of the mechanisms addressing
interoperability of multimedia service and content distribution and consumption, with respect to a
CA/DRM (Conditional Access / Digital Rights Management) solution [396].
As parts of its overall responsibilities, TC MTS also addresses issues related to security testing.
Achievements
In 2014 TC MTS published a report [397] on the application of model-based security testing in different
industrial domains. The document focuses on the results and conclusions from such work, thus giving an
insight into how applicable such methods are today for testing, and indicating their strengths and
weaknesses.
In 2015 TC MTS published a report [398] on basic terminology for security testing, which provides the
basis for a common understanding of security testing techniques that can be used to test
communication products and systems. The document provides information to practitioners on
techniques used in testing, and assessment of security, robustness and resilience throughout product
and system development lifecycles.
Ongoing activities
TC MTS is working on the two following ETSI Guides (EGs):
Guidance related to security assurance activities during the phases of the system lifecycle (to be
published as EG 203 250)
Guidance on a set of methodologies that combine risk assessment and testing, distinguishing
between risk-based testing (risk assessment to improve testing) and test-based risk assessment
(testing to improve risk assessment) (to be published as EG 203 251).
TC MTS has produced the following ETSI TSs dealing with IPv6 security aspects:
A catalogue of all of the security-related IPv6 requirements extracted from Internet Engineering
Task Force (IETF) specifications [399]
A Test Suite Structure and Test Purposes (TSS&TP) ([400]) for conformance tests of the security
IPv6 protocol based on the requirements defined in [399]
A specification for interoperability tests for IPv6 security [401]
An Abstract Test Suite (ATS) [402] for the mobility functions of IPv6, based on [399] and [400]; this
document provides a basis for conformance tests for IPv6 equipment to achieve a high probability
of interoperability between IPv6 equipment from different manufacturers.
Between 2004 and 2006 TC MTS carried out an IPv6 Testing project co-funded by ETSI, the EC and
EFTA (European Free Trade Association), taking into account the needs of bodies such as 3GPP and ETSI
TC TISPAN. This project provided a publicly available test development framework for four key areas of
IPv6: core protocols, security, mobility and migration from IPv4 to IPv6.
The security part of this work resulted in a conformance test suite containing 90 tests, covering IETF
RFC 4306, i.e. Internet Key Exchange version 2 (IKEv2), later updated by RFC 5282; RFC 4302, i.e.
Authentication Header (AH); and RFC 4303, i.e. Encapsulating Security Payload (ESP). The tests were
based on authentication and hash algorithms such as HMAC-SHA1 (Hash-based Message Authentication
Code – Secure Hash Algorithm 1) and HMAC-MD5 (Hash-based Message Authentication Code – Message
Digest 5), and encryption algorithms such as DES (Data Encryption Standard), 3DES (Triple Data
Encryption Standard) and AES (Advanced Encryption Standard).
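The keyed-hash constructions exercised by those tests can be reproduced with standard libraries. Below, both HMAC variants are computed over the first RFC 2202 test vector, showing the 20-byte (SHA-1) versus 16-byte (MD5) tag lengths that a conformance test must check:

```python
import hashlib
import hmac

# RFC 2202 test case 1: repeated 0x0b key (20 bytes for SHA-1, 16 for MD5),
# message "Hi There".
msg = b"Hi There"
tag_sha1 = hmac.new(b"\x0b" * 20, msg, hashlib.sha1).hexdigest()
tag_md5  = hmac.new(b"\x0b" * 16, msg, hashlib.md5).hexdigest()

assert tag_sha1 == "b617318655057264e28bc0b6fb378c8ef146be00"
assert tag_md5 == "9294727a3638bb1c13f48ef8158bfc9d"
assert len(bytes.fromhex(tag_sha1)) == 20  # SHA-1 output: 160 bits
assert len(bytes.fromhex(tag_md5)) == 16   # MD5 output: 128 bits
```

HMAC-MD5 and single DES survive here only as legacy test material; current practice favours HMAC with the SHA-2 family and AES.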
The objective of this project was to design, build and test a TTCN-3 based Test System Prototype for
ePassport Reader Conformance Testing. This project was a joint effort between the EC Joint Research
Centre (JRC) and ETSI.
TTCN-3 (Testing and Test Control Notation Version 3) is an internationally standardized programming
language that has been specifically designed for use in specifying and controlling testing scenarios.
TTCN-3 has been developed and is maintained by the ETSI TC Methods for Testing and Specification
(MTS).
As an outcome of the ePassport Readers project, ETSI TC MTS published a report on ePassport Readers
Interoperability Support; Framework for Developing Conformance Test Specifications [403]. This
deliverable contains an extended selection of test purposes, the Abstract Test Suite of the sample test
cases, a report on the validation of the prototype and the lab procedure for standardized test reporting.
In 2012, ETSI placed a renewed focus on IPCablecom standardization, creating a dedicated
Technical Committee on the subject, TC CABLE. IPCablecom work ongoing in TC ATTM was then moved
to the new committee.
Security is a key issue for IPCablecom, since it is a shared network providing valuable content. As well as
the standards on Lawful Interception ([158] and [159]), TC AT has produced a security specification for
the technology [404], covering security for the entire IPCablecom architecture, identifying security risks
and specifying mechanisms to secure the architecture.
TC CABLE has delivered an ETSI standard for requirements for both CMTSs (Cable Modem Termination
Systems) and CMs (Cable Modems) in order to implement a DOCSIS (Data-Over Cable Service Interface
Specifications) Layer-2 Virtual Private Network (DOCSIS L2VPN) feature [405]. The L2VPN feature allows
cable operators to offer a Layer 2 Transparent LAN Service (TLS) to commercial enterprises.
As ICT becomes ever more essential for business, public administration, public safety and commercial
needs, a vast number of new technologies are being developed and becoming mature for
standardization. Security is not an additional feature that can be patched on after the adoption of a
technology: it must be taken into account from the beginning of the standardization process. Indeed, in
many cases it can be a winning driver that enables the overall success of the technology, and with
increasing demands of privacy and integrity of data it is legally essential for businesses which generate
and handle user or customer data.
The threat to the security of our ICT systems grows daily. There is increased interest in defending
national and critical infrastructure through Cybersecurity, and innovations such as cloud computing
emphasize the need for good security. Experience has revealed many breaches of security that have had
a significant impact on ICT systems, whether the causes were deliberate or accidental. Ways must be
found to protect customers. There has also been a noticeable increase in legislation world-wide, driven
by growing security concerns in recent years. These continue to drive our activities.
Solutions will certainly include a reliable and secure network infrastructure. But they will also depend on
trust on the part of users - both citizens and businesses - that privacy, confidentiality, secure
identification and other issues are rightly addressed. Security standardization, sometimes in support of
legislative actions, therefore has an important role to play in the future development of ICT.
Technology is constantly evolving. Criminals are becoming ever more inventive. The personal security of
the individual citizen is far too frequently at risk from privacy threats, terrorism and natural disasters.
Security standardization must evolve too to keep pace with the developing risks and threats.
Throughout its lifetime, ETSI has already proved it can adapt to changing situations; it will continue to do
so, moving into new technical areas as they emerge and tackling new issues.
1 Some deliverables that are relevant to security algorithms, or that are for internal use, are only available on a restricted
basis. This is made explicit for each of these deliverables with the statement "NOT AVAILABLE FOR DOWNLOAD".
TETRA
[92] ETSI TR 102 021-7: "Terrestrial Trunked Radio (TETRA); User Requirement Specification TETRA
Release 2; Part 7: Security".
DECT
[99] ETSI EN 300 175-7: "Digital Enhanced Cordless Telecommunications (DECT); Common Interface
(CI); Part 7: Security features".
[100] ETSI EN 300 176-1: "Digital Enhanced Cordless Telecommunications (DECT); Test specification;
Part 1: Radio".
[101] ETSI ETS 300 759: "Digital Enhanced Cordless Telecommunications (DECT); DECT Authentication
Module (DAM); Test specification for DAM".
[102] ETSI ETS 300 760: "Digital Enhanced Cordless Telecommunications (DECT); DECT Authentication
Module (DAM); Implementation Conformance Statement (ICS) proforma specification".
[103] ETSI ETS 300 825: "Digital Enhanced Cordless Telecommunications (DECT); 3 Volt DECT
Authentication Module (DAM)".
[104] ETSI EN 300 175-6: "Digital Enhanced Cordless Telecommunications (DECT); Common Interface
(CI); Part 6: Identities and addressing".
RFID
[105] ETSI EN 302 208-1 (ERM TG34): "Electromagnetic compatibility and Radio spectrum Matters
(ERM); Radio Frequency Identification Equipment operating in the band 865 MHz to 868 MHz
with power levels up to 2 W; Part 1: Technical requirements and methods of measurement".
[106] ETSI EN 302 208-2 (ERM TG34): "Electromagnetic compatibility and Radio spectrum Matters
(ERM); Radio Frequency Identification Equipment operating in the band 865 MHz to 868 MHz
with power levels up to 2 W; Part 2: Harmonized EN covering essential requirements of article
3.2 of the R&TTE Directive".
[107] ETSI TR 102 436 (ERM TG34): "Electromagnetic compatibility and Radio spectrum Matters
(ERM); Short Range Devices (SRD) intended for operation in the band 865 MHz to 868 MHz;
Guidelines for the installation and commissioning of Radio Frequency Identification (RFID)
equipment at UHF".
RRS
[110] ETSI TR 102 745 (RRS WG4): "Reconfigurable Radio Systems (RRS); User Requirements for Public
Safety".
Satellite
[111] ETSI TR 102 287: "Satellite Earth Stations and Systems (SES); Broadband Satellite Multimedia
(BSM); IP Interworking over satellite; Security aspects".
[112] ETSI TS 102 465 (SES BSM): "Satellite Earth Stations and Systems (SES); Broadband Satellite
Multimedia (BSM); General Security Architecture".
[113] ETSI TS 102 466 (SES BSM): "Satellite Earth Stations and Systems (SES); Broadband Satellite
Multimedia (BSM); Multicast Security Architecture".
[114] ETSI TS 101 376-3-9: "GEO-Mobile Radio Interface Specifications; Part 3: Network specifications;
Sub-part 9: Security related Network Functions; GMR-1 03.020".
[115] ETSI TS 101 377-2-3 (SES GMR): "GEO-Mobile Radio Interface Specifications; Part 2: Service
specifications; Sub-part 3: Security Aspects; GMR-2 02.009".
[116] ETSI TS 101 377-3-10 (SES GMR): "GEO-Mobile Radio Interface Specifications; Part 3: Network
specifications; Sub-part 10: Security related Network Functions; GMR-2 03.020".
[117] ETSI TS 102 442-6: "Satellite Earth Stations and Systems (SES); Satellite Component of
UMTS/IMT-2000; Multimedia Broadcast/Multicast Services; Part 6: Security".
Machine to Machine
[128] ETSI TR 103 167: "Machine to Machine (M2M); Threat analysis and counter measures to M2M
service layer".
[129] ETSI TS 102 690: "Machine-to-Machine communications (M2M); Functional architecture".
[130] ETSI TS 102 921: "Machine-to-Machine communications (M2M); mIa, dIa and mId interfaces".
[131] ETSI TS 118 103: "Security solutions" (oneM2M).
[132] ETSI TR 118 508: "Analysis of Security Solutions for the oneM2M System" (oneM2M).
Lawful interception
Published by TC LI
[133] ETSI ES 201 671: "Lawful Interception (LI); Handover interface for the lawful interception of
telecommunications traffic".
[134] ETSI ES 201 158: "Telecommunications security; Lawful Interception (LI); Requirements for
network functions".
[135] ETSI TS 102 656: "Lawful Interception (LI); Retained Data; Requirements of Law Enforcement
Agencies for handling Retained Data".
[136] ETSI TR 102 519: "Lawful Interception of public Wireless LAN Internet Access".
[137] ETSI TR 102 528: "Lawful Interception (LI) Interception domain Architecture for IP networks".
[138] ETSI TS 101 671: "Lawful Interception (LI); Handover interface for the lawful interception of
telecommunications traffic".
[139] ETSI TS 101 331: "Lawful Interception (LI); Requirements of Law Enforcement Agencies".
[140] ETSI TR 101 944: "Telecommunications security; Lawful Interception (LI); Issues on IP
Interception".
[141] ETSI TR 102 503: "Lawful Interception (LI); ASN.1 Object Identifiers in Lawful Interception
Specifications".
[142] ETSI TS 102 232-1: "Lawful Interception (LI); Handover Interface and Service-Specific Details
(SSD) for IP delivery; Part 1: Handover specification for IP delivery".
[143] ETSI TS 102 232-2: "Lawful Interception (LI); Handover Interface and Service-Specific Details
(SSD) for IP delivery; Part 2: Service-specific details for messaging services".
[144] ETSI TS 102 232-3: "Lawful Interception (LI); Handover Interface and Service-Specific Details
(SSD) for IP delivery; Part 3: Service-specific details for internet access services".
[145] ETSI TS 102 232-4: "Lawful Interception (LI); Handover Interface and Service-Specific Details
(SSD) for IP delivery; Part 4: Service-specific details for Layer 2 services".
[146] ETSI TS 102 232-5: "Lawful Interception (LI); Handover Interface and Service-Specific Details
(SSD) for IP delivery; Part 5: Service-specific details for IP Multimedia Services".
[147] ETSI TS 102 232-6: "Lawful Interception (LI); Handover Interface and Service-Specific Details
(SSD) for IP delivery; Part 6: Service-specific details for PSTN/ISDN services".
[148] ETSI TS 102 232-7: "Lawful Interception (LI); Handover Interface and Service-Specific Details
(SSD) for IP delivery; Part 7: Service-specific details for Mobile Services".
Electronic Signatures
[160] ETSI TR 102 044: "Electronic Signatures and Infrastructures (ESI); Requirements for role and
attribute certificates".
[161] ETSI TR 102 045: "Electronic Signatures and Infrastructures (ESI); Signature policy for extended
business model".
[162] ETSI TR 102 046: "Electronic Signatures and Infrastructures (ESI); Maintenance report".
[163] ETSI TR 102 047: "Electronic Signatures and Infrastructures (ESI); International Harmonization of
Electronic Signature Formats".
[164] ETSI TR 102 040: "Electronic Signatures and Infrastructures (ESI); International Harmonization of
Policy Requirements for CAs issuing Certificates".
[165] ETSI TS 102 231: "Electronic Signatures and Infrastructures (ESI); Provision of harmonized Trust-
service status information".
NB: The lawful interception references for GSM and UMTS can be found among the GSM and UMTS references.
Security Algorithms
[236] ETSI TCTR 003 - NOT AVAILABLE FOR DOWNLOAD: "Security Algorithms Group of Experts (SAGE);
European Encryption Algorithm for the use in audiovisual systems".
[237] ETSI TCTR 001 - NOT AVAILABLE FOR DOWNLOAD: "Security Algorithms Group of Experts (SAGE);
Requirements specification for an encryption algorithm for use in audio visual systems".
[238] ETSI TCTR 002 - NOT AVAILABLE FOR DOWNLOAD: "Security Algorithms Group of Experts (SAGE);
Report on the specification and evaluation of the GSM cipher algorithm A5/2".
[239] ETSI TCTR 004 - NOT AVAILABLE FOR DOWNLOAD: "Security Algorithms Group of Experts (SAGE);
Cryptographic Algorithm for the European Multi-Application IC-Card".
[240] ETSI TCTR 005 - NOT AVAILABLE FOR DOWNLOAD: "Security Algorithms Group of Experts (SAGE);
UPT Authentication Algorithm for the use in DTMF Devices".
[241] ETSI TCRTR 032: "Security Algorithms Group of Experts (SAGE); Rules for the management of the
TESA-7 algorithm".
[242] ETSI TCRTR 031: "Security Algorithms Group of Experts (SAGE); Universal Personal
Telecommunication (UPT) authentication; Rules for the management of the USA-4".
[243] MI/SAGE-0008 - NOT AVAILABLE FOR DOWNLOAD: "Cryptographic algorithm for Public Network
Operators".
[244] ETSI TCRTR 035: "Security Algorithms Group of Experts (SAGE); Rules for the management of the
Baras algorithm".
[245] ETSI TR 101 053-1: "Security Algorithms Group of Experts (SAGE); Rules for the management of
the TETRA standard encryption algorithms; Part 1: TEA1".
NB: The next version will be published as a TS.
[246] MI/SAGE-00010-2 - NOT AVAILABLE FOR DOWNLOAD: "Standard Trans European Trunked RAdio
(TETRA) air interface encryption algorithm TEA1 and TEA2".
[247] ETSI TS 101 053-2: "Security Algorithms Group of Experts (SAGE); Rules for the management of
the TETRA standard encryption algorithms; Part 2: TEA2".
[248] ETSI TR 101 052: "Security Algorithms Group of Experts (SAGE); Rules for the management of the
TETRA standard authentication and key management algorithm set TAA1".
[249] MI/SAGE-00011-2 - NOT AVAILABLE FOR DOWNLOAD: "Standard Trans European Trunked RAdio
(TETRA) set of air interface authentication and key management algorithms TAA1".
[250] ETSI TR 101 054: "Security Algorithms Group of Experts (SAGE); Rules for the management of the
HIPERLAN Standard Encryption Algorithm (HSEA)".
Smart Cards
[293] ETSI TS 101 220: "Smart Cards; ETSI numbering system for telecommunication application
providers".
[294] ETSI TS 102 124: "Smart Cards; Transport Protocol for UICC based Applications; Stage 1".
[295] ETSI TR 102 151: "Smart Cards; Measurement of Electromagnetic Emission of SIM Cards".
[296] ETSI TS 102 600: "Smart Cards; UICC-Terminal interface; Characteristics of the USB interface".
[297] ETSI TS 102 613: "Smart Cards; UICC - Contactless Front-end (CLF) Interface; Part 1: Physical and
data link layer characteristics".
[298] ETSI TS 102 622: "Smart Cards; UICC - Contactless Front-end (CLF) Interface; Host Controller
Interface (HCI)".
[299] ETSI TS 102 483: "Smart cards; UICC-Terminal interface; Internet Protocol connectivity between
UICC and terminal".
[300] ETSI TS 102 484: "Smart Cards; Secure channel between a UICC and an end-point terminal".
[301] ETSI TS 102 588: "Smart Cards; Application invocation Application Programming Interface (API)
by a UICC webserver for Java Card™ platform".
[302] ETSI TS 102 221: "Smart Cards; UICC-Terminal interface; Physical and logical characteristics".
[303] ETSI TS 102 223: "Smart Cards; Card Application Toolkit (CAT)".
[304] ETSI TS 102 224: "Smart Cards; Security mechanisms for UICC based Applications - Functional
requirements".
[305] ETSI TS 102 225: "Smart Cards; Secured packet structure for UICC based applications".
[306] ETSI TS 102 226: "Smart Cards; Remote APDU structure for UICC based applications".
[307] ETSI TS 102 230: "Smart cards; UICC-Terminal interface; Physical, electrical and logical test
specification".
[308] ETSI TS 102 240: "Smart Cards; UICC Application Programming Interface and Loader
Requirements; Service description".
[309] ETSI TS 102 222: "Integrated Circuit Cards (ICC); Administrative commands for
telecommunications applications".
[310] ETSI TS 102 310: "Smart Cards; Extensible Authentication Protocol support in the UICC (Release
6)".
Future Networks
[324] ETSI ES 202 382: "Telecommunications and Internet converged Services and Protocols for
Advanced Networking (TISPAN); Security Design Guide; Method and proforma for defining
Protection Profiles".
[325] ETSI ES 202 383: "Telecommunications and Internet converged Services and Protocols for
Advanced Networking (TISPAN); Security Design Guide; Method and proforma for defining
Security Targets".
[326] ETSI EG 202 387: "Telecommunications and Internet converged Services and Protocols for
Advanced Networking (TISPAN); Security Design Guide; Method for application of Common
Criteria to ETSI deliverables".
[327] ETSI TR 102 419: "Telecommunications and Internet converged Services and Protocols for
Advanced Networking (TISPAN); Security analysis of IPv6 application in telecommunications
standards".
[328] ETSI TR 102 420: "Telecommunications and Internet converged Services and Protocols for
Advanced Networking (TISPAN); Review of activity on security".
[329] ETSI TR 102 055: "Telecommunications and Internet converged Services and Protocols for
Advanced Networking (TISPAN); ENUM scenarios for user and infrastructure ENUM".
Emergency Telecommunications
[351] ETSI TR 102 180: "Basis of requirements for communication of individuals with
authorities/organizations in case of distress (Emergency call handling)".
[352] ETSI TR 102 299: "Emergency Communications; Collection of European Regulatory Texts and
orientations".
[353] ETSI TS 102 181: "Emergency Communications (EMTEL); Requirements for communication
between authorities/organizations during emergencies".
[354] ETSI TS 102 182: "Emergency Communications (EMTEL); Requirements for communications from
authorities/organizations to individuals, groups or the general public during emergencies".
[355] ETSI TR 102 410: "Emergency Communications (EMTEL); Basis of requirements for
communications between individuals and between individuals and authorities whilst
emergencies are in progress".
[356] ETSI TR 102 444: "Emergency Communications (EMTEL); Analysis of the Short Message Service
(SMS) and Cell Broadcast Service (CBS) for Emergency Messaging applications; Emergency
Messaging; SMS and CBS".
[357] ETSI TR 102 445: "Emergency Communications (EMTEL); Overview of Emergency
Communications Network Resilience and Preparedness".
[358] ETSI TR 102 476: "Emergency Communications (EMTEL); Emergency calls and VoIP: possible
short and long term solutions and standardization activities".
[359] ETSI TS 102 900: "Emergency Communications (EMTEL); European Public Warning System
(EU-ALERT) using the Cell Broadcast Service".
[360] ETSI TS 170 001: "Project MESA; Service Specification Group - Services and Applications;
Statement of Requirements (SoR)".
[361] ETSI TR 170 002: "Project MESA; Service Specification Group - Services and Applications;
Definitions, symbols and abbreviations".
[362] ETSI TR 170 003: "Project MESA; Service Specification Group - Services and Applications; Basic
requirements".
[363] ETSI TR 170 012: "Project MESA; Technical Specification Group - System; System Overview".
[364] ETSI TR 102 653: "Project MESA; Technical Specification Group - System; System and Network
Architecture".
Broadcasting
[366] ETSI TS 101 197-1 (Broadcast): "Digital Video Broadcasting (DVB); DVB SimulCrypt; Part 1: Head-
end architecture and synchronization".
[367] ETSI EN 300 744 (Broadcast): "Digital Video Broadcasting (DVB); Framing structure, channel
coding and modulation for digital terrestrial television".
[368] ETSI EN 301 192 (Broadcast): "Digital Video Broadcasting (DVB); DVB specification for data
broadcasting".
[369] ETSI TS 102 201 (Broadcast): "Digital Video Broadcasting (DVB); Interfaces for DVB Integrated
Receiver Decoder (DVB-IRD)".
[370] ETSI TS 103 197 (Broadcast): "Digital Video Broadcasting (DVB); Head-end implementation of
DVB SimulCrypt".
[371] ETSI TS 102 822-7 (Broadcast): "Broadcast and On-line Services: Search, select, and rightful
use of content on personal storage systems ("TV-Anytime Phase 1"); Part 7: Bi-directional
metadata delivery protection".
[372] ETSI EN 301 790 (Broadcast): "Digital Video Broadcasting (DVB); Interaction channel for satellite
distribution systems".
[373] ETSI TS 102 812 (Broadcast): "Digital Video Broadcasting (DVB); Multimedia Home Platform
(MHP) Specification 1.1.1".
[374] ETSI TS 102 825-1 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 1: CPCM Abbreviations, Definitions and Terms".
[375] ETSI TS 102 825-2 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 2: CPCM Reference Model".
[376] ETSI TS 102 825-3 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 3: CPCM Usage State Information".
[377] ETSI TS 102 825-4 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 4: CPCM System Specification".
[378] ETSI TS 102 825-5 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 5: CPCM Security Toolbox".
[379] ETSI TR 102 825-6 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 6: CPCM Security Test Vectors".
[380] ETSI TS 102 825-7 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 7: CPCM Authorized Domain Management".
[381] ETSI TR 102 825-8 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 8: CPCM Authorized Domain Management scenarios".
[382] ETSI TS 102 825-9 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 9: CPCM System Adaptation Layers".
[383] ETSI TS 102 825-10 (Broadcast): "Digital Video Broadcasting (DVB); Content Protection and Copy
Management (DVB-CPCM); Part 10: CPCM Acquisition, Consumption and Export Mappings".
Security Testing
[397] ETSI TR 101 582: "Methods for Testing and Specification (MTS); Security Testing; Case Study
Experiences".
[398] ETSI TR 101 583: "Methods for Testing and Specification (MTS); Security Testing; Basic
Terminology".
IPv6
[399] ETSI TS 102 558: "Methods for Testing and Specification (MTS); Internet Protocol Testing (IPT):
IPv6 Security; Requirements Catalogue".
ePassport Readers
[403] ETSI TR 103 200: "Methods for Testing and Specification (MTS); ePassport Readers
Interoperability Support; Framework for Developing Conformance Test Specifications".
IP Cablecom
[404] ETSI TS 101 909-11: "Digital Broadband Cable Access to the Public Telecommunications
Network; IP Multimedia Time Critical Services; Part 11: Security".
[405] ETSI ES 203 385: "CABLE; DOCSIS® Layer 2 Virtual Private Networking".
Reference numbers of security-related documents added since the 6th Edition of this White Paper
(January 2014):
[86], [87], [88], [89], [90], [91], [126], [127], [131], [132], [230], [231], [232], [233], [234], [235], [247],
[287], [291], [391], [392], [393], [394], [395], [396], [397], [398] and [405].
This White Paper is issued for information only. It does not constitute an official or agreed position of ETSI, nor of its Members. The
views expressed are entirely those of the author(s).
ETSI declines all responsibility for any errors and any loss or damage resulting from use of the contents of this White Paper.
ETSI also declines responsibility for any infringement of any third party's Intellectual Property Rights (IPR), but will be pleased to
acknowledge any IPR and correct any infringement of which it is advised.
Copyright Notification
Copying or reproduction in whole is permitted if the copy is complete and unchanged (including this copyright statement).
© European Telecommunications Standards Institute 2015. All rights reserved.
DECT™, PLUGTESTS™, UMTS™, TIPHON™, IMS™, INTEROPOLIS™, FORAPOLIS™, and the TIPHON and ETSI logos are Trade Marks of ETSI
registered for the benefit of its Members.
3GPP™ and LTE™ are Trade Marks of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners.
GSM™, the Global System for Mobile communication, is a registered Trade Mark of the GSM Association.
Amazon Web Services:
Overview of Security Processes
March 2020
This paper has been archived.
For the latest technical content on Security and Compliance, see
https://aws.amazon.com/architecture/security-identity-compliance/
Notices
Customers are responsible for making their own independent assessment of the
information in this document. This document: (a) is for informational purposes only, (b)
represents current AWS product offerings and practices, which are subject to change
without notice, and (c) does not create any commitments or assurances from AWS and
its affiliates, suppliers or licensors. AWS products or services are provided “as is”
without warranties, representations, or conditions of any kind, whether express or
implied. The responsibilities and liabilities of AWS to its customers are controlled by
AWS agreements, and this document is not part of, nor does it modify, any agreement
between AWS and its customers.
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Contents
Introduction
Shared Security Responsibility Model
  AWS Security Responsibilities
  Customer Security Responsibilities
AWS Global Infrastructure Security
  AWS Compliance Program
  Physical and Environmental Security
  Business Continuity Management
  Network Security
  AWS Access
  Secure Design Principles
  Change Management
  AWS Account Security Features
  Individual User Accounts
  Secure HTTPS Access Points
  Security Logs
  AWS Trusted Advisor Security Checks
  AWS Config Security Checks
AWS Service-Specific Security
  Compute Services
  Networking Services
  Storage Services
  Database Services
  Application Services
  Analytics Services
  Deployment and Management Services
  Mobile Services
  Applications
Document Revisions
Abstract
This document is intended to answer questions such as “How does AWS help me
ensure that my data is secure?” Specifically, this paper describes AWS physical and
operational security processes for the network and server infrastructure under the
management of AWS.
Amazon Web Services Amazon Web Services: Overview of Security Processes
Introduction
Amazon Web Services (AWS) delivers a scalable cloud computing platform with high
availability and dependability, providing the tools that enable customers to run a wide
range of applications. Helping to protect the confidentiality, integrity, and availability of
our customers’ systems and data is of the utmost importance to AWS, as is maintaining
customer trust and confidence.
Shared Security Responsibility Model
Before covering the details of how AWS secures its resources, it is important to
understand how security in the cloud is slightly different than security in your
on-premises data centers. When you move computer systems and data to the cloud,
security responsibilities become shared between you and your cloud service provider. In
this case, AWS is responsible for securing the underlying infrastructure that supports
the cloud, and you're responsible for anything you put on the cloud or connect to the
cloud. This shared security responsibility model can reduce your operational burden in
many ways, and in some cases may even improve your default security posture without
additional action on your part.
Figure 1: AWS shared security responsibility model
The amount of security configuration work you have to do varies depending on which
services you select and how sensitive your data is. However, there are certain security
features (such as individual user accounts and credentials, SSL/TLS for data
transmissions, and user activity logging) that you should configure no matter which
AWS service you use.

AWS Security Responsibilities
AWS is responsible for protecting the global infrastructure that runs all of the services
offered in the AWS cloud. This infrastructure comprises the hardware,
software, networking, and facilities that run AWS services. Protecting this infrastructure
is the number one priority of AWS. Although you can't visit our data centers or offices to
see this protection firsthand, we provide several reports from third-party auditors who
have verified our compliance with a variety of computer security standards and
regulations. For more information, visit AWS Compliance.
Note that in addition to protecting this global infrastructure, AWS is responsible for the
security configuration of its products that are considered managed services. Examples
of these types of services include Amazon DynamoDB, Amazon RDS, Amazon
Redshift, Amazon EMR, Amazon WorkSpaces, and several other services. These
services provide the scalability and flexibility of cloud-based resources with the
additional benefit of being managed. For these services, AWS handles basic security
tasks like guest operating system (OS) and database patching, firewall configuration,
and disaster recovery. For most of these managed services, all you have to do is
configure logical access controls for the resources and protect your account credentials.
A few of them may require additional tasks, such as setting up database user accounts,
but overall the security configuration work is performed by the service.
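As a sketch of that remaining customer-side work, the snippet below builds a least-privilege IAM policy document scoped to a single DynamoDB table, using only the Python standard library. The account ID, region, and table name are hypothetical, and the helper itself is illustrative, not an AWS API.

```python
import json

def table_access_policy(account_id: str, region: str, table: str) -> str:
    """Build a least-privilege IAM policy document that limits a user to
    reading and writing items in one DynamoDB table. Configuring logical
    access controls like this remains the customer's responsibility even
    for managed services."""
    doc = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
            ],
            # Scope the grant to one table ARN rather than "*".
            "Resource": f"arn:aws:dynamodb:{region}:{account_id}:table/{table}",
        }],
    }
    return json.dumps(doc, indent=2)

print(table_access_policy("123456789012", "us-east-1", "Orders"))
```

The resulting JSON can be attached to an IAM user or group; the key design choice is naming an exact resource ARN instead of a wildcard.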
Customer Security Responsibilities
With Amazon EC2, for example, you are responsible for management of the guest
operating system (including updates and security patches), any application
software or utilities you install on the instances, and the configuration of the AWS
provided firewall (called a security group) on each instance. These are basically the
same security tasks that you’re used to performing no matter where your servers are
located.
AWS managed services like Amazon RDS or Amazon Redshift provide all of the
resources you need to perform a specific task—but without the configuration work that
can come with them. With managed services, you don’t have to worry about launching
and maintaining instances, patching the guest OS or database, or replicating
databases—AWS handles that for you. But as with all services, you should protect your
AWS Account credentials and set up individual user accounts with Amazon Identity and
Access Management (IAM) so that each of your users has their own credentials and
you can implement segregation of duties. We also recommend using multi-factor
authentication (MFA) with each account, requiring the use of SSL/TLS to communicate
with your AWS resources, and setting up API/user activity logging with AWS CloudTrail.
For more information about additional measures you can take, refer to the AWS
Security Best Practices whitepaper and recommended reading on the AWS Security
Learning webpage.
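One common way to enforce the MFA recommendation is an IAM policy that denies every action unless the request was MFA-authenticated. The sketch below builds such a policy document with the standard library; the condition key `aws:MultiFactorAuthPresent` is IAM's documented context key, while the surrounding names and printing are illustrative.

```python
import json

# A deny-unless-MFA policy. The "BoolIfExists" operator also catches
# requests where the MFA context key is absent entirely (e.g. plain
# access-key calls), not just requests where it is "false".
MFA_REQUIRED_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWithoutMFA",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}

print(json.dumps(MFA_REQUIRED_POLICY, indent=2))
```

Because IAM evaluates explicit denies before allows, attaching this policy to a group makes MFA a hard requirement for every member regardless of what other permissions they hold.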
AWS Global Infrastructure Security
AWS operates the global cloud infrastructure that you use to provision a variety of basic
computing resources such as processing and storage. The AWS global infrastructure
includes the facilities, network, hardware, and operational software (e.g., host OS,
virtualization software, etc.) that support the provisioning and use of these resources.
The AWS global infrastructure is designed and managed according to security best
practices as well as a variety of security compliance standards. As an AWS customer,
you can be assured that you’re building web architectures on top of some of the most
secure computing infrastructure in the world.
AWS Compliance Program
The IT infrastructure
that AWS provides to its customers is designed and managed in alignment with security
best practices and a variety of IT security standards, including:
• SOC 2
• SOC 3
• PCI DSS Level 1
• ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018
• ITAR
• FIPS 140-2
• MTCS Level 3
• HITRUST
In addition, the flexibility and control that the AWS platform provides allows customers
to deploy solutions that meet several industry-specific standards, including:
• Cloud Security Alliance (CSA)
Physical and Environmental Security
AWS data centers are housed in facilities that are not
branded as AWS facilities. Physical access is strictly controlled both at the perimeter
and at building ingress points by professional security staff utilizing video surveillance,
intrusion detection systems, and other electronic means.
Authorized staff must pass two-factor authentication a minimum of two times to access
data center floors. All visitors are required to present identification and are signed in and
continually escorted by authorized staff.
AWS only provides data center access and information to employees and contractors
who have a legitimate business need for such privileges. When an employee no longer
has a business need for these privileges, his or her access is immediately revoked,
even if they continue to be an employee of Amazon or Amazon Web Services. All
physical access to data centers by AWS employees is logged and audited routinely.
Fire Detection and Suppression
Automatic fire detection and suppression equipment has been installed to reduce risk.
The fire detection system utilizes smoke detection sensors in all data center
environments, mechanical and electrical infrastructure spaces, chiller rooms and
generator equipment rooms. These areas are protected by either wet-pipe,
double-interlocked pre-action, or gaseous sprinkler systems.
Power
The data center electrical power systems are designed to be fully redundant and
maintainable without impact to operations, 24 hours a day, seven days a week.
Uninterruptible Power Supply (UPS) units provide back-up power in the event of an
electrical failure for critical and essential loads in the facility. Data centers use
generators to provide back-up power for the entire facility.
Management
AWS monitors electrical, mechanical, and life support systems and equipment so that
any issues are immediately identified. Preventative maintenance is performed to
maintain the continued operability of equipment.
Storage Device Decommissioning
When a storage device has reached the end of its useful life, AWS procedures include a
decommissioning process that is designed to prevent customer data from being
exposed to unauthorized individuals. AWS uses the techniques detailed in NIST 800-88
(“Guidelines for Media Sanitization”) as part of the decommissioning process.
Business Continuity Management
Amazon’s infrastructure has a high level of availability and provides customers the
features to deploy a resilient IT architecture. AWS has designed its systems to tolerate
system or hardware failures with minimal customer impact. Data center Business
Continuity Management at AWS is under the direction of the Amazon Infrastructure
Group.
Availability
Data centers are built in clusters in various global regions. All data centers are online
and serving customers; no data center is “cold.” In case of failure, automated processes
move customer data traffic away from the affected area. Core applications are deployed
in an N+1 configuration, so that in the event of a data center failure, there is sufficient
capacity to enable traffic to be load-balanced to the remaining sites.
AWS provides you with the flexibility to place instances and store data within multiple
geographic regions as well as across multiple availability zones within each region.
Each availability zone is designed as an independent failure zone.
This means that availability zones are physically separated within a typical metropolitan
region and are located in lower risk flood plains (specific flood zone categorization
varies by Region). In addition to discrete uninterruptable power supply (UPS) and onsite
backup generation facilities, they are each fed via different grids from independent
utilities to further reduce single points of failure. Availability zones are all redundantly
connected to multiple tier-1 transit providers.
You should architect your AWS usage to take advantage of multiple regions and
availability zones. Distributing applications across multiple availability zones provides
the ability to remain resilient in the face of most failure modes, including natural
disasters or system failures.
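A minimal sketch of that advice, assuming hypothetical zone names: round-robin placement bounds how much capacity the failure of any single availability zone can take out.

```python
from itertools import cycle

def spread_across_zones(instances, zones):
    """Assign instances to Availability Zones round-robin, so losing one
    zone takes down at most ceil(n / len(zones)) instances."""
    assignment = {}
    zone_iter = cycle(zones)
    for inst in instances:
        assignment[inst] = next(zone_iter)
    return assignment

# Zone names below are illustrative placeholders.
placement = spread_across_zones(
    ["web-1", "web-2", "web-3", "web-4"],
    ["us-east-1a", "us-east-1b", "us-east-1c"],
)
```

The same idea scales up a level: running the whole stack in a second region protects against region-wide events, at the cost of cross-region data replication.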
Incident Response
The Amazon Incident Management team employs industry-standard diagnostic
procedures to drive resolution during business-impacting events. Staff operators provide
24x7x365 coverage to detect incidents and to manage the impact and resolution.
Amazon’s Internal Audit group has recently reviewed the AWS services resiliency plans,
which are also periodically reviewed by members of the Senior Executive management
team and the Audit Committee of the Board of Directors.
Communication
AWS has implemented various methods of internal communication at a global level to
help employees understand their individual roles and responsibilities and to
communicate significant events in a timely manner. These methods include orientation
and training programs for newly hired employees; regular management meetings for
updates on business performance and other matters; and electronic means such as
r
video conferencing, electronic mail messages, and the posting of information via the
Amazon intranet.
A
AWS has also implemented various methods of external communication to support its
customer base and the community. Mechanisms are in place to allow the customer
support team to be notified of operational issues that impact the customer experience. A
Service Health Dashboard is available and maintained by the customer support team to
alert customers to any issues that may be of broad impact. The AWS Cloud Security
Center is available to provide you with security and compliance details about AWS. You
can also subscribe to AWS Support offerings that include direct communication with the
customer support team and proactive alerts to any customer impacting issues.
Network Security
The AWS network has been architected to permit you to select the level of security and
resiliency appropriate for your workload. To enable you to build geographically
dispersed, fault-tolerant web architectures with cloud resources, AWS has implemented
a world-class network infrastructure that is carefully monitored and managed.
ACLs, or traffic flow policies, are established on each managed interface, which
manage and enforce the flow of traffic. ACL policies are approved by Amazon
Information Security. These policies are automatically pushed using AWS’s ACL-Manage tool, to help ensure these managed interfaces enforce the most up-to-date ACLs.
Secure Access Points
AWS has strategically placed a limited number of access points to the cloud to allow for more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, and they allow secure HTTP access (HTTPS) so that you can establish a secure communication session with your storage or compute instances within AWS. To support customers with FIPS cryptographic requirements, the SSL-terminating load balancers in AWS GovCloud (US) are FIPS 140-2 compliant.
In addition, AWS has implemented network devices that are dedicated to managing interfacing communications with Internet service providers (ISPs). AWS employs a redundant connection to more than one communication service at each Internet-facing edge of the AWS network. These connections each have dedicated network devices.
Transmission Protection
You can connect to an AWS access point via HTTP or HTTPS using Secure Sockets
Layer (SSL), a cryptographic protocol that is designed to protect against eavesdropping,
tampering, and message forgery.
For customers who require additional layers of network security, AWS offers the
Amazon Virtual Private Cloud (VPC), which provides a private subnet within the AWS
cloud, and the ability to use an IPsec Virtual Private Network (VPN) device to provide an
encrypted tunnel between the Amazon VPC and your data center. For more information
about VPC configuration options, see the Amazon Virtual Private Cloud (Amazon VPC)
Security section.
Approved AWS personnel then connect to the AWS network through a bastion host that
restricts access to network devices and other cloud components, logging all activity for security review. Access to bastion hosts requires SSH public-key authentication for all user accounts on the host. For more information on AWS developer and administrator logical access, see AWS Access below.
Fault-Tolerant Design
Amazon’s infrastructure has a high level of availability and provides you with the capability to deploy a resilient IT architecture. AWS has designed its systems to tolerate system or hardware failures with minimal customer impact.
Data centers are built in clusters in various global regions. All data centers are online and serving customers; no data center is “cold.” In case of failure, automated processes move customer data traffic away from the affected area. Core applications are deployed in an N+1 configuration, so that in the event of a data center failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.
AWS provides you with the flexibility to place instances and store data within multiple
geographic regions as well as across multiple availability zones within each region.
Each availability zone is designed as an independent failure zone. This means that
availability zones are physically separated within a typical metropolitan region and are
located in lower risk flood plains (specific flood zone categorization varies by region). In
addition to utilizing discrete uninterruptible power supply (UPS) and onsite backup
generators, they are each fed via different grids from independent utilities to further
reduce single points of failure. Availability zones are all redundantly connected to
multiple tier-1 transit providers.
You should architect your AWS usage to take advantage of multiple regions and
availability zones. Distributing applications across multiple availability zones provides
the ability to remain resilient in the face of most failure scenarios, including natural
disasters or system failures. However, you should be aware of location-dependent
privacy and compliance requirements, such as the EU Data Privacy Directive. Data is
not replicated between regions unless proactively done so by the customer, thus
allowing customers with these types of data placement and privacy requirements the
ability to establish compliant environments. It should be noted that all communications between regions are across public internet infrastructure; therefore, appropriate encryption methods should be used to protect sensitive data.
Data centers are built in clusters in various global regions, including: US East (Northern
Virginia), US West (Oregon), US West (Northern California), AWS GovCloud (US)
(Oregon), EU (Frankfurt), EU (Ireland), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), China (Beijing), and South America (Sao Paulo). For a complete list of AWS Regions, see the AWS Global Infrastructure page.
AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move workloads into the cloud by helping them meet certain regulatory and compliance requirements. The AWS GovCloud (US) framework allows US government agencies and their contractors to comply with U.S. International Traffic in Arms Regulations (ITAR) as well as Federal Risk and Authorization Management Program (FedRAMP) requirements. AWS GovCloud (US) has received an Agency Authorization to Operate (ATO) from the US Department of Health and Human Services (HHS) utilizing a FedRAMP-accredited Third Party Assessment Organization (3PAO) for several AWS services.
The AWS GovCloud (US) Region provides the same fault-tolerant design as other regions, with two Availability Zones. In addition, the AWS GovCloud (US) Region requires use of the AWS Virtual Private Cloud (VPC) service by default to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses. For more information, see AWS GovCloud (US).
Systems within AWS are extensively instrumented to monitor key operational metrics.
Alarms are configured to automatically notify operations and management personnel
when early warning thresholds are crossed on key operational metrics. An on-call schedule is used so that personnel are always available to respond to operational issues, and post-mortems are convened after significant operational issues so the root cause is captured and preventative actions are taken in the future. Implementation of the preventative measures is tracked during weekly operations meetings.
AWS Access
The AWS Production network is segregated from the Amazon Corporate network and requires a separate set of credentials for logical access. The Amazon Corporate network relies on user IDs, passwords, and Kerberos, whereas the AWS Production network requires SSH public-key authentication through a bastion host.
AWS developers and administrators on the Amazon Corporate network who need to access AWS cloud components must explicitly request access through the AWS access management system. All requests are reviewed and approved by the appropriate owner or manager.
Requests for changes in access are captured in the Amazon permissions management
tool audit log. When changes in an employee’s job function occur, continued access to the resource must be explicitly approved or it will be automatically revoked.
Background Checks
AWS has established formal policies and procedures to delineate the minimum
standards for logical access to AWS platform and infrastructure hosts. AWS conducts
criminal background checks, as permitted by law, as part of pre-employment screening
practices for employees and commensurate with the employee’s position and level of
access. The policies also identify functional responsibilities for the administration of
logical access and security.
Credentials Policy
AWS Security has established a credentials policy with required configurations and expiration intervals. Passwords must be complex and are forced to be changed every 90 days.
Secure Design Principles
The AWS development process follows secure software development best practices,
which include formal design reviews by the AWS Security Team, threat modeling, and completion of a risk assessment. Static code analysis tools are run as a part of the standard build process, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations.
Change Management
Routine, emergency, and configuration changes to existing AWS infrastructure are
authorized, logged, tested, approved, and documented in accordance with industry
norms for similar systems. Updates to the AWS infrastructure are done to minimize any
impact on the customer and their use of the services. AWS will communicate with
customers, either via email, or through the AWS Service Health Dashboard when
service use is likely to be adversely affected.
Software
AWS applies a systematic approach to managing change so that changes to customer-
impacting services are thoroughly reviewed, tested, approved, and well-communicated.
The AWS change management process is designed to avoid unintended service
disruptions and to maintain the integrity of service to the customer. Changes deployed
into production environments are:
• Tested – Changes being applied are tested to help ensure they will behave as
expected and not adversely impact performance.
Changes are typically pushed into production in a phased deployment starting with
lowest impact areas. Deployments are tested on a single system and closely monitored
so impacts can be evaluated. Service owners have a number of configurable metrics
that measure the health of the service’s upstream dependencies. These metrics are closely monitored with thresholds and alarming in place. Rollback procedures are documented in the Change Management (CM) ticket.
When possible, changes are scheduled during regular change windows. Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident and are logged and approved as appropriate.
Periodically, AWS performs self-audits of changes to key services to monitor quality, maintain high standards, and facilitate continuous improvement of the change management process. Any exceptions are analyzed to determine the root cause, and appropriate actions are taken to bring the change into compliance or roll back the change if necessary. Actions are then taken to address and remediate the process or people issue.
Infrastructure
Amazon’s Corporate Applications team develops and manages software to automate IT
processes for UNIX/Linux hosts in the areas of third-party software delivery, internally
developed software, and configuration management. The Infrastructure team maintains
and operates a UNIX/Linux configuration management framework to address hardware
scalability, availability, auditing, and security management. By centrally managing hosts
through the use of automated processes that manage change, Amazon is able to
achieve its goals of high availability, repeatability, scalability, security, and disaster
recovery.
Systems and network engineers monitor the status of these automated tools on a
continuous basis, reviewing reports to respond to hosts that fail to obtain or update their
configuration and software.
AWS Account Security Features
AWS provides a variety of tools and features that you can use to keep your AWS Account and resources safe from unauthorized use. This includes credentials for access control, HTTPS endpoints for encrypted data transmission, the creation of separate IAM user accounts, user activity logging for security monitoring, and Trusted Advisor security checks. You can take advantage of all of these security tools no matter which AWS services you select.
AWS Credentials
To help ensure that only authorized users and processes access your AWS Account
and resources, AWS uses several types of credentials for authentication. These include passwords, cryptographic keys, digital signatures, and certificates. We also provide the option of requiring multi-factor authentication (MFA) to log into your AWS Account or IAM user accounts. The following table highlights the various AWS credentials and their uses.
Credential Type: Passwords
Use: AWS root account or IAM user account login to the AWS Management Console.
Description: A string of characters used to log into your AWS account or IAM account. AWS passwords must be a minimum of 6 characters and may be up to 128 characters.

Credential Type: Access Keys
Use: Digitally signed requests to AWS APIs (using the AWS SDK, CLI, or REST/Query APIs).
Description: Includes an access key ID and a secret access key. You use access keys to digitally sign programmatic requests that you make to AWS.

Credential Type: Key Pairs
Use: SSH login to EC2 instances; CloudFront signed URLs.
Description: A key pair is required to connect to an EC2 instance launched from a public AMI. The supported key lengths are 1024, 2048, and 4096 bits. If you connect using SSH while using the EC2 Instance Connect API, the supported lengths are 2048 and 4096. You can have a key pair generated automatically for you when you launch the instance or you can upload your own.

Credential Type: X.509 Certificates
Use: Digitally signed SOAP requests to AWS APIs; SSL server certificates for HTTPS.
Description: X.509 certificates are only used to sign SOAP-based requests (currently used only with Amazon S3). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.
You can download a Credential Report for your account at any time from the Security
Credentials page. This report lists all of your account’s users and the status of their
credentials—whether they use a password, whether their password expires and must
be changed regularly, the last time they changed their password, the last time they
rotated their access keys, and whether they have MFA enabled.
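The credential report is delivered as a CSV file, so it is straightforward to audit programmatically. The sketch below (the column names `user` and `mfa_active` follow the IAM credential report format; the sample rows are illustrative) flags users who have not enabled MFA:

```python
import csv
import io

# Minimal sketch: scan an IAM credential report (a CSV file) for users
# without MFA enabled. The sample report below is illustrative only.
SAMPLE_REPORT = """user,password_enabled,mfa_active,access_key_1_active
alice,true,true,false
bob,true,false,true
"""

def users_without_mfa(report_csv: str):
    """Return the user names whose mfa_active column is 'false'."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [row["user"] for row in reader if row.get("mfa_active") == "false"]

print(users_without_mfa(SAMPLE_REPORT))  # ['bob']
```

The same pattern extends to the other report columns, such as checking password or access-key age.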
For security reasons, if your credentials have been lost or forgotten, you cannot recover
them or re-download them. However, you can create new credentials and then disable
or delete the old set of credentials.
In fact, AWS recommends that you change (rotate) your access keys and certificates on
a regular basis. To help you do this without potential impact to your application’s
availability, AWS supports multiple concurrent access keys and certificates. With this
feature, you can rotate keys and certificates into and out of operation on a regular basis
without any downtime to your application. This can help to mitigate risk from lost or
compromised access keys or certificates. The AWS IAM API enables you to rotate the
access keys of your AWS Account as well as for IAM user accounts.
Passwords
Passwords are required to access your AWS Account, individual IAM user accounts,
AWS Discussion Forums, and the AWS Support Center. You specify the password
when you first create the account, and you can change it at any time by going to the
Security Credentials page. AWS passwords can be up to 128 characters long and
contain special characters, so we encourage you to create a strong password that
cannot be easily guessed.
You can set a password policy for your IAM user accounts to ensure that strong passwords are used and that they are changed often. A password policy is a set of rules that define the type of password an IAM user can set. For more information about password policies, see Managing Passwords for IAM Users.
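To illustrate the kinds of rules such a policy expresses, here is a local checker whose parameter names echo the IAM password-policy API (the specific values chosen are assumptions, not AWS defaults):

```python
import string

# Illustrative only: mirrors the kinds of rules an IAM account password
# policy can express (minimum length, required character classes).
POLICY = {
    "MinimumPasswordLength": 12,       # assumed value for this example
    "RequireUppercaseCharacters": True,
    "RequireLowercaseCharacters": True,
    "RequireNumbers": True,
    "RequireSymbols": True,
}

def satisfies_policy(password: str, policy: dict = POLICY) -> bool:
    """Check a candidate password against the policy rules above."""
    checks = [len(password) >= policy["MinimumPasswordLength"]]
    if policy["RequireUppercaseCharacters"]:
        checks.append(any(c.isupper() for c in password))
    if policy["RequireLowercaseCharacters"]:
        checks.append(any(c.islower() for c in password))
    if policy["RequireNumbers"]:
        checks.append(any(c.isdigit() for c in password))
    if policy["RequireSymbols"]:
        checks.append(any(c in string.punctuation for c in password))
    return all(checks)

print(satisfies_policy("Str0ng!Passw0rd"))  # True
print(satisfies_policy("weakpass"))         # False
```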
AWS Multi-Factor Authentication (AWS MFA)
AWS Multi-Factor Authentication (MFA) is an additional layer of security for accessing
AWS services. When you enable this optional feature, you must provide a six-digit single-use code in addition to your standard user name and password credentials before access is granted to your AWS Account settings or AWS services and resources. You get this single-use code from an authentication device that you keep in your physical possession. This is called multi-factor authentication because more than one authentication factor is checked before access is granted: a password (something you know) and the precise code from your authentication device (something you have). You can enable MFA devices for your AWS Account as well as for the users you have created under your AWS Account with AWS IAM. In addition, you can add MFA protection for access across AWS Accounts, for when you want to allow a user you’ve created under one AWS Account to use an IAM role to access resources under another AWS Account. You can require the user to use MFA before assuming the role as an additional layer of security.
AWS MFA supports the use of both hardware tokens and virtual MFA devices. Virtual
MFA devices use the same protocols as the physical MFA devices, but can run on any
mobile hardware device, including a smartphone. A virtual MFA device uses a software
application that generates six-digit authentication codes that are compatible with the
Time-Based One-Time Password (TOTP) standard, as described in RFC 6238. Most
virtual MFA applications allow you to host more than one virtual MFA device, which
makes them more convenient than hardware MFA devices. However, you should be
aware that because a virtual MFA might be run on a less secure device such as a
smartphone, a virtual MFA might not provide the same level of security as a hardware
MFA device.
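The TOTP algorithm that virtual MFA devices implement is compact enough to sketch directly. The following is a minimal RFC 6238 implementation (HMAC-SHA-1 over a 30-second time counter), checked against the RFC's published test vector:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA-1 over a 30-second time-step counter."""
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(for_time) // step)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59 s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # 287082
```

Because the code depends only on the shared secret and the current time, the server and the device can compute it independently without any network exchange.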
You can also enforce MFA authentication for AWS service APIs in order to provide an
extra layer of protection over powerful or privileged actions such as terminating Amazon
EC2 instances or reading sensitive data stored in Amazon S3. You do this by adding an
MFA-authentication requirement to an IAM access policy. You can attach these access
policies to IAM users, IAM groups, or resources that support Access Control Lists
(ACLs) like Amazon S3 buckets, SQS queues, and SNS topics.
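As a sketch of what such an MFA-authentication requirement looks like, the following builds an IAM-style policy statement that denies a sensitive S3 action unless the request was MFA-authenticated. The `aws:MultiFactorAuthPresent` condition key is the documented IAM key; the bucket name is a placeholder:

```python
import json

# Sketch of a policy statement requiring MFA for a privileged action.
# BoolIfExists handles requests where the MFA key is absent entirely
# (for example, requests signed with long-term access keys).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "s3:DeleteObject",
        "Resource": "arn:aws:s3:::example-bucket/*",  # placeholder bucket
        "Condition": {
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}

print(json.dumps(policy, indent=2))
```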
It is easy to obtain hardware tokens from a participating third-party provider, or virtual MFA applications from an app store, and to set them up for use via the AWS website. More information is available at AWS Multi-Factor Authentication (MFA).
Access Keys
AWS requires that all API requests be signed—that is, they must include a digital
signature that AWS can use to verify the identity of the requestor. You calculate the digital signature using a cryptographic hash function. The input to the hash function in this case includes the text of your request and your secret access key. If you use any of the AWS SDKs to generate requests, the digital signature calculation is done for you; otherwise, you can have your application calculate it and include it in your REST or Query requests by following the directions in Making Requests Using the AWS SDKs.
Not only does the signing process help protect message integrity by preventing
tampering with the request while it is in transit, it also helps protect against potential
replay attacks. A request must reach AWS within 15 minutes of the time stamp in the
request. Otherwise, AWS denies the request.
The most recent version of the digital signature calculation process is Signature Version
4, which calculates the signature using the HMAC-SHA256 protocol. Version 4 provides
an additional measure of protection over previous versions by requiring that you sign
the message using a key that is derived from your secret access key rather than using
the secret access key itself. In addition, you derive the signing key based on credential
scope, which facilitates cryptographic isolation of the signing key.
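The derivation chain described above can be sketched with nothing more than the standard library. This follows the documented Signature Version 4 derivation (the secret key below is an illustrative placeholder):

```python
import hashlib
import hmac

# Signature Version 4 signing-key derivation: a chain of HMAC-SHA256
# operations scoped to date, region, and service, so requests are never
# signed with the long-term secret key itself.
def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date_stamp: str,
                       region: str, service: str) -> bytes:
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Credential scope isolates keys: a key derived for one region is useless
# in another. (The secret below is a placeholder, not a real credential.)
k1 = derive_signing_key("wJalrXUtnFEMI/EXAMPLEKEY", "20230501", "us-east-1", "s3")
k2 = derive_signing_key("wJalrXUtnFEMI/EXAMPLEKEY", "20230501", "eu-west-1", "s3")
print(len(k1), k1 != k2)  # 32 True
```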
Because access keys can be misused if they fall into the wrong hands, we encourage
you to save them in a safe place and not embed them in your code. For customers with
large fleets of elastically scaling EC2 instances, the use of IAM roles can be a more
secure and convenient way to manage the distribution of access keys. IAM roles
provide temporary credentials, which not only get automatically loaded to the target
instance, but are also automatically rotated multiple times a day.
Key Pairs
Amazon EC2 instances created from a public AMI use a public/private key pair rather than a password for signing in via Secure Shell (SSH). The public key is embedded in your instance, and you use the private key to sign in securely without a password. After you create your own AMIs, you can choose other mechanisms to securely log in to your new instances.
You can have a key pair generated automatically for you when you launch the instance or you can upload your own. Save the private key in a safe place on your system, and record the location where you saved it.
For Amazon CloudFront, you use key pairs to create signed URLs for private content, such as when you want to distribute restricted content that someone paid for. You create Amazon CloudFront key pairs by using the Security Credentials page. CloudFront key pairs can be created only by the root account and cannot be created by IAM users.
X.509 Certificates
X.509 certificates are used to sign SOAP-based requests. X.509 certificates contain a public key and additional metadata (like an expiration date that AWS verifies when you upload the certificate), and are associated with a private key. When you create a request, you create a digital signature with your private key and then include that signature in the request, along with your certificate. AWS verifies that you're the sender by decrypting the signature with the public key that is in your certificate. AWS also verifies that the certificate you sent matches the certificate that you uploaded to AWS.
For your AWS Account, you can have AWS create an X.509 certificate and private key
that you can download, or you can upload your own certificate by using the Security
Credentials page. For IAM users, you must create the X.509 certificate (signing
certificate) by using third-party software. In contrast with root account credentials, AWS
cannot create an X.509 certificate for IAM users. After you create the certificate, you
attach it to an IAM user by using IAM.
To use an SSL/TLS server certificate for HTTPS, you first create a private key. You’ll need the private key to create the Certificate Signing Request (CSR)
that you submit to a certificate authority (CA) to obtain the server certificate. You’ll then
use the AWS CLI to upload the certificate, private key, and certificate chain to IAM.
You’ll also need an X.509 certificate to create a customized Linux AMI for EC2
instances. The certificate is only required to create an instance-backed AMI (as
opposed to an EBS-backed AMI). You can have AWS create an X.509 certificate and
private key that you can download, or you can upload your own certificate by using the
Security Credentials page.
Individual User Accounts
AWS provides a centralized mechanism called AWS Identity and Access Management (IAM) for creating and managing individual users within your AWS Account. A user can be any individual, system, or application that interacts with AWS resources, either programmatically or through the AWS Management Console or AWS Command Line Interface (CLI). Each user has a unique name within the AWS Account, and a unique set of security credentials not shared with other users. AWS IAM eliminates the need to share passwords or keys, and enables you to minimize the use of your AWS Account credentials.
With IAM, you define policies that control which AWS services your users can access and what they can do with them. You can grant users only the minimum permissions they need to perform their jobs. See the AWS Identity and Access Management (AWS IAM) section for more information.
Secure HTTPS Access Points
Several services also now offer more advanced cipher suites that use the Elliptic Curve
Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL/TLS clients to provide
Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored
anywhere. This helps prevent the decoding of captured data by unauthorized third
parties, even if the secret long-term key itself is compromised.
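On the client side, you can opt into such suites explicitly. A sketch using Python's standard `ssl` module (the cipher string shown is one reasonable choice, not an AWS recommendation):

```python
import ssl

# Illustrative client-side TLS configuration that restricts the TLS 1.2
# cipher list to ECDHE suites, so sessions use ephemeral keys (forward
# secrecy): captured traffic cannot be decrypted later even if the
# server's long-term private key is compromised.
ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE+AESGCM")

ecdhe_suites = [c["name"] for c in ctx.get_ciphers() if "ECDHE" in c["name"]]
print(len(ecdhe_suites) > 0)  # True
```

A context configured this way can then be passed to `http.client` or `urllib` when opening HTTPS connections.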
Security Logs
As important as credentials and encrypted endpoints are for preventing security
problems, logs are just as crucial for understanding events after a problem has
occurred. And to be effective as a security tool, a log must include not just a list of what
happened and when, but also identify the source. To help you with your after-the-fact
investigations and near-real time intrusion detection, AWS CloudTrail provides a log of
events within your account. For each event, you can see what service was accessed,
what action was performed, and who made the request.
CloudTrail captures API calls, as well as other things such as console sign-in events. Once you have enabled CloudTrail, event logs are delivered about every 5 minutes. You can configure CloudTrail so that it aggregates log files from multiple regions and/or accounts into a single Amazon S3 bucket. By default, a single trail will record and deliver events in all current and future regions. In addition to S3, you can send events to CloudWatch Logs for custom metrics and alarming, or you can upload the logs to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns. For rapid response, you can create CloudWatch Events rules to take timely action on specific events. By default, log files are stored securely in Amazon S3, but you can also archive them to Amazon S3 Glacier to help meet audit and compliance requirements.
In addition to CloudTrail’s user activity logs, you can use the Amazon CloudWatch Logs feature to collect and monitor system, application, and custom log files from your EC2
instances and other sources in near-real time. For example, you can monitor your web
server's log files for invalid user messages to detect unauthorized login attempts to your
guest OS.
AWS Trusted Advisor Tool
The AWS Trusted Advisor service provides four checks at no additional charge to all
users, including three important security checks: specific ports unrestricted, IAM use,
and MFA on root account. When you sign up for Business- or Enterprise-level AWS
Support, you receive full access to all Trusted Advisor checks.
AWS Config Security Checks
AWS Config is a continuous monitoring and assessment service that records changes to the configuration of your AWS resources. You can view the current and historic configurations of a resource and use this information to troubleshoot outages, conduct security attack analysis, and much more. You can view the configuration at any point in time and use that information to re-configure your resources and bring them into a steady state during an outage situation.
Using AWS Config Rules, you can run continuous assessment checks on your resources to verify that they comply with your own security policies, industry best practices, and compliance regimes such as PCI/HIPAA. For example, AWS Config provides a managed AWS Config Rule to ensure that encryption is turned on for all EBS volumes in your account. You can also write a custom AWS Config Rule to essentially “codify” your own corporate security policies. AWS Config alerts you in real time when a resource is misconfigured, or when a resource violates a particular security policy.
Compute Services
Amazon Web Services provides a variety of cloud-based computing services that
include a wide selection of compute instances that can scale up and down automatically
to meet the needs of your application or enterprise.
Amazon Elastic Compute Cloud (Amazon EC2) Security
Security within Amazon EC2 is provided on multiple levels: the operating system (OS)
of the host platform, the virtual instance OS or guest OS, a firewall, and signed API
calls. Each of these items builds on the capabilities of the others. The goal is to prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users and to provide Amazon EC2 instances themselves that are as secure as possible without sacrificing the flexibility in configuration that customers demand.
Hypervisor
Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor, taking advantage of paravirtualization (in the case of Linux guests). Because paravirtualized guests rely on the hypervisor to provide support for operations that normally require privileged access, the guest OS has no elevated access to the CPU. The CPU provides four separate privilege modes, 0-3, called rings. Ring 0 is the most privileged and 3 the least. The host OS executes in Ring 0. However, rather than executing in Ring 0 as most operating systems do, the guest OS runs in a lesser-privileged Ring 1 and applications in the least-privileged Ring 3. This explicit virtualization of the physical resources leads to a clear separation between guest and hypervisor, resulting in additional security separation between the two.
Traditionally, hypervisors protect the physical hardware and BIOS, virtualize the CPU, storage, and networking, and provide a rich set of management capabilities. With the Nitro
System, we are able to break apart those functions, offload them to dedicated hardware
and software, and reduce costs by delivering all of the resources of a server to your
instances.
The Nitro Hypervisor provides consistent performance and increased compute and
memory resources for EC2 virtualized instances by removing host system software
components. It allows AWS to offer larger instance sizes (like c5.18xlarge) that provide
practically all of the resources from the server to customers. Previously, C3 and C4
instances each eliminated software components by moving VPC and EBS functionality
to hardware designed and built by AWS. This hardware enables the Nitro Hypervisor to
be very small and uninvolved in data processing tasks for networking and storage.
Nevertheless, as AWS expands its global cloud infrastructure, Amazon EC2’s use of its
Xen-based hypervisor will also continue to grow. Xen will remain a core component of
EC2 instances for the foreseeable future.
Instance Isolation
Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. Amazon is active in the Xen community, which provides awareness of the latest developments. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer, thus an instance's neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms.
Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer's data is never unintentionally exposed to another. In addition, memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is unallocated from a guest. The memory is not returned to the pool of free memory available for new allocations until the memory scrubbing is complete.
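The scrub-before-reuse guarantee can be illustrated with a toy allocator. This is purely illustrative (not AWS's implementation): the point is that a freed block is zeroed before it ever rejoins the free pool.

```python
class ScrubbingAllocator:
    """Toy allocator: blocks are zeroed on free, *before* rejoining the pool."""

    def __init__(self, block_size=8, blocks=4):
        self.block_size = block_size
        self.free_pool = [bytearray(block_size) for _ in range(blocks)]

    def allocate(self):
        return self.free_pool.pop()

    def free(self, block):
        block[:] = bytes(self.block_size)  # scrub: overwrite with zeros
        self.free_pool.append(block)       # only now is the block reusable

alloc = ScrubbingAllocator()
buf = alloc.allocate()
buf[:5] = b"guest"        # first tenant writes data
alloc.free(buf)           # scrubbed before returning to the pool
fresh = alloc.allocate()  # next tenant sees only zeros
```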
AWS recommends customers further protect their data using appropriate means. One
common solution is to run an encrypted file system on top of the virtualized disk device.
Figure 2: Amazon EC2 multiple layers of security
Host Operating System: Administrators with a business need to access the management plane are required to use multi-factor authentication to gain access to
purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems can be revoked.
Guest Operating System: Virtual instances are completely controlled by you, the
customer. You have full root access or administrative control over accounts, services,
and applications. AWS does not have any access rights to your instances or the guest
OS. AWS recommends a base set of security best practices to include disabling
password-only access to your guests, and utilizing some form of multi-factor
authentication to gain access to your instances (or at a minimum certificate-based SSH
Version 2 access). Additionally, you should employ a privilege escalation mechanism
with logging on a per-user basis. For example, if the guest OS is Linux, after hardening
your instance you should utilize certificate-based SSHv2 to access the virtual instance,
disable remote root login, use command-line logging, and use ‘sudo’ for privilege
escalation.
You should generate your own key pairs in order to guarantee that they are unique, and
not shared with other customers or with AWS.
AWS also supports the use of the Secure Shell (SSH) network protocol to enable you to
log in securely to your UNIX/Linux EC2 instances. Authentication for SSH used with
AWS is via a public/private key pair to reduce the risk of unauthorized access to your
instance. You can also connect remotely to your Windows instances using Remote
Desktop Protocol (RDP) by utilizing an RDP certificate generated for your instance.
You also control the updating and patching of your guest OS, including security
updates. Amazon-provided Windows and Linux-based AMIs are updated regularly with
the latest patches, so if you do not need to preserve data or customizations on your
running Amazon AMI instances, you can simply relaunch new instances with the latest updated AMI. In addition, updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories.
Firewall: Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by protocol, by service port, and by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).
The firewall can be configured in groups, permitting different classes of instances to have different rules. Consider, for example, the case of a traditional three-tiered web application. The group for the web servers would have port 80 (HTTP) and/or port 443 (HTTPS) open to the Internet. The group for the application servers would have port 8000 (application specific) accessible only to the web server group. The group for the database servers would have port 3306 (MySQL) open only to the application server group. All three groups would permit administrative access on port 22 (SSH), but only from the customer's corporate network. Highly secure applications can be deployed using this expressive mechanism. See the following figure.
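The three-tier rule set above can be sketched as data, with a lookup that mirrors the default-deny behavior. The group and network names below ("web", "app", "db", "corp-net") are hypothetical labels for illustration, not real AWS identifiers:

```python
# Hypothetical model of the three-tier security-group rules described above.
# Default-deny: inbound traffic is allowed only if an explicit rule matches.
RULES = {
    "web": [("0.0.0.0/0", 80), ("0.0.0.0/0", 443), ("corp-net", 22)],
    "app": [("web", 8000), ("corp-net", 22)],
    "db":  [("app", 3306), ("corp-net", 22)],
}

def is_allowed(dest_group, source, port):
    """True only when an explicit inbound rule on dest_group matches."""
    return (source, port) in RULES.get(dest_group, [])
```

With this model, the Internet can reach the web tier on 443, only the app tier can reach MySQL on the db tier, and SSH is reachable solely from the corporate network.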
Figure 3: Amazon EC2 security group firewall
The firewall isn't controlled through the guest OS; rather, it requires your X.509 certificate and key to authorize changes, thus adding an extra layer of security. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling you to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports you open, and for what duration and purpose. The default state is to deny all incoming traffic, and you should plan carefully what you will open when building and securing your applications. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as iptables or the Windows Firewall, and VPNs. These can restrict both inbound and outbound traffic.
API Access: API calls to launch and terminate instances, change firewall parameters,
and perform other functions are all signed by your Amazon Secret Access Key, which
could be either the AWS Accounts Secret Access Key or the Secret Access key of a
user created with AWS IAM. Without access to your Secret Access Key, Amazon EC2
API calls cannot be made on your behalf. In addition, API calls can be encrypted with
SSL to maintain confidentiality. Amazon recommends always using SSL-protected API
endpoints.
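The signing step can be illustrated with Python's standard hmac module. This is a simplified sketch of HMAC-based request signing, not the full AWS Signature Version 4 protocol, and the key and request strings below are made up:

```python
import hashlib
import hmac

def sign_request(secret_key: str, request_string: str) -> str:
    """Simplified sketch: HMAC-SHA256 over the canonical request text."""
    mac = hmac.new(secret_key.encode(), request_string.encode(), hashlib.sha256)
    return mac.hexdigest()

# The caller attaches the signature to the request; the service, which holds
# its own copy of the secret key, recomputes the HMAC and compares.
signature = sign_request("EXAMPLE-SECRET-KEY",
                         "Action=RunInstances&Version=2016-11-15")
```

Because only holders of the secret key can produce a matching signature, a request cannot be forged or altered in transit without detection.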
Permissions: AWS IAM also enables you to further control what APIs a user has
permissions to call.
Elastic Block Store (Amazon EBS) Security
Storage volumes behave like raw, unformatted block devices, with user supplied device
names and a block device interface. You can create a file system on top of Amazon
EBS volumes, or use them in any other way you would use a block device (like a hard
drive). Amazon EBS volume access is restricted to the AWS Account that created the
volume, and to the users under the AWS Account created with AWS IAM if the user has
been granted access to the EBS operations, thus denying all other AWS Accounts and
users the permission to view or access the volume.
Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of normal operation of those services and at no additional charge. However, Amazon EBS replication is stored within the same availability zone, not across multiple zones; therefore, it is highly recommended that you conduct regular snapshots to Amazon S3 for long-term data durability. For customers who have architected complex transactional databases using EBS, it is recommended that backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed. AWS does not perform backups of data that are maintained on virtual disks attached to running instances on Amazon EC2.
You can make Amazon EBS volume snapshots publicly available to other AWS Accounts to use as the basis for creating their own volumes. Sharing Amazon EBS volume snapshots does not provide other AWS Accounts with the permission to alter or delete the original snapshot, as that right is explicitly reserved for the AWS Account that created the volume. An EBS snapshot is a block-level view of an entire EBS volume. Note that data that is not visible through the file system on the volume, such as files that have been deleted, may be present in the EBS snapshot. If you want to create shared snapshots, you should do so carefully. If a volume has held sensitive data or has had files deleted from it, a new EBS volume should be created. The data to be contained in the shared snapshot should be copied to the new volume, and the snapshot created from the new volume.
Amazon EBS volumes are presented to you as raw unformatted block devices that have
been wiped prior to being made available for use. Wiping occurs immediately before
reuse so that you can be assured that the wipe process completed. If you have
procedures requiring that all data be wiped via a specific method, such as those
detailed in NIST 800-88 (“Guidelines for Media Sanitization”), you have the ability to do
so on Amazon EBS. You should conduct a specialized wipe procedure prior to deleting
the volume for compliance with your established requirements.
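A single-pass zero overwrite in the spirit of NIST 800-88 "Clear" can be sketched as follows. For illustration it operates on an ordinary temporary file standing in for a volume; wiping a real block device requires raw-device access and considerably more care:

```python
import os
import tempfile

def overwrite_with_zeros(path: str, chunk_size: int = 4096) -> None:
    """Single-pass zero overwrite (a NIST 800-88 'Clear'-style sketch)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk_size, remaining)
            f.write(b"\x00" * n)   # overwrite in place
            remaining -= n
        f.flush()
        os.fsync(f.fileno())       # force the zeros to stable storage

# Demonstration on a throwaway temp file:
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive customer data")
overwrite_with_zeros(path)
with open(path, "rb") as f:
    wiped = f.read()
os.remove(path)
```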
Encryption of sensitive data is generally a good security practice, and AWS provides the
ability to encrypt EBS volumes and their snapshots with AES-256. The encryption
occurs on the servers that host the EC2 instances, providing encryption of data as it
moves between EC2 instances and EBS storage. In order to be able to do this
efficiently and with low latency, the EBS encryption feature is only available on EC2's
more powerful instance types (e.g., M3, C3, R3, G2).
Auto Scaling Security
Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define, so that the number of Amazon EC2 instances you are using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs.
Like all AWS services, Auto Scaling requires that every request made to its control API be authenticated, so only authenticated users can access and manage Auto Scaling. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. However, getting credentials out to new EC2 instances launched with Auto Scaling can be challenging for large or elastically scaling fleets. To simplify this process, you can use roles within IAM, so that any new instances launched with a role are given credentials automatically. When you launch an EC2 instance with an IAM role, temporary AWS security credentials with permissions specified by the role are securely provisioned to the instance and are made available to your application via the Amazon EC2 Instance Metadata Service. The Metadata Service makes new temporary security credentials available prior to the expiration of the current active credentials, so that valid credentials are always available on the instance. In addition, the temporary security credentials are automatically rotated multiple times per day, providing enhanced security.
You can further control access to Auto Scaling by creating users under your AWS
Account using AWS IAM, and controlling what Auto Scaling APIs these users have
permission to call. For more information about using roles when launching instances,
see Identity and Access Management for Amazon EC2.
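The refresh-before-expiry behavior can be sketched by parsing the kind of credential document the Instance Metadata Service returns for an attached role. The field names below follow the documented shape of that document, but the values are fabricated for illustration:

```python
import json
from datetime import datetime, timedelta, timezone

# Fabricated example of the credential document shape returned by the
# Instance Metadata Service for an attached IAM role.
DOC = json.dumps({
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "example-secret",
    "Token": "example-session-token",
    "Expiration": "2030-01-01T12:00:00Z",
})

def needs_refresh(credential_doc: str, now: datetime,
                  margin_minutes: int = 10) -> bool:
    """Refresh ahead of expiry so valid credentials are always on hand."""
    expiry = datetime.strptime(
        json.loads(credential_doc)["Expiration"], "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    return now >= expiry - timedelta(minutes=margin_minutes)
```

An application polling the metadata path would call needs_refresh on each check and re-fetch credentials once inside the safety margin, matching the always-valid-credentials behavior described above.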
Networking Services
Amazon Web Services provides a range of networking services that enable you to
create a logically isolated network that you define, establish a private network
connection to the AWS cloud, use a highly available and scalable DNS service and
deliver content to your end users with low latency at high data transfer speeds with a
content delivery web service.
Elastic Load Balancing Security
Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all availability zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits:
• Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer
• Offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network
• When used in an Amazon VPC, supports creation and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options
• Supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections. When TLS is used, the TLS server certificate used to terminate client connections can be managed centrally on the load balancer, rather than on every individual instance.
HTTPS/TLS uses a long-term secret key to generate a short-term session key to be
used between the server and the browser to create the ciphered (encrypted) message.
Elastic Load Balancing configures your load balancer with a pre-defined cipher set that
is used for TLS negotiation when a connection is established between a client and your
load balancer. The pre-defined cipher set provides compatibility with a broad range of
clients and uses strong cryptographic algorithms. However, some customers may have compliance requirements (such as PCI or SOX) that permit only specific ciphers and protocols from clients, to ensure that standards are met. In these cases, Elastic Load Balancing
provides options for selecting different configurations for TLS protocols and ciphers.
You can choose to enable or disable the ciphers depending on your specific
requirements.
To help ensure the use of newer and stronger cipher suites when establishing a secure
connection, you can configure the load balancer to have the final say in the cipher suite
selection during the client-server negotiation. When the Server Order Preference option
is selected, the load balancer selects a cipher suite based on the server’s prioritization
of cipher suites rather than the client’s. This gives you more control over the level of
security that clients use to connect to your load balancer.
For even greater communication privacy, Elastic Load Balancing allows the use of
Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored
anywhere. This prevents the decoding of captured data, even if the secret long-term key
itself is compromised.
Elastic Load Balancing allows you to identify the originating IP address of a client connecting to your servers, whether you're using HTTPS or TCP load balancing. Typically, client connection information, such as IP address and port, is lost when requests are proxied through a load balancer. This is because the load balancer sends requests to the server on behalf of the client, making your load balancer appear as though it is the requesting client. Having the originating client IP address is useful if you need more information about visitors to your applications in order to gather connection statistics, analyze traffic logs, or manage whitelists of IP addresses.
Elastic Load Balancing access logs contain information about each HTTP and TCP request processed by your load balancer. This includes the IP address and port of the requesting client, the backend IP address of the instance that processed the request, the size of the request and response, and the actual request line from the client (for example, GET http://www.example.com:80/HTTP/1.1). All requests sent to the load balancer are logged, including requests that never made it to backend instances.
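Pulling the client address, port, and request line out of such an entry is straightforward. The sample line below is fabricated but follows the classic ELB access-log field order (timestamp, load balancer name, client:port, backend:port, three timings, two status codes, byte counts, quoted request):

```python
import shlex

# Fabricated sample entry following the classic ELB access-log field order.
LINE = ('2015-05-13T23:39:43.945958Z my-loadbalancer 192.168.131.39:2817 '
        '10.0.0.1:80 0.000086 0.001048 0.001337 200 200 0 57 '
        '"GET https://www.example.com:443/ HTTP/1.1"')

def parse_elb_entry(line: str) -> dict:
    """Split one access-log entry; shlex keeps the quoted request line whole."""
    fields = shlex.split(line)
    client_ip, client_port = fields[2].rsplit(":", 1)
    return {
        "timestamp": fields[0],
        "client_ip": client_ip,
        "client_port": int(client_port),
        "backend": fields[3],
        "elb_status_code": int(fields[7]),
        "request": fields[11],
    }

entry = parse_elb_entry(LINE)
```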
Amazon Virtual Private Cloud (Amazon VPC) Security
Normally, each Amazon EC2 instance that you launch is randomly assigned a public IP
address in the Amazon EC2 address space. Amazon VPC enables you to create an
isolated portion of the AWS cloud and launch Amazon EC2 instances that have private
(RFC 1918) addresses in the range of your choice (e.g., 10.0.0.0/16). You can define
subnets within your VPC, grouping similar kinds of instances based on IP address
range, and then set up routing and security to control the flow of traffic in and out of the
instances and subnets.
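The CIDR arithmetic behind this can be sketched with Python's standard ipaddress module: carve the example 10.0.0.0/16 range into /24 subnets and confirm the addresses fall in RFC 1918 private space. The public/private subnet roles assigned below are illustrative:

```python
import ipaddress

# Carve a 10.0.0.0/16 VPC range (RFC 1918 private space) into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

public_subnet = subnets[0]   # e.g. for Internet-facing instances
private_subnet = subnets[1]  # e.g. for backend instances
```

Each /24 holds a distinct slice of the VPC range, so routing and security rules can then be attached per subnet, as described above.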
AWS offers a variety of VPC architecture templates with configurations that provide
varying levels of public access:
• VPC with a single public subnet only. Your instances run in a private, isolated
section of the AWS cloud with direct access to the Internet. Network ACLs and
security groups can be used to provide strict control over inbound and outbound
network traffic to your instances.
• VPC with public and private subnets. In addition to containing a public subnet,
this configuration adds a private subnet whose instances are not addressable
from the Internet. Instances in the private subnet can establish outbound
connections to the Internet via the public subnet using Network Address
Translation (NAT).
• VPC with public and private subnets and hardware VPN access. This
configuration adds an IPsec VPN connection between your Amazon VPC and
your data center, effectively extending your data center to the cloud while also
providing direct access to the Internet for public subnet instances in your Amazon VPC. In this configuration, customers add a VPN appliance on their corporate data center side.
• VPC with private subnet only and hardware VPN access. Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet. You can connect this private subnet to your corporate data center via an IPsec VPN tunnel.
You can also connect two VPCs using private IP addresses, which allows instances in the two VPCs to communicate with each other as if they were within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region.
Security features within Amazon VPC include security groups, network ACLs, routing tables, and external gateways. Each of these is complementary in providing a secure, isolated network that can be extended through selective enabling of direct Internet access or private connectivity to another network.
Amazon EC2 instances running within an Amazon VPC inherit all of the benefits described above related to the guest OS and protection against packet sniffing.
Note, however, that you must create VPC security groups specifically for your Amazon
VPC; any Amazon EC2 security groups you have created will not work inside your
Amazon VPC. Also, Amazon VPC security groups have additional capabilities that
Amazon EC2 security groups do not have, such as being able to change the security
group after the instance is launched and being able to specify any protocol with a
standard protocol number (as opposed to just TCP, UDP, or ICMP).
Each Amazon VPC is a distinct, isolated network within the cloud; network traffic within
each Amazon VPC is isolated from all other Amazon VPCs. At creation time, you select
an IP address range for each Amazon VPC. You may create and attach an Internet gateway, virtual private gateway, or both to establish external connectivity, subject to the controls below.
API Access: Calls to create and delete Amazon VPCs, change routing, security group, and network ACL parameters, and perform other functions are all signed by your Amazon Secret Access Key, which could be either the AWS Account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to your Secret Access Key, Amazon VPC API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints. AWS IAM also enables a customer to further control what APIs a newly created user has permissions to call.
Subnets and Route Tables: You create one or more subnets within each Amazon VPC; each instance launched in the Amazon VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked.
Each subnet in an Amazon VPC is associated with a routing table, and all network traffic leaving the subnet is processed by the routing table to determine the destination.
Firewall (Security Groups): Like Amazon EC2, Amazon VPC supports a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, and by source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).
The firewall isn’t controlled through the guest OS; rather, it can be modified only through
the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access
to different administrative functions on the instances and the firewall, therefore enabling
you to implement additional security through separation of duties. The level of security
afforded by the firewall is a function of which ports you open, and for what duration and
purpose. Well-informed traffic management and security design are still required on a
per-instance basis.
AWS further encourages you to apply additional per-instance filters with host-based firewalls such as iptables or the Windows Firewall.
Figure 4: Amazon VPC network architecture
Network Access Control Lists: To add a further layer of security within Amazon VPC,
you can configure network ACLs. These are stateless traffic filters that apply to all traffic
inbound or outbound from a subnet within Amazon VPC. These ACLs can contain ordered rules to allow or deny traffic based on IP protocol, service port, and source/destination IP address.
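The stateless, ordered, first-match semantics can be sketched as follows. The rules and rule numbers below are hypothetical (the numbering style merely mimics how ACL rules are commonly displayed), and anything unmatched falls through to an implicit deny:

```python
import ipaddress

# Hypothetical ordered network-ACL rules: the lowest-numbered matching rule
# wins, and the implicit final rule denies everything.
ACL = [
    (100, "allow", "tcp", 443, "0.0.0.0/0"),
    (200, "deny",  "tcp", 22,  "0.0.0.0/0"),
    (300, "allow", "tcp", 22,  "203.0.113.0/24"),  # never reached: 200 matches first
]

def evaluate(acl, protocol, port, source_ip):
    """Stateless, ordered evaluation: first matching rule decides."""
    for _, action, proto, rule_port, cidr in sorted(acl):
        if (proto == protocol and rule_port == port
                and ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr)):
            return action == "allow"
    return False  # implicit deny-all
```

Note how rule ordering matters: the broad deny at 200 shadows the narrower allow at 300, something a stateful security group (where rules are all "allow" and order-free) cannot express.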
Like security groups, network ACLs are managed through Amazon VPC APIs, adding
an additional layer of protection and enabling additional security through separation of
duties. The diagram below depicts how the security controls above interrelate to enable
flexible network topologies while providing complete control over network traffic flows.
Figure 5: Flexible network topologies
Virtual Private Gateway: A virtual private gateway enables private connectivity between the Amazon VPC and another network. Network traffic within each virtual private gateway is isolated from network traffic within all other virtual private gateways.
You can establish VPN connections to the virtual private gateway from gateway devices
at your premises. Each connection is secured by a pre-shared key in conjunction with
the IP address of the customer gateway device.
This access can only be modified through the invocation of Amazon VPC APIs. AWS
supports the ability to grant granular access to different administrative functions on the
instances and the Internet gateway, therefore enabling you to implement additional
security through separation of duties. You can use a network address translation (NAT)
gateway to enable instances in a private subnet to connect to the internet or other AWS
services, but prevent the internet from initiating a connection with those instances.
Dedicated Instances: Within a VPC, you can launch Amazon EC2 instances that are
physically isolated at the host hardware level (i.e., they will run on single-tenant
hardware). An Amazon VPC can be created with ‘dedicated’ tenancy, so that all
instances launched into the Amazon VPC use this feature. Alternatively, an Amazon
VPC may be created with ‘default’ tenancy, but you can specify dedicated tenancy for
particular instances launched into it.
Elastic Network Interfaces: Each Amazon EC2 instance has a default network interface that is assigned a private IP address on your Amazon VPC network.
You can create and attach an additional network interface, known as an elastic network interface, to any Amazon EC2 instance in your Amazon VPC, for a total of two network interfaces per instance. Attaching more than one network interface to an instance is useful when you want to create a management network, use network and security appliances in your Amazon VPC, or create dual-homed instances with workloads/roles on distinct subnets. A network interface's attributes, including the private IP address, elastic IP addresses, and MAC address, follow the network interface as it is detached from one instance and reattached to another. For more information about Amazon VPC, see Amazon Virtual Private Cloud.
If you launch instances in a Region where you did not have instances before AWS
launched the new EC2-VPC feature (also called Default VPC), all instances are
automatically provisioned in a ready-to-use default VPC. You can choose to create
additional VPCs, or you can create VPCs for instances in regions where you already
had instances before we launched EC2-VPC.
If you create a VPC later, using regular VPC, you specify a CIDR block, create subnets,
enter the routing and security for those subnets, and provision an Internet gateway or
NAT instance if you want one of your subnets to be able to reach the Internet. When
you launch EC2 instances into an EC2-VPC, most of this work is automatically
performed for you.
When you launch an instance into a default VPC using EC2-VPC, we do the following to
set it up for you:
• Create a main route table for your default VPC with a rule that sends all traffic
destined for the Internet to the Internet gateway
• Create a default security group and associate it with your default VPC
• Create a default network access control list (ACL) and associate it with your
default VPC
• Associate the default DHCP options set for your AWS account with your default VPC
In addition to the default VPC having its own private IP range, EC2 instances launched in a default VPC can also receive a public IP.
The following table summarizes the differences between instances launched into EC2-Classic, instances launched into a default VPC, and instances launched into a non-default VPC.
Table 2: Differences between EC2-Classic, EC2-VPC (default VPC), and regular VPC instances

Public IP address
• EC2-Classic: Your instance receives a public IP address.
• EC2-VPC (default VPC): Your instance receives a public IP address by default, unless you specify otherwise during launch.
• Regular VPC: Your instance does not receive a public IP address by default, unless you specify otherwise during launch.

Multiple private IP addresses
• EC2-Classic: We select a single IP address for your instance; multiple IP addresses are not supported.
• EC2-VPC (default VPC): You can assign multiple private IP addresses to your instance.
• Regular VPC: You can assign multiple private IP addresses to your instance.

DNS hostnames
• EC2-Classic: DNS hostnames are enabled by default.
• EC2-VPC (default VPC): DNS hostnames are enabled by default.
• Regular VPC: DNS hostnames are disabled by default.

Security group
• EC2-Classic: A security group can reference security groups that belong to other AWS accounts.
• EC2-VPC (default VPC): A security group can reference security groups for your VPC only.
• Regular VPC: A security group can reference security groups for your VPC only.

Security group association
• EC2-Classic: You must terminate your instance to change its security group.
• EC2-VPC (default VPC): You can change the security group of your running instance.
• Regular VPC: You can change the security group of your running instance.

Security group rules
• EC2-Classic: You can add rules for inbound traffic only.
• EC2-VPC (default VPC): You can add rules for inbound and outbound traffic.
• Regular VPC: You can add rules for inbound and outbound traffic.

Tenancy
• EC2-Classic: Your instance runs on shared hardware; you cannot run an instance on single-tenant hardware.
• EC2-VPC (default VPC): You can run your instance on shared hardware or single-tenant hardware.
• Regular VPC: You can run your instance on shared hardware or single-tenant hardware.
Note that security group rules you create for use with a VPC cannot reference a security group for EC2-Classic, and vice versa.
e
Amazon Route 53 Security
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that answers DNS queries, translating domain names into IP addresses so computers can communicate with each other. Route 53 can be used to connect user requests to infrastructure running in AWS – such as an Amazon EC2 instance or an Amazon S3 bucket – or to infrastructure outside of AWS.
Amazon Route 53 lets you manage the IP addresses (records) listed for your domain names, and it answers requests (queries) to translate specific domain names into their corresponding IP addresses. Queries for your domain are automatically routed to a nearby DNS server using anycast in order to provide the lowest latency possible. Route 53 makes it possible for you to manage traffic globally through a variety of routing types, including Latency Based Routing (LBR), Geo DNS, and Weighted Round-Robin (WRR), all of which can be combined with DNS Failover in order to help create a variety of low-latency, fault-tolerant architectures. The failover algorithms implemented by Amazon Route 53 are designed not only to route traffic to endpoints that are healthy, but also to help avoid making disaster scenarios worse due to misconfigured health checks and applications, endpoint overloads, and partition failures.
Route 53 also offers Domain Name Registration: you can purchase and manage domain names such as example.com, and Route 53 will automatically configure default DNS settings for your domains. You can buy, manage, and transfer (both in and out) domains from a wide selection of generic and country-specific top-level domains (TLDs). During the registration process, you have the option to enable privacy protection for your domain. This option hides most of your personal information from the public Whois database in order to help thwart scraping and spamming.
Amazon Route 53 is built using AWS’s highly available and reliable infrastructure. The
distributed nature of the AWS DNS servers helps ensure a consistent ability to route
your end users to your application. Route 53 also helps ensure the availability of your
website by providing health checks and DNS failover capabilities. You can easily
configure Route 53 to check the health of your website on a regular basis (even secure
web sites that are available only over SSL), and to switch to a backup site if the primary
one is unresponsive.
Like all AWS Services, Amazon Route 53 requires that every request made to its control
API be authenticated so only authenticated users can access and manage Route 53.
d
API requests are signed with an HMAC-SHA1 or HMAC- SHA256 signature calculated
from the request and the user’s AWS Secret Access key. Additionally, the Amazon
e
Route 53 control API is only accessible via SSL- encrypted endpoints. It supports both
IPv4 and IPv6 routing.
i v
You can control access to Amazon Route 53 DNS management functions by creating
users under your AWS Account using AWS IAM, and controlling which Route 53
operations these users have permission to perform.
Amazon CloudFront Security
Amazon CloudFront gives customers an easy way to distribute content to end users
with low latency and high data transfer speeds. It delivers dynamic, static, and
streaming content using a global network of edge locations. Requests for customers’
objects are automatically routed to the nearest edge location, so content is delivered
with the best possible performance. Amazon CloudFront is optimized to work with other
AWS services, like Amazon S3, Amazon EC2, Elastic Load Balancing, and Amazon
Route 53. It also works seamlessly with any non-AWS origin server that stores the
original, definitive versions of your files.
Amazon CloudFront requires every request made to its control API be authenticated so
only authorized users can create, modify, or delete their own Amazon CloudFront
distributions. Requests are signed with an HMAC-SHA1 signature calculated from the
request and the user’s private key. Additionally, the Amazon CloudFront control API is
only accessible via SSL-enabled endpoints.
If you want control over who is able to download content from Amazon CloudFront, you
can enable the service’s private content feature. This feature has two components: the
first controls how content is delivered from the Amazon CloudFront edge location to
viewers on the Internet. The second controls how the Amazon CloudFront edge
locations access objects in Amazon S3. CloudFront also supports Geo Restriction,
which restricts access to your content based on the geographic location of your viewers.
To control access to the original copies of your objects in Amazon S3, Amazon
CloudFront allows you to create one or more “Origin Access Identities” and associate
these with your distributions. When an Origin Access Identity is associated with an
Amazon CloudFront distribution, the distribution will use that identity to retrieve objects
from Amazon S3. You can then use Amazon S3’s ACL feature, which limits access to
that Origin Access Identity so the original copy of the object is not publicly readable.
To control who is able to download objects from Amazon CloudFront edge locations, the
service uses a signed-URL verification system. To use this system, you first create a
public-private key pair, and upload the public key to your account via the AWS
Management Console. Second, you configure your Amazon CloudFront distribution to
indicate which accounts you would authorize to sign requests – you can indicate up to
five AWS Accounts you trust to sign requests. Third, as you receive requests you will
create policy documents indicating the conditions under which you want Amazon
CloudFront to serve your content. These policy documents can specify the name of the
object that is requested, the date and time of the request, and the source IP (or CIDR
range) of the client making the request. You then calculate the SHA1 hash of your
policy document and sign this using your private key. Finally, you include both the
encoded policy document and the signature as query string parameters when you
reference your objects. When Amazon CloudFront receives a request, it will decode the
signature using your public key. Amazon CloudFront only serves requests that have a
valid policy document and matching signature.
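A minimal sketch of the policy-document step is shown below: it builds a custom policy naming the resource, expiry time, and source CIDR, then applies CloudFront’s URL-safe base64 substitutions (‘+’ to ‘-’, ‘=’ to ‘_’, ‘/’ to ‘~’) so the result can travel in a query string. The RSA signing of the hashed policy is deliberately omitted; the expiry timestamp and CIDR are placeholder values.

```python
import base64
import json

def build_policy(resource_url, expires_epoch, source_cidr=None):
    """Build a CloudFront custom policy document as compact JSON."""
    condition = {"DateLessThan": {"AWS:EpochTime": expires_epoch}}
    if source_cidr:
        condition["IpAddress"] = {"AWS:SourceIp": source_cidr}
    policy = {"Statement": [{"Resource": resource_url, "Condition": condition}]}
    return json.dumps(policy, separators=(",", ":"))

def cloudfront_safe_b64(data: bytes) -> str:
    """CloudFront's URL-safe base64: '+' -> '-', '=' -> '_', '/' -> '~'."""
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

policy = build_policy("https://dxxxxx.cloudfront.net/image.jpg",
                      1700000000, "203.0.113.0/24")
encoded_policy = cloudfront_safe_b64(policy.encode())
# The policy would then be SHA-1 hashed and signed with the private key; the
# signature is encoded the same way, and both become query string parameters.
```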
You can configure CloudFront distributions to allow HTTP for some objects but require
HTTPS for other objects.
Figure 6: Amazon CloudFront encrypted transmission
You can configure one or more CloudFront origins to require that CloudFront fetch
objects from your origin using the protocol that the viewer used to request the objects.
For example, when you use this CloudFront setting and the viewer uses HTTPS to
request an object from CloudFront, CloudFront also uses HTTPS to forward the request
to your origin.
Amazon CloudFront uses the SSLv3 or TLSv1 protocols and a selection of cipher suites
that includes the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol on
connections to both viewers and the origin. ECDHE allows SSL/TLS clients to provide
Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored
anywhere. This helps prevent the decoding of captured data by unauthorized third
parties, even if the secret long-term key itself is compromised.
Note: If you're using your own server as your origin, and you want to use
HTTPS both between viewers and CloudFront and between CloudFront
and your origin, you must install a valid SSL certificate on the HTTP server
that is signed by a third-party certificate authority, for example, VeriSign or
DigiCert.
By default, you can deliver content to viewers over HTTPS by using your CloudFront
distribution domain name in your URLs; for example,
https://dxxxxx.cloudfront.net/image.jpg. If you want to deliver your content over HTTPS
using your own domain name and your own SSL certificate, you can use SNI Custom
SSL or Dedicated IP Custom SSL. With Server Name Identification (SNI) Custom SSL,
CloudFront relies on the SNI extension of the TLS protocol, which is supported by most
modern web browsers. However, some users may not be able to access your content
because some older browsers do not support SNI. (For a list of supported browsers,
visit CloudFront FAQs.) With Dedicated IP Custom SSL, CloudFront dedicates IP
addresses to your SSL certificate at each CloudFront edge location so that CloudFront
can associate the incoming requests with the proper SSL certificate.
AWS Direct Connect Security
With AWS Direct Connect, you can provision a direct link between your internal network
and an AWS region using a high-throughput, dedicated connection. Doing this may help
reduce your network costs, improve throughput, or provide a more consistent network
experience. With this dedicated connection in place, you can then create virtual
interfaces directly to the AWS Cloud (for example, to Amazon EC2 and Amazon S3)
and Amazon VPC.
With Direct Connect, you bypass internet service providers in your network path. You
can procure rack space within the facility housing the AWS Direct Connect location and
deploy your equipment nearby. Once deployed, you can connect this equipment to
AWS Direct Connect using a cross-connect. Each AWS Direct Connect location enables
connectivity to the geographically nearest AWS region as well as access to other US
regions. For example, you can provision a single connection to any AWS Direct
Connect location in the US and use it to access public AWS services in all US Regions
and AWS GovCloud (US).
Using industry standard 802.1q VLANs, the dedicated connection can be partitioned
into multiple virtual interfaces. This allows you to use the same connection to access
public resources such as objects stored in Amazon S3 using public IP address space,
and private resources such as Amazon EC2 instances running within an Amazon VPC
using private IP space, while maintaining network separation between the public and
private environments.
AWS Direct Connect requires the use of the Border Gateway Protocol (BGP) with an
Autonomous System Number (ASN). To create a virtual interface, you use an MD5
cryptographic key for message authorization. MD5 creates a keyed hash using your
secret key. You can have AWS automatically generate a BGP MD5 key or you can
provide your own.
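The VLAN partitioning described above can be pictured as two virtual-interface definitions that share one physical connection but carry distinct 802.1q tags, one public and one private. Field names below are modeled loosely on the Direct Connect API and every value is a placeholder; treat this as a sketch of the configuration shape, not a literal API payload.

```python
# Illustrative sketch only: one dedicated connection partitioned into a public
# and a private virtual interface via distinct 802.1q VLAN tags.

def virtual_interface(name, vlan, asn, bgp_auth_key, kind):
    return {
        "virtualInterfaceName": name,
        "vlan": vlan,              # 802.1q tag keeping the traffic separated
        "asn": asn,                # your BGP Autonomous System Number
        "authKey": bgp_auth_key,   # BGP MD5 key (AWS-generated or your own)
        "interfaceType": kind,     # "public" or "private" (illustrative field)
    }

interfaces = [
    virtual_interface("s3-public-access", 101, 65000, "example-md5-key", "public"),
    virtual_interface("vpc-private-access", 102, 65000, "example-md5-key", "private"),
]
```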
Storage Services
Amazon Web Services provides low-cost data storage with high durability and
availability. AWS offers storage choices for backup, archiving, and disaster recovery, as
well as block and object storage.
Amazon Simple Storage Service (Amazon S3) Security
Amazon Simple Storage Service (Amazon S3) allows you to upload and retrieve data at
any time, from anywhere on the web. Amazon S3 stores data as objects within buckets.
An object can be any kind of file: a text file, a photo, a video, etc.
When you add a file to Amazon S3, you have the option of including metadata with the
file and setting permissions to control access to the file. For each bucket, you can
control access to the bucket (who can create, delete, and list objects in the bucket),
view access logs for the bucket and its objects, and choose the geographical region
where Amazon S3 will store the bucket and its contents.
Data Access
Access to data stored in Amazon S3 is restricted by default; only bucket and object
owners have access to the Amazon S3 resources they create (note that a bucket/object
owner is the AWS Account owner, not the user who created the bucket/object). There
are multiple ways to control access to buckets and objects:
• Access Control Lists (ACLs). Within Amazon S3, you can use ACLs to give
read or write access on buckets or objects to groups of users. With ACLs, you
can only grant other AWS accounts (not specific users) access to your Amazon
S3 resources.
• Bucket Policies. You can add or deny permissions across some or all of the
objects within a single bucket.
• IAM Policies. Using AWS IAM, you can grant IAM users fine-grained control
over your Amazon S3 buckets or objects.
The following table summarizes the scope of each access control mechanism:
Type of Access Control    AWS Account Level Control    User Level Control
IAM Policies              No                           Yes
ACLs                      Yes                          No
Bucket Policies           Yes                          Yes
You can further restrict access to specific resources based on certain conditions. For
example, you can restrict access based on request time (Date Condition), whether the
request was sent using SSL (Boolean Conditions), a requester’s IP address (IP Address
Condition), or based on the requester's client application (String Conditions). To identify
these conditions, you use policy keys. For more information about action-specific policy
keys available within Amazon S3, see the Amazon Simple Storage Service Developer
Guide.
Amazon S3 also gives developers the option to use query string authentication, which
allows them to share Amazon S3 objects through URLs that are valid for a predefined
period of time. Query string authentication is useful for giving HTTP or browser access
to resources that would normally require authentication. The signature in the query
string secures the request.
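As an illustration, the legacy Signature Version 2 form of query string authentication can be sketched as follows. Current SDKs use the more involved Signature Version 4, and the bucket, key, and credentials here are placeholders; the point is only how the expiry time and an HMAC-SHA1 signature end up as query string parameters.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def presigned_url(bucket, key, access_key_id, secret_key, expires_epoch):
    """Build a query-string-authenticated S3 GET URL (legacy Signature V2 sketch)."""
    # The string-to-sign binds the HTTP verb, expiry time, and resource path.
    string_to_sign = f"GET\n\n\n{expires_epoch}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key_id}&Expires={expires_epoch}"
            f"&Signature={signature}")

# Placeholder credentials; anyone holding this URL can GET the object
# until the Expires timestamp passes.
url = presigned_url("example-bucket", "report.pdf", "AKIAEXAMPLE",
                    "example-secret-key", 1700000000)
```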
Data Transfer
For maximum security, you can securely upload/download data to Amazon S3 via the
SSL encrypted endpoints. The encrypted endpoints are accessible from both the
Internet and from within Amazon EC2, so that data is transferred securely both within
AWS and to and from sources outside of AWS.
Data Storage
Amazon S3 provides multiple options for protecting data at rest. For customers who
prefer to manage their own encryption, they can use a client encryption library like the
Amazon S3 Encryption Client to encrypt data before uploading to Amazon S3.
Alternatively, you can use Amazon S3 Server-Side Encryption (SSE) if you prefer to
have Amazon S3 manage the encryption process for you. Data is encrypted with a key
generated by AWS or with a key you supply, depending on your requirements. With
Amazon S3 SSE, you can encrypt data on upload simply by adding an additional
request header when writing the object. Decryption happens automatically when data is
retrieved.
Note: Metadata, which you can include with your object, is not encrypted.
Therefore, AWS recommends that customers not place sensitive
information in Amazon S3 metadata.
Amazon S3 SSE uses one of the strongest block ciphers available – 256-bit Advanced
Encryption Standard (AES-256). With Amazon S3 SSE, every protected object is
encrypted with a unique encryption key. This object key itself is then encrypted with a
regularly rotated master key. Amazon S3 SSE provides additional security by storing
the encrypted data and encryption keys in different hosts. Amazon S3 SSE also makes
it possible for you to enforce encryption requirements. For example, you can create and
apply bucket policies that require that only encrypted data can be uploaded to your
buckets.
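A sketch of both halves of that arrangement, assuming a hypothetical bucket name: the request header that turns on SSE for an upload, and a bucket policy that denies any PutObject request lacking it.

```python
import json

# Header that asks S3 to encrypt the object server-side with AES-256.
sse_headers = {"x-amz-server-side-encryption": "AES256"}

# Bucket policy that rejects any PutObject request missing that header,
# enforcing encryption at rest. The bucket name is a placeholder.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}
bucket_policy_json = json.dumps(bucket_policy)
```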
For long-term storage, you can automatically archive the contents of your Amazon S3
buckets to AWS’s archival service called Amazon S3 Glacier. You can have data
transferred at specific intervals to Amazon S3 Glacier by creating lifecycle rules in
Amazon S3 that describe which objects you want to be archived to Amazon S3 Glacier
and when. As part of your data management strategy, you can also specify how long
Amazon S3 should wait after the objects are put into Amazon S3 to delete them.
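Such a lifecycle rule might look like the following sketch, mirroring the structure S3 lifecycle configurations use; the prefix and day counts are arbitrary placeholders.

```python
# Sketch of an S3 lifecycle configuration: archive objects under a prefix to
# Glacier after 30 days, then delete them after a year. Values are placeholders.
lifecycle = {
    "Rules": [{
        "ID": "archive-then-expire",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 365},
    }]
}
```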
When an object is deleted from Amazon S3, removal of the mapping from the public
name to the object starts immediately, and is generally processed across the distributed
system within several seconds. Once the mapping is removed, there is no remote
access to the deleted object. The underlying storage area is then reclaimed for use by
the system.
Amazon S3 stores objects redundantly on multiple devices across multiple facilities and
helps ensure the durability of the objects by quickly detecting and repairing any lost
redundancy. Amazon S3 also
regularly verifies the integrity of data stored using checksums. If corruption is detected,
it is repaired using redundant data. In addition, Amazon S3 calculates checksums on all
network traffic to detect corruption of data packets when storing or retrieving data.
Amazon S3 provides further protection via Versioning. You can use Versioning to
preserve, retrieve, and restore every version of every object stored in an Amazon S3
bucket. With Versioning, you can easily recover from both unintended user actions and
application failures. By default, requests will retrieve the most recently written version.
Older versions of an object can be retrieved by specifying a version in the request. You
can further protect versions using Amazon S3 Versioning's MFA Delete feature. Once
enabled for an Amazon S3 bucket, each version deletion request must include the
six-digit code and serial number from your multi-factor authentication device.
Access Logs
An Amazon S3 bucket can be configured to log access to the bucket and objects within
it. The access log contains details about each access request including request type,
the requested resource, the requestor’s IP, and the time and date of the request. When
logging is enabled for a bucket, log records are periodically aggregated into log files
and delivered to the specified Amazon S3 bucket.
Cross-Origin Resource Sharing (CORS)
AWS customers who use Amazon S3 to host static web pages or store objects used by
other web pages can load content securely by configuring an Amazon S3 bucket to
explicitly enable cross-origin requests. Modern browsers use the Same Origin policy to
block JavaScript or HTML5 from allowing requests to load content from another site or
domain, as a way to help ensure that malicious content is not loaded from a less
reputable source (such as during cross-site scripting attacks). With the Cross-Origin
Resource Sharing (CORS) policy enabled, assets such as web fonts and images stored
in an Amazon S3 bucket can be safely referenced by external web pages, style sheets,
and HTML5 applications.
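A minimal CORS configuration of that kind might look like the following sketch; the origin is a placeholder for whichever site should be allowed to reference the bucket’s assets.

```python
# Sketch of an S3 CORS configuration allowing one external origin to GET
# assets (web fonts, images) from the bucket. The origin is a placeholder.
cors_config = {
    "CORSRules": [{
        "AllowedOrigins": ["https://www.example.com"],
        "AllowedMethods": ["GET"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000,   # how long browsers may cache the preflight
    }]
}
```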
Amazon S3 Glacier Security
Amazon S3 Glacier stores files as archives within vaults. Archives can be any data such
as a photo, video, or document, and can contain one or several files. You can store an
unlimited number of archives in a single vault and can create up to 1,000 vaults per
region. Each archive can contain up to 40 TB of data.
Data Upload
To transfer data into Amazon S3 Glacier vaults, you can upload an archive in a single
upload operation or a multipart operation. In a single upload operation, you can upload
archives up to 4 GB in size. However, customers can achieve better results using the
Multipart Upload API to upload archives greater than 100 MB. Using the Multipart
Upload API allows you to upload large archives, up to about 40,000 GB. The Multipart
Upload API call is designed to improve the upload experience for larger archives; it
enables the parts to be uploaded independently, in any order, and in parallel. If a
multipart upload fails, you only need to upload the failed part again and not the entire
archive.
When you upload data to Amazon S3 Glacier, you must compute and supply a tree
hash. Amazon S3 Glacier checks the hash against the data to help ensure that it has
not been altered en route. A tree hash is generated by computing a hash for each
megabyte-sized segment of the data, and then combining the hashes in tree fashion to
represent ever-growing adjacent segments of the data.
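The tree-hash construction just described can be sketched directly: hash each 1 MiB segment, then repeatedly combine adjacent pairs until a single root remains (S3 Glacier uses SHA-256 for this).

```python
import hashlib

MEGABYTE = 1024 * 1024

def tree_hash(data: bytes) -> str:
    """Compute a SHA-256 tree hash: hash 1 MiB chunks, combine pairwise."""
    chunks = [data[i:i + MEGABYTE] for i in range(0, len(data), MEGABYTE)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                # Concatenate adjacent digests and hash the pair.
                nxt.append(hashlib.sha256(level[i] + level[i + 1]).digest())
            else:
                # An odd digest at the end is carried up unchanged.
                nxt.append(level[i])
        level = nxt
    return level[0].hex()
```

For data of 1 MiB or less the tree hash collapses to a plain SHA-256 of the data, which makes the construction easy to check.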
As an alternative to using the Multipart Upload feature, customers with very large
uploads to Amazon S3 Glacier may consider using the AWS Snowball service instead
to transfer the data. AWS Snowball facilitates moving large amounts of data into AWS
using portable storage devices for transport. AWS transfers your data directly off of
storage devices using Amazon’s high-speed internal network, bypassing the Internet.
You can also set up Amazon S3 to transfer data at specific intervals to Amazon S3
Glacier. You can create lifecycle rules in Amazon S3 that describe which objects you
want to be archived to Amazon S3 Glacier and when. You can also specify how long
Amazon S3 should wait after the objects are put into Amazon S3 to delete them.
To achieve even greater security, you can securely upload/download data to Amazon
S3 Glacier via the SSL-encrypted endpoints. The encrypted endpoints are accessible
from both the Internet and from within Amazon EC2, so that data is transferred securely
both within AWS and to and from sources outside of AWS.
Data Retrieval
Retrieving archives from Amazon S3 Glacier requires the initiation of a retrieval job,
which is generally completed in 3 to 5 hours. You can then access the data via HTTP
GET requests. The data will remain available to you for 24 hours.
You can retrieve an entire archive or several files from an archive. If you want to retrieve
only a subset of an archive, you can use one retrieval request to specify the range of
the archive that contains the files you are interested in, or you can initiate multiple
retrieval requests, each with a range for one or more files. You can also limit the
number of vault inventory items retrieved by filtering on an archive creation date range
or by setting a maximum items limit. Whichever method you choose, when you retrieve
portions of your archive, you can use the supplied checksum to help ensure the integrity
of the files, provided that the range that is retrieved is aligned with the tree hash of the
overall archive.
Data Storage
Amazon S3 Glacier automatically encrypts the data using AES-256 and stores it durably
in an immutable form. Amazon S3 Glacier is designed to provide average annual
durability of 99.999999999% for an archive. It stores each archive in multiple facilities
and multiple devices. Unlike traditional systems, which can require laborious data
verification and manual repair, Amazon S3 Glacier performs regular, systematic data
integrity checks and is built to be automatically self-healing.
When an object is deleted from Amazon S3 Glacier, removal of the mapping from the
public name to the object starts immediately, and is generally processed across the
distributed system within several seconds. Once the mapping is removed, there is no
remote access to the deleted object. The underlying storage area is then reclaimed for
use by the system.
Data Access
Only your account can access your data in Amazon S3 Glacier. To control access to
your data in Amazon S3 Glacier, you can use AWS IAM to specify which users within
your account have rights to operations on a given vault.
AWS Storage Gateway Security
The AWS Storage Gateway service connects your on-premises software appliance with
cloud-based storage, allowing you to securely upload data to AWS’ scalable, reliable,
and secure Amazon S3 storage service for cost-effective backup and rapid disaster
recovery.
AWS Storage Gateway transparently backs up data off-site to Amazon S3 in the form of
Amazon EBS snapshots. Amazon S3 redundantly stores these snapshots on multiple
devices across multiple facilities, detecting and repairing any lost redundancy. The
Amazon EBS snapshot provides a point-in-time backup that can be restored on-
premises or used to instantiate new Amazon EBS volumes.
Data is stored within a single region that you specify. AWS Storage Gateway offers
three options:
• Gateway-Stored Volumes (where the cloud is backup). In this option, your
volume data is stored locally and then pushed to Amazon S3, where it is stored
in redundant, encrypted form, and made available in the form of Amazon Elastic
Block Storage (Amazon EBS) snapshots. When you use this model, the on-
premises storage is primary, delivering low-latency access to your entire dataset,
and the cloud storage is the backup.
• Gateway-Cached Volumes (where the cloud is primary). In this option, your
volume data is stored encrypted in Amazon S3, visible within your enterprise's
network via an iSCSI interface. Recently accessed data is cached on-premises
for low-latency local access. When you use this model, the cloud storage is
primary, but you get low-latency access to your active working set in the cached
volumes on premises.
• Gateway-Virtual Tape Library (VTL). In this option, you can configure a
Gateway-VTL with up to 10 virtual tape drives per gateway, 1 media changer,
and up to 1500 virtual tape cartridges. Each virtual tape drive responds to the
SCSI command set, so your existing on-premises backup applications (either
disk-to-tape or disk-to-disk-to-tape) will work without modification.
No matter which option you choose, data is asynchronously transferred from your on-
premises storage hardware to AWS over SSL. The data is stored encrypted in Amazon
S3 using Advanced Encryption Standard (AES) 256, a symmetric-key encryption
standard using 256-bit encryption keys. The AWS Storage Gateway only uploads data
that has changed, minimizing the amount of data sent over the Internet.
The AWS Storage Gateway runs as a virtual machine (VM) that you deploy on a host in
your data center running VMware ESXi Hypervisor v4.1 or v5 or Microsoft Hyper-V
(you download the VMware software during the setup process). You can also run within
EC2 using a gateway AMI. During the installation and configuration process, you can
create up to 12 stored volumes, 20 Cached volumes, or 1500 virtual tape cartridges per
gateway. Once installed, each gateway will automatically download, install, and deploy
updates and patches. This activity takes place during a maintenance window that you
can set on a per-gateway basis.
The iSCSI protocol supports authentication between targets and initiators via CHAP
(Challenge-Handshake Authentication Protocol). CHAP provides protection against
man-in-the-middle and playback attacks by periodically verifying the identity of an iSCSI
initiator as authenticated to access a storage volume target. To set up CHAP, you must
configure it in both the AWS Storage Gateway console and in the iSCSI initiator
software you use to connect to the target.
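Under the hood, CHAP’s periodic verification rests on a simple keyed digest (RFC 1994): the initiator returns MD5(identifier || secret || challenge), and the target recomputes the same digest from its own copy of the shared secret, so the secret itself never travels over the wire. The gateway console handles the actual configuration; this is only a sketch of the underlying exchange.

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP: response = MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier: int, secret: bytes, challenge: bytes,
                response: bytes) -> bool:
    # The target recomputes the digest from its copy of the shared secret;
    # a match proves the initiator knows the secret without revealing it.
    return chap_response(identifier, secret, challenge) == response
```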
After you deploy the AWS Storage Gateway VM, you must activate the gateway using
the AWS Storage Gateway console. The activation process associates your gateway
with your AWS Account. Once you establish this connection, you can manage almost all
aspects of your gateway from the console. In the activation process, you specify the IP
address of your gateway, name your gateway, identify the AWS region in which you
want your snapshot backups stored, and specify the gateway time zone.
AWS Snowball Security
AWS Snowball is a simple, secure method for physically transferring large amounts of
data to Amazon S3, EBS, or Amazon S3 Glacier storage. This service is typically used
by customers who have over 100 GB of data and/or slow connection speeds that would
result in very slow transfer rates over the Internet. With AWS Snowball, you prepare a
portable storage device that you ship to a secure AWS facility. AWS transfers the data
directly off of the storage device using Amazon’s high-speed internal network, thus
bypassing the Internet. Conversely, data can also be exported from AWS to a portable
storage device.
Like all other AWS services, the AWS Snowball service requires that you securely
identify and authenticate your storage device. In this case, you will submit a job request
to AWS that includes your Amazon S3 bucket, Amazon EBS region, AWS Access Key
ID, and return shipping address. You then receive a unique identifier for the job, a digital
signature for authenticating your device, and an AWS address to ship the storage
device to. For Amazon S3, you place the signature file on the root directory of your
device. For Amazon EBS, you tape the signature barcode to the exterior of the device.
The signature file is used only for authentication and is not uploaded to Amazon S3 or
EBS.
For transfers to Amazon S3, you specify the specific buckets to which the data should
be loaded and ensure that the account doing the loading has write permission for the
buckets. You should also specify the access control list to be applied to each object
loaded to Amazon S3.
For transfers to EBS, you specify the target region for the EBS import operation. If the
storage device is less than or equal to the maximum volume size of 1 TB, its contents
are loaded directly into an Amazon EBS snapshot. If the storage device’s capacity
exceeds 1 TB, a device image is stored within the specified S3 log bucket. You can
then create a RAID of Amazon EBS volumes using software such as Logical Volume
Manager, and copy the image from S3 to this new volume.
For added protection, you can encrypt the data on your device before you ship it to
AWS. For Amazon S3 data, you can use a PIN-code device with hardware encryption
or TrueCrypt software to encrypt your data before sending it to AWS. For EBS and
Amazon S3 Glacier data, you can use any encryption method you choose, including a
PIN-code device. AWS will decrypt your Amazon S3 data before importing using the
PIN code and/or TrueCrypt password you supply in your import manifest. AWS uses
your PIN to access a PIN-code device, but does not decrypt software-encrypted data
for import to Amazon EBS or Amazon S3 Glacier. The following table summarizes your
encryption options for each type of import/export job.
Table 4: Encryption options for import/export jobs

Import to Amazon S3
Source: Files on a device file system. Encrypt data using a PIN-code device and/or
TrueCrypt before shipping the device.
Target: Objects in an existing Amazon S3 bucket. AWS decrypts the data before
performing the import.
Result: One object for each file. AWS erases your device after every import job prior
to shipping.

Export from Amazon S3
Source: Objects in one or more Amazon S3 buckets. Provide a PIN code and/or
password that AWS will use to encrypt your data.
Target: Files on your storage device. AWS formats your device and copies your data
to an encrypted file container on your device.
Result: One file for each object. AWS encrypts your data prior to shipping. Use a
PIN-code device and/or TrueCrypt to decrypt the files.

Import to Amazon S3 Glacier
Source: Entire device. Encrypt the data using the encryption method of your choice
before shipping.
Target: One archive in an existing Amazon S3 Glacier vault. AWS does not decrypt
your device.
Result: Device image stored as a single archive. AWS erases your device after every
import job prior to shipping.

Import to Amazon EBS (Device Capacity < 1 TB)
Source: Entire device. Encrypt the data using the encryption method of your choice
before shipping.
Target: One Amazon EBS snapshot. AWS does not decrypt your device.
Result: Device image is stored as a single snapshot. If the device was encrypted, the
image is encrypted. AWS erases your device after every import job prior to shipping.
After the import is complete, AWS Snowball will erase the contents of your storage
device to safeguard the data during return shipment. AWS overwrites all writable blocks
on the storage device with zeroes. You will need to repartition and format the device
after the wipe. If AWS is unable to erase the data on the device, it will be scheduled for
destruction and our support team will contact you using the email address specified in
the manifest file you ship with the device.
When shipping a device internationally, the customs option and certain required
subfields must be included in the manifest file sent to AWS. AWS Snowball uses these
values to validate the inbound shipment and prepare the outbound customs paperwork.
Two of these options are whether the data on the device is encrypted and the
encryption software’s classification. When shipping encrypted data to or from the United
States, the encryption software must be classified as 5D992 under the United States
Export Administration Regulations.
Amazon Elastic File System Security
Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for
use with Amazon EC2 instances in the AWS Cloud. With Amazon EFS, storage
capacity is elastic, growing and shrinking automatically as you add and remove files.
Amazon EFS file systems are distributed across an unconstrained number of storage
servers, enabling file systems to grow elastically to petabyte-scale and allowing
massively parallel access from Amazon EC2 instances to your data.
Data Access
With Amazon EFS, you can create a file system, mount the file system on an Amazon
EC2 instance, and then read and write data to and from your file system. You can
mount an Amazon EFS file system on EC2 instances in your VPC, through the Network
File System versions 4.0 and 4.1 (NFSv4) protocol.
To access your Amazon EFS file system in a VPC, you create one or more mount
targets in the VPC. A mount target provides an IP address for an NFSv4 endpoint. You
can then mount an Amazon EFS file system to this end point using its DNS name,
which will resolve to the IP address of the EFS mount target in the same Availability
Zone as your EC2 instance.
You can create one mount target in each Availability Zone in a region. If there are
multiple subnets in an Availability Zone in your VPC, you create a mount target in one of
the subnets, and all EC2 instances in that Availability Zone share that mount target. You
can also mount an EFS file system on a host in an on-premises datacenter using AWS
Direct Connect.
When using Amazon EFS, you specify Amazon EC2 security groups for your EC2
instances and security groups for the EFS mount targets associated with the file
system. Security groups act as a firewall, and the rules you add define the traffic flow.
You can authorize inbound/outbound access to your EFS file system by adding rules
that allow your EC2 instance to connect to your Amazon EFS file system via the mount
target using the NFS port.
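Concretely, such a rule on the mount target’s security group allows inbound TCP on the NFS port (2049) from the instances’ security group. The sketch below expresses that rule as plain data; the group ID is a placeholder.

```python
# Sketch of a security-group ingress rule for an EFS mount target: allow NFS
# (TCP 2049) only from the EC2 instances' security group. The group ID is a
# hypothetical placeholder.
nfs_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 2049,   # NFSv4 well-known port
    "ToPort": 2049,
    "UserIdGroupPairs": [{"GroupId": "sg-0123456789abcdef0"}],
}
```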
After mounting the file system via the mount target, you use it like any other POSIX-
compliant file system. Files and directories in an EFS file system support standard Unix-
style read/write/execute permissions based on the user and group ID asserted by the
mounting NFSv4.1 client. For information about NFS-level permissions and related
considerations, see Working with Users, Groups, and Permissions at the Network File
System (NFS) Level.
All Amazon EFS file systems are owned by an AWS Account. You can use IAM policies
to grant permissions to other users so that they can perform administrative operations
on your file systems, including deleting a file system or modifying a mount target’s
security groups. For more information about EFS permissions, see Overview of
Managing Access Permissions to Your Amazon EFS Resources.
Amazon EFS is designed to be highly durable and highly available. All data and
metadata is stored across multiple Availability Zones, and all service components are
designed to be highly available. EFS provides strong consistency by synchronously
replicating data across Availability Zones, with read-after-write semantics for most file
operations. Amazon EFS incorporates checksums for all metadata and data throughout
the service. Using a file system checking process (FSCK), EFS continuously validates a
file system's metadata and data integrity.
Data Sanitization
Amazon EFS is designed so that when you delete data from a file system, that data will
never be served again. If your procedures require that all data be wiped via a specific
method, such as those detailed in DoD 5220.22-M (“National Industrial Security
Program Operating Manual”) or NIST 800-88 (“Guidelines for Media Sanitization”), we
recommend that you conduct a specialized wipe procedure prior to deleting the file
system.
Amazon Web Services: Overview of Security Processes
Database Services
Amazon Web Services provides a number of database solutions for developers and
businesses, from managed relational and NoSQL database services to in-memory
caching as a service and a petabyte-scale data warehouse service.
Amazon DynamoDB Security
Amazon DynamoDB is a managed NoSQL database service that lets you
offload the administrative burdens of operating and scaling distributed databases to
AWS, so you don’t have to worry about hardware provisioning, setup and configuration,
replication, software patching, or cluster scaling.
You can create a database table that can store and retrieve any amount of data, and
serve any level of request traffic. DynamoDB automatically spreads the data and traffic
for the table over a sufficient number of servers to handle the request capacity you
specified and the amount of data stored, while maintaining consistent, fast performance.
All data items are stored on Solid State Drives (SSDs) and are automatically replicated
across multiple Availability Zones in a region to provide built-in high availability and data
durability.
You can set up automatic backups using a special template in AWS Data Pipeline that
was created just for copying DynamoDB tables. You can choose full or incremental
backups to a table in the same region or a different region. You can use the copy for
disaster recovery (DR) in the event that an error in your code damages the original
table, or to federate DynamoDB data across regions to support a multi-region
application.
To control who can use the DynamoDB resources and API, you set up permissions in
AWS IAM. In addition to controlling access at the resource-level with IAM, you can also
control access at the database level—you can create database-level permissions that
allow or deny access to items (rows) and attributes (columns) based on the needs of
your application. These database-level permissions are called fine-grained access
controls, and you create them using an IAM policy that specifies under what
circumstances a user or application can access a DynamoDB table. The IAM policy can
restrict access to individual items in a table, access to the attributes in those items, or
both at the same time.
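A fine-grained access control policy of the kind described above can be sketched as an IAM policy document. The table name, account ID, and attribute names are hypothetical; `dynamodb:LeadingKeys` and `dynamodb:Attributes` are the condition keys used for item-level and attribute-level restrictions:

```python
# Sketch of an IAM policy implementing DynamoDB fine-grained access
# control. All resource names and IDs below are illustrative placeholders.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/GameScores",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Restrict access to items whose partition key matches the
                # federated user's identity, and to two named attributes.
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"],
                "dynamodb:Attributes": ["UserId", "TopScore"],
            }
        },
    }],
}

print(json.dumps(policy, indent=2))
```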
You can optionally use web identity federation to control access by application users
who are authenticated by Login with Amazon, Facebook, or Google. Web identity
federation removes the need for creating individual IAM users; instead, users can sign
in to an identity provider and then obtain temporary security credentials from AWS
Security Token Service (AWS STS). AWS STS returns temporary AWS credentials to
the application and allows it to access the specific DynamoDB table.
In addition to requiring database and user permissions, each request to the DynamoDB
service must contain a valid HMAC-SHA256 signature, or the request is rejected. The
AWS SDKs automatically sign your requests; however, if you want to write your own
HTTP POST requests, you must provide the signature in the header of your request to
Amazon DynamoDB. To calculate the signature, you must request temporary security
credentials from the AWS Security Token Service. Use the temporary security
credentials to sign your requests to Amazon DynamoDB. Amazon DynamoDB is
accessible via TLS/SSL-encrypted endpoints.
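The HMAC-SHA256 signing mentioned above follows AWS Signature Version 4, whose signing-key derivation can be sketched with the standard library. The secret key, date, and scope below are illustrative placeholders, not real credentials:

```python
# Sketch of the HMAC-SHA256 signing-key derivation used by AWS Signature
# Version 4 (the scheme the AWS SDKs apply to DynamoDB requests).
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    # Key derivation chains HMAC over date, region, service, and a
    # fixed terminator string.
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Hypothetical inputs; real requests use STS-issued temporary credentials.
key = signing_key("EXAMPLE-SECRET-KEY", "20240101", "us-east-1", "dynamodb")
signature = hmac.new(key, b"<string to sign>", hashlib.sha256).hexdigest()
print(len(signature))  # 64 hex characters
```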
Amazon Relational Database Service (Amazon RDS) Security
Amazon RDS allows you to quickly create a relational database (DB) instance and
flexibly scale the associated compute resources and storage capacity to meet
application demand. Amazon RDS manages the database instance on your behalf by
performing backups, handling failover, and maintaining the database software.
Currently, Amazon RDS is available for MySQL, Oracle, Microsoft SQL Server, and
PostgreSQL database engines.
Amazon RDS has multiple features that enhance reliability for critical production
databases, including DB security groups, permissions, SSL connections, automated
backups, DB snapshots, and multi-AZ deployments. DB instances can also be deployed
in an Amazon VPC for additional network isolation.
Access Control
When you first create a DB Instance within Amazon RDS, you will create a master user
account, which is used only within the context of Amazon RDS to control access to your
DB Instance(s). The master user account is a native database user account that allows
you to log on to your DB Instance with all database privileges. You can specify the
master user name and password you want associated with each DB Instance when you
create the DB Instance. Once you have created your DB Instance, you can connect to
the database using the master user credentials. Subsequently, you can create
additional user accounts so that you can restrict who can access your DB Instance.
You can control Amazon RDS DB Instance access via DB Security Groups, which are
similar to Amazon EC2 Security Groups but not interchangeable. DB Security Groups
act like a firewall controlling network access to your DB Instance. Database Security
Groups default to a “deny all” access mode and customers must specifically authorize
network ingress. There are two ways of doing this: authorizing a network IP range or
authorizing an existing Amazon EC2 Security Group. DB Security Groups only allow
access to the database server port (all others are blocked) and can be updated without
restarting the Amazon RDS DB Instance, which allows a customer seamless control of
their database access.
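The deny-all-by-default semantics described above can be illustrated with a small toy model (not an AWS API) that accepts either of the two ingress forms, a CIDR range or an EC2 security group:

```python
# Toy model (not an AWS API) of DB Security Group semantics: default deny,
# with ingress authorized by CIDR range or by EC2 security group name.
import ipaddress

class DBSecurityGroup:
    def __init__(self):
        self.cidrs = []          # authorized IP ranges
        self.ec2_groups = set()  # authorized EC2 security group names

    def authorize_cidr(self, cidr: str) -> None:
        self.cidrs.append(ipaddress.ip_network(cidr))

    def authorize_ec2_group(self, name: str) -> None:
        self.ec2_groups.add(name)

    def allows(self, source_ip: str = None, ec2_group: str = None) -> bool:
        if ec2_group in self.ec2_groups:
            return True
        if source_ip is not None:
            ip = ipaddress.ip_address(source_ip)
            return any(ip in net for net in self.cidrs)
        return False  # nothing authorized: deny all

sg = DBSecurityGroup()
sg.authorize_cidr("203.0.113.0/24")       # hypothetical office range
sg.authorize_ec2_group("web-tier")        # hypothetical EC2 security group
print(sg.allows(source_ip="203.0.113.10"))   # True
print(sg.allows(source_ip="198.51.100.1"))   # False
```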
Using AWS IAM, you can further control access to your RDS DB instances. AWS IAM
enables you to control what RDS operations each individual AWS IAM user has
permission to call.
Network Isolation
For additional network access control, you can run your DB Instances in an Amazon
VPC. Amazon VPC enables you to isolate your DB Instances by specifying the IP range
you wish to use, and connect to your existing IT infrastructure through industry-standard
encrypted IPsec VPN. Running Amazon RDS in a VPC enables you to have a DB
instance within a private subnet. You can also set up a virtual private gateway that
extends your corporate network into your VPC, and allows access to the RDS DB
instance in that VPC. Refer to the Amazon VPC User Guide for more details.
For Multi-AZ deployments, defining a subnet for all availability zones in a region will
allow Amazon RDS to create a new standby in another availability zone should the need
arise. You can create DB Subnet Groups, which are collections of subnets that you may
want to designate for your RDS DB Instances in a VPC. Each DB Subnet Group should
have at least one subnet for every availability zone in a given region. In this case, when
you create a DB Instance in a VPC, you select a DB Subnet Group; Amazon RDS then
uses that DB Subnet Group and your preferred availability zone to select a subnet and
an IP address within that subnet. Amazon RDS creates and associates an Elastic
Network Interface to your DB Instance with that IP address.
DB Instances deployed within an Amazon VPC can be accessed from the Internet or
from Amazon EC2 Instances outside the VPC via VPN or bastion hosts that you can
launch in your public subnet. To use a bastion host, you will need to set up a public
subnet with an EC2 instance that acts as an SSH Bastion. This public subnet must have
an Internet gateway and routing rules that allow traffic to be directed via the SSH host,
which must then forward requests to the private IP address of your Amazon RDS DB
instance.
DB Security Groups can be used to help secure DB Instances within an Amazon VPC.
In addition, network traffic entering and exiting each subnet can be allowed or denied
via network ACLs. All network traffic entering or exiting your Amazon VPC via your
IPsec VPN connection can be inspected by your on-premises security infrastructure,
including network firewalls and intrusion detection systems.
Encryption
You can encrypt connections between your application and your DB Instance using
SSL. For MySQL and SQL Server, RDS creates an SSL certificate and installs the
certificate on the DB instance when the instance is provisioned. For MySQL, you launch
the mysql client using the --ssl_ca parameter to reference the public key in order to
encrypt connections. For SQL Server, download the public key and import the certificate
into your Windows operating system.
Oracle RDS uses Oracle native network encryption with a DB instance. You simply add
the native network encryption option to an option group and associate that option group
with the DB instance. Once an encrypted connection is established, data transferred
between the DB Instance and your application will be encrypted during transfer. You
can also require your DB instance to only accept encrypted connections.
Amazon RDS supports Transparent Data Encryption (TDE) for SQL Server (SQL Server
Enterprise Edition) and Oracle (part of the Oracle Advanced Security option available in
Oracle Enterprise Edition). The TDE feature automatically encrypts data before it is
written to storage and automatically decrypts data when it is read from storage.
Note: SSL support within Amazon RDS is for encrypting the connection
between your application and your DB Instance; it should not be relied on
for authenticating the DB Instance itself.
While SSL offers security benefits, be aware that SSL encryption is a compute intensive
operation and will increase the latency of your database connection. To learn how SSL
works with SQL Server, you can read more in the Amazon Relational Database Service
User Guide.
Automated Backups and DB Snapshots
Turned on by default, the automated backup feature of Amazon RDS enables point-in-
time recovery for your DB Instance. Amazon RDS will back up your database and
transaction logs and store both for a user-specified retention period. This allows you to
restore your DB Instance to any second during your retention period, up to the last 5
minutes. Your automatic backup retention period can be configured to up to 35 days.
During the backup window, storage I/O may be suspended while your data is being
backed up. This I/O suspension typically lasts a few minutes. This I/O suspension is
avoided with Multi-AZ DB deployments, since the backup is taken from the standby.
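The point-in-time restore window implied by the retention rules above can be sketched as simple date arithmetic; the 7-day retention value is an arbitrary example within the 35-day maximum:

```python
# Sketch: the restore window for RDS point-in-time recovery, given a
# retention period and the "up to the last 5 minutes" lag from the text.
from datetime import datetime, timedelta

def restore_window(now: datetime, retention_days: int):
    earliest = now - timedelta(days=retention_days)
    latest = now - timedelta(minutes=5)  # up to the last 5 minutes
    return earliest, latest

now = datetime(2024, 1, 8, 12, 0, 0)          # hypothetical current time
earliest, latest = restore_window(now, retention_days=7)
print(earliest)  # 2024-01-01 12:00:00
print(latest)    # 2024-01-08 11:55:00
```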
DB Snapshots are user-initiated backups of your DB Instance. These full database
backups are stored by Amazon RDS until you explicitly delete them. You can copy DB
snapshots of any size and move them between any of AWS’s public regions, or copy
the same snapshot to multiple regions simultaneously. You can then create a new DB
Instance from a DB Snapshot whenever you desire.
DB Instance Replication
Amazon cloud computing resources are housed in highly available data center facilities
in different regions of the world, and each region contains multiple distinct locations
called Availability Zones. Each Availability Zone is engineered to be isolated from
failures in other Availability Zones, and to provide inexpensive, low-latency network
connectivity to other Availability Zones in the same region.
To architect for high availability of your Oracle, PostgreSQL, or MySQL databases, you
can run your RDS DB instance in several Availability Zones, an option called a Multi-AZ
deployment. When you select this option, Amazon automatically provisions and
maintains a synchronous standby replica of your DB instance in a different Availability
Zone. The primary DB instance is synchronously replicated across Availability Zones to
the standby replica. In the event of DB instance or Availability Zone failure, Amazon
RDS will automatically fail over to the standby so that database operations can resume
quickly without administrative intervention.
For customers who use MySQL and need to scale beyond the capacity constraints of a
single DB Instance for read-heavy database workloads, Amazon RDS provides a Read
Replica option. Once you create a read replica, database updates on the source DB
instance are replicated to the read replica using MySQL’s native, asynchronous
replication. You can create multiple read replicas for a given source DB instance and
distribute your application’s read traffic among them. Read replicas can be created with
Multi-AZ deployments to gain read scaling benefits in addition to the enhanced
database write availability and data durability provided by Multi-AZ deployments.
Automatic Software Patching
Amazon RDS will make sure that the relational database software powering your
deployment stays up-to-date with the latest patches. When necessary, patches are
applied during a maintenance window that you can control. You can think of the
Amazon RDS maintenance window as an opportunity to control when DB Instance
modifications (such as scaling DB Instance class) and software patching occur, in the
event either are requested or required. If a “maintenance” event is scheduled for a given
week, it will be initiated and completed at some point during the 30-minute maintenance
window you identify.
The only maintenance events that require Amazon RDS to take your DB Instance offline
are scale compute operations (which generally take only a few minutes from start to
finish) or required software patching. Required patching is automatically scheduled only
for patches that are security and durability related. Such patching occurs infrequently
(typically once every few months) and should seldom require more than a fraction of
your maintenance window. If you do not specify a preferred weekly maintenance
window when creating your DB Instance, a 30-minute default value is assigned. If you
wish to modify when maintenance is performed on your behalf, you can do so by
modifying your DB Instance in the AWS Management Console or by using the
ModifyDBInstance API. Each of your DB Instances can have different preferred
maintenance windows, if you so choose.
Running your DB Instance as a Multi-AZ deployment can further reduce the impact of a
maintenance event, as Amazon RDS will conduct maintenance via the following steps:
1) Perform maintenance on standby, 2) Promote standby to primary, and 3) Perform
maintenance on old primary, which becomes the new standby.
Once a DB Instance is deleted, unless a final snapshot copy was requested, it cannot
be restored and will not be listed by any of the tools or APIs.
Event Notification
You can receive notifications of a variety of important events that can occur on your
RDS instance, such as whether the instance was shut down, a backup was started, a
failover occurred, the security group was changed, or your storage space is low. The
Amazon RDS service groups events into categories that you can subscribe to so that
you can be notified when an event in that category occurs. You can subscribe to an
event category for a DB instance, DB snapshot, DB security group, or for a DB
parameter group. RDS events are published via AWS SNS and sent to you as an email
or text message. For more information about RDS notification event categories, refer to
the Amazon Relational Database Service User Guide.
Amazon Redshift Security
Amazon Redshift is a petabyte-scale SQL data warehouse service that runs on highly
optimized and managed AWS compute and storage resources. The service has been
architected to not only scale up or down rapidly, but to significantly improve query
speeds, even on extremely large datasets. To increase performance, Redshift uses
techniques such as columnar storage, data compression, and zone maps to reduce the
amount of I/O needed to perform queries. It also has a massively parallel processing
(MPP) architecture, parallelizing and distributing SQL operations to take advantage of
all available resources.
When you create a Redshift data warehouse, you provision a single-node or multi-node
cluster, specifying the type and number of nodes that will make up the cluster. The node
type determines the storage size, memory, and CPU of each node. Each multi-node
cluster includes a leader node and two or more compute nodes. A leader node
manages connections, parses queries, builds execution plans, and manages query
execution in the compute nodes. The compute nodes store data, perform computations,
and run queries as directed by the leader node. The leader node of each cluster is
accessible through ODBC and JDBC endpoints, using standard PostgreSQL drivers.
The compute nodes run on a separate, isolated network and are never accessed
directly.
After you provision a cluster, you can upload your dataset and perform data analysis
queries by using common SQL-based tools and business intelligence applications.
Cluster Access
By default, clusters that you create are closed to everyone. Amazon Redshift enables
you to configure firewall rules (security groups) to control network access to your data
warehouse cluster. You can also run Redshift inside an Amazon VPC to isolate your
data warehouse cluster in your own virtual network and connect it to your existing IT
infrastructure using industry-standard encrypted IPsec VPN.
The AWS account that creates the cluster has full access to the cluster. Within your
AWS account, you can use AWS IAM to create user accounts and manage permissions
for those accounts. By using IAM, you can grant different users permission to perform
only the cluster operations that are necessary for their work.
Like all databases, you must grant permission in Redshift at the database level in
addition to granting access at the resource level. Database users are named user
accounts that can connect to a database and are authenticated when they log in to
Amazon Redshift. In Redshift, you grant database user permissions on a per-cluster
basis instead of on a per-table basis. However, a user can see only the data in table
rows that were generated by their own activities; rows generated by other users are not
visible to them.
The user who creates a database object is its owner. By default, only a superuser or the
owner of an object can query, modify, or grant permissions on the object. For users to
use an object, you must grant the necessary permissions to the user or the group that
contains the user. Only the owner of an object can modify or delete it.
Data Backups
Amazon Redshift distributes your data across all compute nodes in a cluster. When you
run a cluster with at least two compute nodes, data on each node will always be
mirrored on disks on another node, reducing the risk of data loss. In addition, all data
written to a node in your cluster is continuously backed up to Amazon S3 using
snapshots. Redshift stores your snapshots for a user-defined period, which can be from
one to thirty-five days. You can also take your own snapshots at any time; these
snapshots leverage all existing system snapshots and are retained until you explicitly
delete them.
Amazon Redshift continuously monitors the health of the cluster and automatically re-
replicates data from failed drives and replaces nodes as necessary. All of this happens
without any effort on your part, although you may see a slight performance degradation
during the re-replication process.
You can use any system or user snapshot to restore your cluster using the AWS
Management Console or the Amazon Redshift APIs. Your cluster is available as soon
as the system metadata has been restored and you can start running queries while user
data is spooled down in the background.
Data Encryption
When creating a cluster, you can choose to encrypt it in order to provide additional
protection for your data at rest. When you enable encryption in your cluster, Amazon
Redshift stores all data in user-created tables in an encrypted format using hardware-
accelerated AES-256 block encryption keys. This includes all data written to disk as well
as any backups.
Amazon Redshift uses a four-tier, key-based architecture for encryption. These keys
consist of data encryption keys, a database key, a cluster key, and a master key:
• Data encryption keys encrypt data blocks in the cluster. Each data block is
assigned a randomly-generated AES-256 key. These keys are encrypted by
using the database key for the cluster.
• The database key encrypts data encryption keys in the cluster. The database
key is a randomly-generated AES-256 key. It is stored on disk in a separate
network from the Amazon Redshift cluster and passed to the cluster across a
secure channel.
• The cluster key encrypts the database key for the Amazon Redshift cluster. You
can use either AWS or a hardware security module (HSM) to store the cluster
key. HSMs provide direct control of key generation and management, and make
key management separate and distinct from the application and the database.
• The master key encrypts the cluster key if it is stored in AWS. The master key
encrypts the cluster-key-encrypted database key if the cluster key is stored in an
HSM.
You can have Redshift rotate the encryption keys for your encrypted clusters at any
time. As part of the rotation process, keys are also updated for all of the cluster's
automatic and manual snapshots.
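The four-tier hierarchy above can be illustrated with a toy sketch in which each tier's key is wrapped by the tier above it. XOR stands in for the hardware-accelerated AES-256 wrapping Redshift actually uses, so this is an illustration of the structure only, not of the cryptography:

```python
# Toy illustration (NOT AWS's implementation) of the four-tier key
# hierarchy: each key is wrapped (encrypted) under the key one tier up.
import os

def wrap(key: bytes, kek: bytes) -> bytes:
    """'Encrypt' key under kek. XOR is a stand-in for AES-256 wrapping."""
    return bytes(a ^ b for a, b in zip(key, kek))

unwrap = wrap  # XOR is its own inverse

master_key   = os.urandom(32)  # tier 4: held by AWS or an HSM
cluster_key  = os.urandom(32)  # tier 3: encrypts the database key
database_key = os.urandom(32)  # tier 2: encrypts data encryption keys
block_key    = os.urandom(32)  # tier 1: encrypts one data block

wrapped_cluster  = wrap(cluster_key, master_key)
wrapped_database = wrap(database_key, cluster_key)
wrapped_block    = wrap(block_key, database_key)

# Decryption walks back down the hierarchy, tier by tier.
recovered = unwrap(
    wrapped_block,
    unwrap(wrapped_database, unwrap(wrapped_cluster, master_key)),
)
print(recovered == block_key)  # True
```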
To encrypt your table load data files when you upload them to Amazon S3, you can use
Amazon S3 server-side encryption. When you load the data from Amazon S3, the
COPY command will decrypt the data as it loads the table.
Automatic Software Patching
Amazon Redshift manages all the work of setting up, operating, and scaling your data
warehouse, including provisioning capacity, monitoring the cluster, and applying
patches and upgrades to the Amazon Redshift engine. Patches are applied only during
specified maintenance windows.
SSL Connections
To protect your data in transit within the AWS cloud, Amazon Redshift uses hardware-
accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY,
UNLOAD, backup, and restore operations. You can encrypt the connection between
your client and the cluster by specifying SSL in the parameter group associated with the
cluster. To have your clients also authenticate the Redshift server, you can install the
public key (.pem file) for the SSL certificate on your client and use the key to connect to
your clusters.
Amazon Redshift offers the newer, stronger cipher suites that use the Elliptic Curve
Diffie-Hellman Ephemeral protocol. ECDHE allows SSL clients to provide Perfect
Forward Secrecy between the client and the Redshift cluster. Perfect Forward Secrecy
uses session keys that are ephemeral and not stored anywhere, which prevents the
decoding of captured data by unauthorized third parties, even if the secret long-term key
itself is compromised. You do not need to configure anything in Amazon Redshift to
enable ECDHE; if you connect from a SQL client tool that uses ECDHE to encrypt
communication between the client and server, Amazon Redshift will use the provided
cipher list to make the appropriate connection.
Amazon ElastiCache Security
Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale
distributed in-memory cache environments in the cloud. The service improves the
performance of web applications by storing critical pieces of data in memory
for low-latency access. Cached information may include the results of I/O-intensive
database queries or the results of computationally-intensive calculations.
The Amazon ElastiCache service automates time-consuming management tasks for in-
memory cache environments, such as patch management, failure detection, and
recovery. It works in conjunction with other Amazon Web Services (such as Amazon
EC2, Amazon CloudWatch, and Amazon SNS) to provide a secure, high-performance,
and managed in-memory cache. For example, an application running in Amazon EC2
can securely access an Amazon ElastiCache Cluster in the same region with very low
latency.
Using the Amazon ElastiCache service, you create a Cache Cluster, which is a
collection of one or more Cache Nodes, each running an instance of the Memcached
service. A Cache Node is a fixed-size chunk of secure, network-attached RAM. Each
Cache Node runs an instance of the Memcached service, and has its own DNS name
and port. Multiple types of Cache Nodes are supported, each with varying amounts of
associated memory. A Cache Cluster can be set up with a specific number of Cache
Nodes and a Cache Parameter Group that controls the properties for each Cache Node.
All Cache Nodes within a Cache Cluster are designed to be of the same Node Type and
have the same parameter and security group settings.
Amazon ElastiCache allows you to control access to your Cache Clusters using Cache
Security Groups. A Cache Security Group acts like a firewall, controlling network access
to your Cache Cluster. By default, network access is turned off to your Cache Clusters.
If you want your applications to access your Cache Cluster, you must explicitly enable
access from hosts in specific EC2 security groups. Once ingress rules are configured,
the same rules apply to all Cache Clusters associated with that Cache Security Group.
To allow network access to your Cache Cluster, create a Cache Security Group and use
the Authorize Cache Security Group Ingress API or CLI command to authorize the
desired EC2 security group (which in turn specifies the EC2 instances allowed). IP-
range-based access control is currently not enabled for Cache Clusters. All clients to a
Cache Cluster must be within the EC2 network, and authorized via Cache Security
Groups.
ElastiCache for Redis provides backup and restore functionality, where you can create
a snapshot of your entire Redis cluster as it exists at a specific point in time. You can
schedule automatic, recurring daily snapshots or you can create a manual snapshot at
any time. For automatic snapshots, you specify a retention period; manual snapshots
are retained until you delete them. The snapshots are stored in Amazon S3 with high
durability, and can be used for warm starts, backups, and archiving.
Application Services
Amazon Web Services offers a variety of managed services to use with your
applications, including services that provide application streaming, queueing, push
notification, email delivery, search, and transcoding.
Amazon CloudSearch Security
Amazon CloudSearch is a managed service in the cloud that makes it easy to set up,
manage, and scale a search solution for your website. Amazon CloudSearch enables
you to search large collections of data such as web pages, document files, forum posts,
or product information. It enables you to quickly add search capabilities to your website
without having to become a search expert or worry about hardware provisioning, setup,
and maintenance. As your volume of data and traffic fluctuates, Amazon CloudSearch
automatically scales to meet your needs.
• The document service endpoint is used to submit documents to the domain for
indexing and is accessed through a domain-specific endpoint:
http://doc-domainname-domainid.us-east-1.cloudsearch.amazonaws.com/
• The search endpoint is used to submit search requests to the domain and is
accessed through a domain-specific endpoint:
http://search-domainname-domainid.us-east-1.cloudsearch.amazonaws.com
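The endpoint pattern quoted above can be sketched as simple string construction; the domain name and domain ID below are hypothetical placeholders:

```python
# Sketch: constructing the two domain-specific CloudSearch endpoints from
# a domain name, domain ID, and region (all hypothetical placeholders).
def cloudsearch_endpoints(domain: str, domain_id: str, region: str) -> dict:
    base = f"{domain}-{domain_id}.{region}.cloudsearch.amazonaws.com"
    return {
        "doc": f"http://doc-{base}/",        # document service endpoint
        "search": f"http://search-{base}",   # search endpoint
    }

eps = cloudsearch_endpoints("movies", "123456789012", "us-east-1")
print(eps["doc"])
print(eps["search"])
```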
Like all AWS Services, Amazon CloudSearch requires that every request made to its
control API be authenticated so only authenticated users can access and manage your
CloudSearch domain. API requests are signed with an HMAC-SHA1 or HMAC-SHA256
signature calculated from the request and the user’s AWS Secret Access key.
Additionally, the Amazon CloudSearch control API is accessible via SSL-encrypted
endpoints. You can control access to Amazon CloudSearch management functions by
creating users under your AWS Account using AWS IAM, and controlling which
CloudSearch operations these users have permission to perform.
Amazon Simple Queue Service (Amazon SQS) Security
Amazon SQS is a highly reliable, scalable message queuing service that enables
asynchronous message-based communication between distributed components of an
application. The components can be computers or Amazon EC2 instances or a
combination of both. With Amazon SQS, you can send any number of messages to an
Amazon SQS queue at any time from any component. The messages can be retrieved
from the same component or a different one right away or at a later time (within 4 days).
Messages are highly durable; each message is persistently stored in highly available,
highly reliable queues. Multiple processes can read/write from/to an Amazon SQS
queue at the same time without interfering with each other.
Amazon SQS access is granted based on an AWS Account or a user created with AWS
IAM. Once authenticated, the AWS Account has full access to all user operations. An
AWS IAM user, however, only has access to the operations and queues for which they
have been granted access via policy. By default, access to each individual queue is
restricted to the AWS Account that created it. However, you can allow other access to a
queue, using either an SQS-generated policy or a policy you write.
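A policy you write for a queue, as described above, can be sketched as follows; the account IDs and queue name are hypothetical placeholders:

```python
# Sketch of a queue policy granting a second AWS account permission to
# send messages to a queue. All IDs and names are illustrative.
import json

queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:444455556666:orders-queue",
    }],
}

print(json.dumps(queue_policy, indent=2))
```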
Amazon SQS is accessible via SSL-encrypted endpoints. The encrypted endpoints are
accessible from both the Internet and from within Amazon EC2.
Data stored within Amazon SQS is not encrypted by AWS; however, the user can
encrypt data before it is uploaded to Amazon SQS, provided that the application utilizing
the queue has a means to decrypt the message when retrieved. Encrypting messages
before sending them to Amazon SQS helps protect against access to sensitive
customer data by unauthorized persons, including AWS.
d
Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy
e
to set up, operate, and send notifications from the cloud. It provides developers with a
highly scalable, flexible, and cost-effective capability to publish messages from an
v
application and immediately deliver them to subscribers or other applications.
i
Amazon SNS provides a simple web services interface that can be used to create topics
that customers want to notify applications (or people) about, subscribe clients to these
topics, publish messages, and have these messages delivered over clients’ protocol of
choice (e.g., HTTP/HTTPS or email).
Amazon SNS delivers notifications to clients using a “push” mechanism that eliminates
the need to periodically check or “poll” for new information and updates. Amazon SNS
can be leveraged to build highly reliable, event-driven workflows and messaging
applications without the need for complex middleware and application management.
The potential uses for Amazon SNS include monitoring applications, workflow systems,
time-sensitive information updates, mobile applications, and many others. Amazon SNS
provides access control mechanisms so that topics and messages are secured against
unauthorized access. Topic owners can set policies for a topic that restrict who can
publish or subscribe to a topic. Additionally, topic owners can encrypt transmission by
specifying that the delivery mechanism must be HTTPS.
Amazon SNS access is granted based on an AWS Account or a user created with AWS
IAM. Once authenticated, the AWS Account has full access to all user operations. An
AWS IAM user, however, only has access to the operations and topics for which they
have been granted access via policy. By default, access to each individual topic is
restricted to the AWS Account that created it. However, you can allow other access to
SNS, using either an SNS-generated policy or a policy you write.
Amazon Simple Workflow Service (Amazon SWF) Security
Amazon SWF provides simple API calls that can be executed from code written in any
language and run on your EC2 instances, or any of your machines located anywhere in
the world that can access the Internet. Amazon SWF acts as a coordination hub with
which your application hosts interact. You create desired workflows with their
associated tasks and any conditional logic you wish to apply and store them with
Amazon SWF.
Amazon SWF access is granted based on an AWS Account or a user created with AWS
IAM. All actors that participate in the execution of a workflow— deciders, activity
workers, workflow administrators—must be IAM users under the AWS Account that
owns the Amazon SWF resources. You cannot grant users associated with other AWS
Accounts access to your Amazon SWF workflows. An AWS IAM user, however, only
has access to the workflows and resources for which they have been granted access
via policy.
Amazon Simple Email Service (Amazon SES) Security
Unfortunately, with other email systems, it's possible for a spammer to falsify an email
header and spoof the originating email address so that it appears as though the email
originated from a different source. To mitigate these problems, Amazon SES requires
users to verify their email address or domain in order to confirm that they own it and to
prevent others from using it. To verify a domain, Amazon SES requires the sender to
publish a DNS record that Amazon SES supplies as proof of control over the domain.
Amazon SES periodically reviews domain verification status, and revokes verification in
cases where it is no longer valid.
Amazon SES takes proactive steps to prevent questionable content from being sent, so
that ISPs receive consistently high-quality email from our domains and therefore view
Amazon SES as a trusted email origin. Below are some of the features that maximize
deliverability and dependability for all of our senders:
• Amazon SES maintains complaint feedback loops with major ISPs. Complaint
feedback loops indicate which emails a recipient marked as spam. Amazon SES
provides you access to these delivery metrics to help guide your sending
strategy.
• Amazon SES uses a variety of techniques to measure the quality of each user’s
sending. These mechanisms help identify and disable attempts to use Amazon
SES for unsolicited mail, and detect other sending patterns that would harm
Amazon SES’s reputation with ISPs, mailbox providers, and anti-spam services.
• Amazon SES supports authentication mechanisms such as Sender Policy
Framework (SPF) and DomainKeys Identified Mail (DKIM). When you
authenticate an email, you provide evidence to ISPs that you own the domain.
Amazon SES makes it easy for you to authenticate your emails. If you configure
your account to use Easy DKIM, Amazon SES will DKIM-sign your emails on
your behalf, so you can focus on other aspects of your email-sending strategy.
To ensure optimal deliverability, we recommend that you authenticate your
emails.
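To make the verification and authentication mechanisms above concrete, the DNS entries involved might look like the following; the domain, token values, and record names are illustrative placeholders, while the SPF include and the Easy DKIM CNAME pattern follow the published Amazon SES conventions.

```
; TXT record Amazon SES supplies as proof of control over the domain
; (the token value is a placeholder)
_amazonses.example.com.         TXT    "EXAMPLEVERIFICATIONTOKEN"

; SPF record authorizing Amazon SES to send on the domain's behalf
example.com.                    TXT    "v=spf1 include:amazonses.com ~all"

; Easy DKIM CNAME (SES supplies several of these; the token is a placeholder)
token1._domainkey.example.com.  CNAME  token1.dkim.amazonses.com.
```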
As with other AWS services, you use security credentials to verify who you are and
whether you have permission to interact with Amazon SES. For information about which
credentials to use, see Using Credentials with Amazon SES. Amazon SES also
integrates with AWS IAM so that you can specify which Amazon SES API actions a user
can perform.
If you choose to communicate with Amazon SES through its SMTP interface, you are
required to encrypt your connection using TLS. Amazon SES supports two mechanisms
for establishing a TLS-encrypted connection: STARTTLS and TLS Wrapper. If you
choose to communicate with Amazon SES over HTTP, then all communication will be
protected by TLS through Amazon SES’s HTTPS endpoint. When delivering email to its
final destination, Amazon SES encrypts the email content with opportunistic TLS, if
supported by the receiver.
Amazon Elastic Transcoder Security
The Amazon Elastic Transcoder service transcodes media files stored in Amazon S3. You specify an Amazon S3 bucket for input and output, and the buckets can be in any AWS region. The Elastic
Transcoder accepts input files in a wide variety of web, consumer, and professional
formats. Output file types include the MP3, MP4, OGG, TS, WebM, HLS using MPEG-2
TS, and Smooth Streaming using fmp4 container types, storing H.264 or VP8 video and
AAC, MP3, or Vorbis audio.
You'll start with one or more input files, and create transcoding jobs in a type of
workflow called a transcoding pipeline for each file. When you create the pipeline, you'll
specify input and output buckets as well as an IAM role. Each job must reference a
media conversion template called a transcoding preset, and will result in the generation
of one or more output files. A preset tells the Elastic Transcoder what settings to use
when processing a particular input file. You can specify many settings when you create
a preset, including the sample rate, bit rate, resolution (output height and width), the
number of reference and keyframes, a video bit rate, some thumbnail creation options,
etc.
A best effort is made to start jobs in the order in which they’re submitted, but this is not
a hard guarantee and jobs typically finish out of order since they are worked on in
parallel and vary in complexity. You can pause and resume any of your pipelines if
necessary.
Elastic Transcoder supports the use of SNS notifications when it starts and finishes
each job, and when it needs to tell you that it has detected an error or warning
condition. The SNS notification parameters are associated with each pipeline. You can also use the List Jobs by Status function to find all of the jobs with a given status (e.g.,
"Completed") or the Read Job function to retrieve detailed information about a particular
job.
Like all other AWS services, Elastic Transcoder integrates with AWS Identity and
Access Management (IAM), which allows you to control access to the service and to
other AWS resources that Elastic Transcoder requires, including Amazon S3 buckets
and Amazon SNS topics. By default, IAM users have no access to Elastic Transcoder or
to the resources that it uses. If you want IAM users to be able to work with Elastic
Transcoder, you must explicitly grant them permissions.
Amazon Elastic Transcoder requires every request made to its control API be
authenticated so only authenticated processes or users can create, modify, or delete
their own Amazon Transcoder pipelines and presets. Requests are signed with an
HMAC-SHA256 signature calculated from the request and a key derived from the user’s
secret key. Additionally, the Amazon Elastic Transcoder API is only accessible via SSL-
encrypted endpoints.
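The request-signing scheme described above (an HMAC-SHA256 signature computed with a key derived from the user's secret key) can be sketched with the standard library. The derivation below follows the general AWS Signature Version 4 key-derivation chain; the secret key, date, and string-to-sign values are illustrative placeholders, not real credentials.

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive a request-signing key from the user's secret key.

    Mirrors the Signature Version 4 chain of HMACs over date, region,
    and service; all inputs here are illustrative.
    """
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

def sign(signing_key: bytes, string_to_sign: str) -> str:
    """Produce the hex HMAC-SHA256 signature sent with the request."""
    return hmac.new(signing_key, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = derive_signing_key("EXAMPLE-SECRET", "20240101", "us-east-1",
                         "elastictranscoder")
signature = sign(key, "example-string-to-sign")
```

Because the derived key depends on the date, region, and service, a leaked signature cannot be replayed against other services or on other days.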
Durability is provided by Amazon S3, where media files are redundantly stored on
multiple devices across multiple facilities in an Amazon S3 region. For added protection
against users accidentally deleting media files, you can use the Versioning feature in
Amazon S3 to preserve, retrieve, and restore every version of every object stored in an
Amazon S3 bucket. You can further protect versions using Amazon S3 Versioning's
MFA Delete feature. Once enabled for an Amazon S3 bucket, each version deletion
request must include the six-digit code and serial number from your multi-factor
authentication device.
Amazon AppStream 2.0 Security
The Amazon AppStream 2.0 service provides a framework for running streaming
applications, particularly applications that require lightweight clients running on mobile
devices. It enables you to store and run your application on powerful, parallel-
processing GPUs in the cloud and then stream input and output to any client device.
This can be a pre-existing application that you modify to work with Amazon AppStream
2.0 or a new application that you design specifically to work with the service.
The Amazon AppStream 2.0 SDK simplifies the development of interactive streaming
applications and client applications. The SDK provides APIs that connect your
customers’ devices directly to your application, capture and encode audio and video,
stream content across the Internet in near real-time, decode content on client devices,
and return user input to the application.
Because your application's processing occurs in the cloud, it can scale to handle
extremely large computational loads.
Amazon AppStream 2.0 deploys streaming applications on Amazon EC2. When you
add a streaming application through the AWS Management Console, the service
creates the AMI required to host your application and makes your application available
to streaming clients. The service scales your application as needed within the capacity
limits you have set to meet demand. Clients using the Amazon AppStream 2.0 SDK
automatically connect to your streamed application.
In most cases, you’ll want to ensure that the user running the client is authorized to use
your application before letting them obtain a session ID. We recommend that you use
some sort of entitlement service, which is a service that authenticates clients and
authorizes their connection to your application. In this case, the entitlement service will
also call into the Amazon AppStream 2.0 REST API to create a new streaming session
for the client. After the entitlement service creates a new session, it returns the session
identifier to the authorized client as a single-use entitlement URL. The client then uses
the entitlement URL to connect to the application. Your entitlement service can be
hosted on an Amazon EC2 instance or on AWS Elastic Beanstalk.
Amazon AppStream 2.0 utilizes an AWS CloudFormation template that automates the
process of deploying a GPU EC2 instance that has the AppStream 2.0 Windows
Application and Windows Client SDK libraries installed; is configured for SSH, RDC, or
VPN access; and has an elastic IP address assigned to it. By using this template to
deploy your standalone streaming server, all you need to do is upload your application
to the server and run the command to launch it.
You can then use the Amazon AppStream 2.0 Service Simulator tool to test your
application in standalone mode before deploying it into production.
Amazon AppStream 2.0 also utilizes the STX Protocol to manage the streaming of your
application from AWS to local devices. The Amazon AppStream 2.0 STX Protocol is a
proprietary protocol used to stream high-quality application video over varying network
conditions; it monitors network conditions and automatically adapts the video stream to
provide a low-latency and high-resolution experience to your customers. It minimizes
latency while syncing audio and video as well as capturing input from your customers to
be sent back to the application running in AWS.
Analytics Services
Amazon Web Services provides cloud-based analytics services to help you process and
analyze any volume of data, whether your need is for managed Hadoop clusters, real-
time streaming data, petabyte scale data warehousing, or orchestration.
Amazon EMR Security
Amazon Elastic MapReduce (Amazon EMR) processes data using a hosted Hadoop framework running on Amazon EC2. Job output is written to Amazon S3, where you can then retrieve it or use it as input in another job flow.
When launching job flows on your behalf, Amazon EMR sets up two Amazon EC2
security groups: one for the master nodes and another for the slaves. The master
security group has a port open for communication with the service. It also has the SSH
port open to allow you to SSH into the instances, using the key specified at startup. The
slaves start in a separate security group, which only allows interaction with the master
instance. By default, both security groups are set up to not allow access from external
sources, including Amazon EC2 instances belonging to other customers. Since these
are security groups within your account, you can reconfigure them using the standard
EC2 tools or dashboard. To protect customer input and output datasets, Amazon EMR
transfers data to and from Amazon S3 using SSL.
Amazon EMR provides several ways to control access to the resources of your cluster.
You can use AWS IAM to create user accounts and roles and configure permissions
that control which AWS features those users and roles can access. When you launch a
cluster, you can associate an Amazon EC2 key pair with the cluster, which you can then
use when you connect to the cluster using SSH. You can also set permissions that
allow users other than the default Hadoop user to submit jobs to your cluster.
By default, if an IAM user launches a cluster, that cluster is hidden from other IAM users
on the AWS account. This filtering occurs on all Amazon EMR interfaces—the console,
CLI, API, and SDKs—and helps prevent IAM users from accessing and inadvertently
changing clusters created by other IAM users.
This filtering is useful for clusters that are intended to be viewed by only a single IAM user and the
main AWS account. You also have the option to make a cluster visible and accessible
to all IAM users under a single AWS account.
For an additional layer of protection, you can launch the EC2 instances of your EMR
cluster into an Amazon VPC, which is like launching it into a private subnet. This allows
you to control access to the entire subnetwork. You can also launch the cluster into a
VPC and enable the cluster to access resources on your internal network using a VPN
connection. You can encrypt the input data before you upload it to Amazon S3 using
any common data encryption tool. If you do encrypt the data before it’s uploaded, you
then need to add a decryption step to the beginning of your job flow when Amazon
Elastic MapReduce fetches the data from Amazon S3.
Amazon Kinesis Security
Amazon Kinesis is a managed service for real-time processing of streaming data. It can accept any amount of data, from any number of sources, scaling up and
down as needed. You can use Kinesis in situations that call for large-scale, real-time
data ingestion and processing, such as server logs, social media or market data feeds,
and web clickstream data.
Applications read and write data records to Amazon Kinesis in streams. You can create
any number of Kinesis streams to capture, store, and transport data. Amazon Kinesis
automatically manages the infrastructure, storage, networking, and configuration
needed to collect and process your data at the level of throughput your streaming
applications need. You don’t have to worry about provisioning, deployment, or ongoing maintenance of hardware, software, or other services to enable real-time capture and
storage of large-scale data.
Amazon Kinesis also synchronously replicates data across three facilities in an AWS
Region, providing high availability and data durability.
In Amazon Kinesis, data records contain a sequence number, a partition key, and a
data blob, which is an un-interpreted, immutable sequence of bytes. The Amazon
Kinesis service does not inspect, interpret, or change the data in the blob in any way.
Data records are accessible for only 24 hours from the time they are added to an
Amazon Kinesis stream, and then they are automatically discarded.
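The partition key in each record determines which shard of the stream receives it: the key is hashed with MD5 to a 128-bit integer, and the record goes to the shard whose hash-key range contains that value. A minimal sketch, assuming an even split of the key space across shards (real streams carry explicit per-shard ranges):

```python
import hashlib

MAX_HASH = 2 ** 128  # partition keys hash into a 128-bit key space

def hash_key(partition_key: str) -> int:
    """MD5 of the partition key, interpreted as a 128-bit integer."""
    digest = hashlib.md5(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest, "big")

def route(partition_key: str, num_shards: int) -> int:
    """Pick a shard index by an even split of the hash-key range.

    The even split is an illustrative assumption; a real stream's
    shards each advertise their own starting and ending hash keys.
    """
    return hash_key(partition_key) * num_shards // MAX_HASH

shard = route("user-42", 2)
```

Records with the same partition key always land on the same shard, which is what preserves ordering per key.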
Connector libraries help you integrate Amazon Kinesis with other AWS services (such as DynamoDB, Redshift, and Amazon S3) as well as third-party products like Apache Storm.
You can control logical access to Kinesis resources and management functions by
creating users under your AWS Account using AWS IAM, and controlling which Kinesis
operations these users have permission to perform. To facilitate running your producer
or consumer applications on an Amazon EC2 instance, you can configure that instance
with an IAM role. That way, AWS credentials that reflect the permissions associated
with the IAM role are made available to applications on the instance, which means you
don’t have to use your long-term AWS security credentials. Roles have the added
benefit of providing temporary credentials that expire within a short timeframe, which
adds an additional measure of protection. See the AWS Identity and Access
Management User Guide for more information about IAM roles.
The Amazon Kinesis API is only accessible via an SSL-encrypted endpoint (kinesis.us-
east-1.amazonaws.com) to help ensure secure transmission of your data to AWS. You
must connect to that endpoint to access Kinesis, but you can then use the API to direct Amazon Kinesis to create a stream in any AWS Region.
AWS Data Pipeline Security
The AWS Data Pipeline service helps you process and move data between different
data sources at specified intervals using data-driven workflows and built-in dependency
checking. When you create a pipeline, you define data sources, preconditions,
destinations, processing steps, and an operational schedule. Once you define and
activate a pipeline, it will run automatically according to the schedule you specified.
With AWS Data Pipeline, you don’t have to worry about checking resource availability,
managing inter-task dependencies, retrying transient failures/timeouts in individual
tasks, or creating a failure notification system. AWS Data Pipeline takes care of
launching the AWS services and resources your pipeline needs to process your data
(e.g., Amazon EC2 or EMR) and transferring the results to storage (e.g., Amazon S3,
RDS, DynamoDB, or EMR).
When you use the console, AWS Data Pipeline creates the necessary IAM roles and
policies, including a trusted entities list for you. IAM roles determine what your pipeline
can access and the actions it can perform. Additionally, when your pipeline creates a
resource, such as an EC2 instance, IAM roles determine the EC2 instance's permitted
resources and actions. When you create a pipeline, you specify one IAM role that
governs your pipeline and another IAM role to govern your pipeline's resources (referred
to as a "resource role"), which can be the same role for both. As part of the security best
practice of least privilege, we recommend that you consider the minimum permissions
necessary for your pipeline to perform work and define the IAM roles accordingly.
Like most AWS services, AWS Data Pipeline also provides the option of secure
(HTTPS) endpoints for access via SSL.
Deployment and Management Services
AWS provides a variety of tools to help you deploy and manage your applications, including the ability to create individual user accounts with credentials for access to AWS services. It also includes
services for creating and updating stacks of AWS resources, deploying applications on
those resources, and monitoring the health of those AWS resources. Other tools help
you manage cryptographic keys using hardware security modules (HSMs) and log AWS
API activity for security and compliance purposes.
AWS Identity and Access Management (IAM)
IAM allows you to create multiple users and manage the permissions for each of these
users within your AWS Account. A user is an identity (within an AWS Account) with
unique security credentials that can be used to access AWS Services. IAM eliminates
the need to share passwords or keys, and makes it easy to enable or disable a user’s
access as appropriate.
IAM enables you to implement security best practices, such as least privilege, by
granting unique credentials to every user within your AWS Account and only granting
permission to access the AWS services and resources required for the users to perform
their jobs. IAM is secure by default; new users have no access to AWS until
permissions are explicitly granted.
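As an illustration of the least-privilege pattern above, an identity policy attached to an IAM user might grant only read actions on a single bucket. The bucket name and statement ID below are placeholders; the policy grammar follows the standard AWS policy language.

```python
import json

# Least-privilege identity policy: read-only access to one bucket.
# Bucket name and Sid are illustrative placeholders.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyExampleBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",    # for ListBucket
                "arn:aws:s3:::example-bucket/*",  # for GetObject
            ],
        }
    ],
}

policy_document = json.dumps(least_privilege_policy)
```

Any action or resource not listed is implicitly denied, which is what makes new users "no access by default" until a statement like this is attached.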
IAM is also integrated with the AWS Marketplace, so that you can control who in your
organization can subscribe to the software and services offered in the Marketplace.
Since subscribing to certain software in the Marketplace launches an EC2 instance to
run the software, this is an important access control feature. Using IAM to control
access to the AWS Marketplace also enables AWS Account owners to have fine-
grained control over usage and software costs.
IAM enables you to minimize the use of your AWS Account credentials. Once you
create IAM user accounts, all interactions with AWS Services and resources should
occur with IAM user security credentials.
Roles
An IAM role uses temporary security credentials to allow you to delegate access to
users or services that normally don't have access to your AWS resources. A role is a set
of permissions to access specific AWS resources, but these permissions are not tied to
a specific IAM user or group. An authorized entity (e.g., mobile user, EC2 instance)
assumes a role and receives temporary security credentials for authenticating to the
resources defined in the role.
Temporary security credentials provide enhanced security due to their short lifespan (the default expiration is 12 hours) and the fact that they cannot be reused after they
expire. This can be particularly useful in providing limited, controlled access in certain
situations:
• Federated (non-AWS) user access: Federated users are users (or applications) who do not have AWS Accounts. With roles, you can give them
access to your AWS resources for a limited amount of time. This is useful if you
have non-AWS users that you can authenticate with an external service, such as
Microsoft Active Directory, LDAP, or Kerberos. The temporary AWS credentials
used with the roles provide identity federation between AWS and your non-AWS
users in your corporate identity and authorization system.
• If your organization supports SAML 2.0 (Security Assertion Markup Language
2.0), you can create trust between your organization as an identity provider (IdP)
and other organizations as service providers. In AWS, you can configure AWS as
the service provider and use SAML to provide your users with federated single-
sign on (SSO) to the AWS Management Console or to get federated access to
call AWS APIs.
• Roles are also useful if you create a mobile or web-based application that
accesses AWS resources. AWS resources require security credentials for
programmatic requests; however, you shouldn't embed long-term security
credentials in your application because they are accessible to the application's
users and can be difficult to rotate.
Instead, you can let users sign in to your application using Login with Amazon,
Facebook, or Google, and then use their authentication information to assume a
role and get temporary security credentials.
• If an application running on an Amazon EC2 instance needs to access an AWS resource such as an Amazon S3 bucket or a DynamoDB table, it must have security credentials. Using roles instead of creating individual
IAM accounts for each application on each instance can save significant time for
customers who manage a large number of instances or an elastically scaling fleet
using AWS Auto Scaling.
The temporary credentials include a security token, an Access Key ID, and a Secret
Access Key. To give a user access to certain resources, you distribute the temporary
security credentials to the user you are granting temporary access to. When the user
makes calls to your resources, the user passes in the token and Access Key ID, and
signs the request with the Secret Access Key.
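The pattern above (pass the token and Access Key ID, sign with the Secret Access Key) can be sketched as follows. The header names and the simplified HMAC over the payload are illustrative assumptions, not the full AWS signing flow; real requests carry the session token in the X-Amz-Security-Token header alongside a Signature Version 4 signature.

```python
import hashlib
import hmac

def build_signed_headers(access_key_id: str, secret_key: str,
                         session_token: str, payload: str) -> dict:
    """Sketch of a request signed with temporary credentials.

    The signing scheme here is a simplified HMAC-SHA256 over the
    payload; header names follow the common AWS convention of sending
    the session token alongside the signature.
    """
    signature = hmac.new(secret_key.encode("utf-8"),
                         payload.encode("utf-8"),
                         hashlib.sha256).hexdigest()
    return {
        "X-Amz-Security-Token": session_token,  # the temporary token
        "AccessKeyId": access_key_id,           # identifies the key pair
        "Signature": signature,                 # proves possession of the secret
    }

headers = build_signed_headers("ASIAEXAMPLE", "temporary-secret",
                               "example-session-token", "Action=ListStreams")
```

The server rejects the request if the token was issued for a different key pair, which is why the token "will not work with different access keys."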
The token will not work with different access keys. How the user passes in the token
depends on the API and version of the AWS product the user is making calls to. For
more information about temporary security credentials, see AWS Security Token
Service API Reference.
The use of temporary credentials means additional protection for you because you don’t
have to manage or distribute long-term credentials to temporary users. In addition, the
temporary credentials get automatically loaded to the target instance so you don’t have
to embed them somewhere unsafe like your code. Temporary credentials are
automatically rotated or changed multiple times a day without any action on your part,
and are stored securely by default.
For more information about using IAM roles to auto-provision keys on EC2 instances,
see the AWS Identity and Access Management Documentation.
Amazon CloudWatch Security
Amazon CloudWatch provides monitoring for AWS cloud resources and applications, collecting and tracking metrics such as CPU utilization, disk reads and writes, and network traffic. You can set up
CloudWatch alarms to notify you if certain thresholds are crossed, or to take other
automated actions such as adding or removing EC2 instances if Auto Scaling is
enabled.
CloudWatch captures and summarizes utilization metrics natively for AWS resources,
but you can also have other logs sent to CloudWatch to monitor. You can route your
guest OS, application, and custom log files for the software installed on your EC2
instances to CloudWatch, where they will be stored in durable fashion for as long as
you'd like. You can configure CloudWatch to monitor the incoming log entries for any
desired symbols or messages and to surface the results as CloudWatch metrics. You
could, for example, monitor your web server's log files for 404 errors to detect bad
inbound links or invalid user messages to detect unauthorized login attempts to your
guest OS.
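The 404-monitoring idea above amounts to a filter that turns matching log entries into a count you publish as a metric. A minimal sketch, assuming a common web-server log format (the sample lines and status-field position are assumptions):

```python
def count_404(log_lines):
    """Count lines whose HTTP status field is 404.

    Assumes a combined-log-style format in which the status code
    appears as a space-delimited field after the request string.
    """
    return sum(1 for line in log_lines if " 404 " in line)

# Illustrative log lines in the assumed format
sample = [
    '10.0.0.1 - - [01/Jan/2024:00:00:00 +0000] "GET /a HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2024:00:00:01 +0000] "GET /missing HTTP/1.1" 404 0',
]

metric_value = count_404(sample)  # this count would be surfaced as a metric
```

CloudWatch's metric filters perform this matching for you on ingested log streams; the sketch only shows the shape of the computation.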
Like all AWS Services, Amazon CloudWatch requires that every request made to its
control API be authenticated so only authenticated users can access and manage
CloudWatch. Requests are signed with an HMAC-SHA1 signature calculated from the
request and the user’s private key. Additionally, the Amazon CloudWatch control API is only accessible via SSL-encrypted endpoints.
You can further control access to Amazon CloudWatch by creating users under your
AWS Account using AWS IAM, and controlling what CloudWatch operations these
users have permission to call.
AWS CloudHSM Security
The AWS CloudHSM service provides customers with dedicated access to a hardware
security module (HSM) appliance designed to provide secure cryptographic key storage
and operations within an intrusion-resistant, tamper-evident device. You can generate,
store, and manage the cryptographic keys used for data encryption so that they are
accessible only by you. AWS CloudHSM appliances are designed to securely store and
process cryptographic key material for a wide variety of uses such as database
encryption, Digital Rights Management (DRM), Public Key Infrastructure (PKI),
authentication and authorization, document signing, and transaction processing. They
support some of the strongest cryptographic algorithms available, including AES, RSA,
and ECC, and many others.
The AWS CloudHSM service is designed to be used with Amazon EC2 and VPC,
providing the appliance with its own private IP within a private subnet. You can connect
to CloudHSM appliances from your EC2 servers through SSL/TLS, which uses two-way digital certificate authentication to provide a secure communication channel.
Selecting CloudHSM service in the same region as your EC2 instance decreases
network latency, which can improve your application performance. You can configure a
client on your EC2 instance that allows your applications to use the APIs provided by
the HSM, including PKCS#11, MS CAPI and Java JCA/JCE (Java Cryptography
Architecture/Java Cryptography Extensions).
Before you begin using an HSM, you must set up at least one partition on the appliance.
A cryptographic partition is a logical and physical security boundary that restricts access
to your keys, so only you control your keys and the operations performed by the HSM.
AWS has administrative credentials to the appliance, but these credentials can only be
used to manage the appliance, not the HSM partitions on the appliance. AWS uses
these credentials to monitor and maintain the health and availability of the appliance.
AWS cannot extract your keys nor can AWS cause the appliance to perform any
cryptographic operation using your keys.
The HSM appliance has both physical and logical tamper detection and response
mechanisms that erase the cryptographic key material and generate event logs if
tampering is detected. The HSM is designed to detect tampering if the physical barrier
of the HSM appliance is breached. In addition, after three unsuccessful attempts to
access an HSM partition with HSM Admin credentials, the HSM appliance erases its
HSM partitions.
When your CloudHSM subscription ends and you have confirmed that the contents of
the HSM are no longer needed, you must delete each partition and its contents as well
as any logs. As part of the decommissioning process, AWS zeroizes the appliance,
permanently erasing all key material.
AWS CloudTrail Security
CloudTrail captures information about API calls to an AWS resource, whether that call
was made from the AWS Management Console, CLI, or an SDK. If the API request
returned an error, CloudTrail provides the description of the error, including messages
for authorization failures. It even captures AWS Management Console sign-in events,
creating a log record every time an AWS account owner, a federated user, or an IAM
user simply signs into the console.
Once you have enabled CloudTrail, event logs are delivered about every 5 minutes to
the Amazon S3 bucket of your choice. The log files are organized by AWS Account ID,
region, service name, date, and time. You can configure CloudTrail so that it aggregates
log files from multiple regions and/or accounts into a single Amazon S3 bucket. By
default, a single trail will record and deliver events in all current and future regions. In
addition to S3, you can send events to CloudWatch Logs, for custom metrics and
alarming, or you can upload the logs to your favorite log management and analysis
solutions to perform security analysis and detect user behavior patterns. For rapid
response, you can create CloudWatch Events rules that take immediate action on specific events.
By default, log files are stored indefinitely. The log files are automatically encrypted
using Amazon S3's Server Side Encryption and will remain in the bucket until you
choose to delete or archive them. For even more security you can use KMS to encrypt
the log files using a key that you own. You can use Amazon S3 lifecycle configuration
rules to automatically delete old log files or archive them to Amazon S3 Glacier for
additional longevity at significant savings.
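The lifecycle approach above might look like the following configuration, sketched as the dictionary a tool could submit to Amazon S3's lifecycle API. The rule ID, prefix, and day counts are illustrative choices; only the transition to Glacier and the later expiration reflect the pattern described in the text.

```python
import json

# Transition CloudTrail log files to Glacier after 90 days and delete
# them after one year; prefix and day counts are illustrative.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire-cloudtrail-logs",
            "Filter": {"Prefix": "AWSLogs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}

lifecycle_json = json.dumps(lifecycle_configuration, indent=2)
```

Because lifecycle rules act on the bucket itself, they apply to every log file CloudTrail delivers under the prefix without any per-file action on your part.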
By enabling the optional log file validation, you can validate that logs have not been
added, deleted, or tampered with.
Like every other AWS service, you can limit access to CloudTrail to only certain users.
You can use IAM to control which AWS users can create, configure, or delete AWS
CloudTrail trails as well as which users can start and stop logging. You can control
access to the log files by applying IAM or Amazon S3 bucket policies. You can also add
an additional layer of security by enabling MFA Delete on your Amazon S3 bucket.
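The kind of IAM policy described above might be sketched as follows. The action names are real CloudTrail actions; the `Sid` and the choice of actions to grant are illustrative, and log-file access would still be governed separately by IAM or S3 bucket policies.

```python
# Sketch of an IAM policy granting trail-administration rights: creating,
# configuring, and deleting trails, and starting or stopping logging.
cloudtrail_admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageTrails",  # illustrative statement ID
            "Effect": "Allow",
            "Action": [
                "cloudtrail:CreateTrail",
                "cloudtrail:UpdateTrail",
                "cloudtrail:DeleteTrail",
                "cloudtrail:StartLogging",
                "cloudtrail:StopLogging",
            ],
            "Resource": "*",
        }
    ],
}
```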
Mobile Services
AWS mobile services make it easier for you to build, ship, run, monitor, optimize, and
scale cloud-powered applications for mobile devices. These services also help you
authenticate users to your mobile application, synchronize data, and collect and analyze
application usage.
Amazon Cognito
Amazon Cognito provides identity and sync services for mobile and web-based
applications. It simplifies the task of authenticating users and storing, managing, and
syncing their data across multiple devices, platforms, and applications. It provides
temporary, limited-privilege credentials for both authenticated and unauthenticated
users without having to manage any backend infrastructure.
Amazon Cognito works with well-known identity providers like Google, Facebook, and
Amazon to authenticate end users of your mobile and web applications. You can take
advantage of the identification and authorization features provided by these services
instead of having to build and maintain your own. Your application authenticates with
one of these identity providers using the provider’s SDK. Once the end user is
authenticated with the provider, an OAuth or OpenID Connect token returned from the
provider is passed by your application to Cognito, which returns a new Amazon Cognito
ID for the user and a set of temporary, limited-privilege AWS credentials.
To begin using Amazon Cognito, you create an identity pool through the Amazon
Cognito console. The identity pool is a store of user identity information that is specific
to your AWS account. During the creation of the identity pool, you will be asked to
create a new IAM role or pick an existing one for your end users. An IAM role is a set of
permissions to access specific AWS resources, but these permissions are not tied to a
specific IAM user or group. An authorized entity (e.g., mobile user, EC2 instance)
assumes a role and receives temporary security credentials for authenticating to the
AWS resources defined in the role. Temporary security credentials provide enhanced
security due to their short life-span (the default expiration is 12 hours) and the fact that
they cannot be reused after they expire. The role you select has an impact on which
AWS services your end users will be able to access with the temporary credentials. By
default, Amazon Cognito creates a new role with limited permissions – end users only
have access to the Amazon Cognito Sync service and Amazon Mobile Analytics. If your
application needs access to other AWS resources such as Amazon S3 or DynamoDB,
you can modify your roles directly from the IAM management console.
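The role Cognito assumes for your end users is defined by a trust (assume-role) policy of roughly the shape below. The principal, action, and condition keys are the ones AWS uses for Cognito web-identity federation; the identity pool ID is a placeholder.

```python
# Placeholder identity pool ID -- a real one is issued when you create the pool.
IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"

# Trust policy for the authenticated-users role: only STS web-identity
# federation from this specific identity pool may assume it.
cognito_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": "cognito-identity.amazonaws.com"},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "cognito-identity.amazonaws.com:aud": IDENTITY_POOL_ID
                },
                "ForAnyValue:StringLike": {
                    "cognito-identity.amazonaws.com:amr": "authenticated"
                },
            },
        }
    ],
}
```

A parallel role for unauthenticated identities would use `"unauthenticated"` in the `amr` condition.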
With Amazon Cognito, there’s no need to create individual AWS accounts or even IAM
accounts for every one of your web/mobile app’s end users who will need to access
your AWS resources. In conjunction with IAM roles, mobile users can securely access
AWS resources and application features, and even save data to the AWS cloud without
having to create an account or log in.
If end users later choose to sign in, Amazon Cognito merges the data and identification
information from their unauthenticated identity into the authenticated one. Because
Amazon Cognito stores data locally as well as in the service, your
end users can continue to interact with their data even when they are offline. Their
offline data may be stale, but anything they put into the dataset, they can immediately
retrieve whether they are online or not. The client SDK manages a local SQLite store so
that the application can work even when it is not connected. The SQLite store functions
as a cache and is the target of all read and write operations. Cognito's sync facility
compares the local version of the data to the cloud version, and pushes up or pulls
down deltas as needed. Note that in order to sync data across devices, your identity
pool must support authenticated identities. Unauthenticated identities are tied to the
device, so unless an end user authenticates, no data can be synced across multiple
devices.
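The local cache-and-delta behavior described above can be sketched with Python's built-in sqlite3 module. This is an illustration, not the actual AWS Mobile SDK implementation: the table and column names are invented, and the real SDK manages its own schema and sync protocol.

```python
import sqlite3

# In-memory SQLite store standing in for the client-side dataset cache.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE records ("
    " dataset TEXT,"
    " key TEXT,"
    " value TEXT,"
    " sync_count INTEGER,"  # sync count at which this record last changed
    " PRIMARY KEY (dataset, key))"
)

def put(dataset, key, value, sync_count):
    # Writes go only to the local cache, so they work offline.
    conn.execute(
        "INSERT OR REPLACE INTO records VALUES (?, ?, ?, ?)",
        (dataset, key, value, sync_count),
    )

def get(dataset, key):
    # Reads are served from the cache whether or not the device is online.
    row = conn.execute(
        "SELECT value FROM records WHERE dataset = ? AND key = ?",
        (dataset, key),
    ).fetchone()
    return row[0] if row else None

def local_deltas(dataset, last_synced_count):
    # Records changed since the last sync: what a sync pass would push up.
    return conn.execute(
        "SELECT key, value FROM records"
        " WHERE dataset = ? AND sync_count > ? ORDER BY key",
        (dataset, last_synced_count),
    ).fetchall()

put("preferences", "theme", "dark", sync_count=1)
put("preferences", "lang", "en", sync_count=2)
```

A sync pass would compare `local_deltas()` against the cloud version of the dataset and push or pull only the differences.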
With Amazon Cognito, your application communicates directly with a supported public
identity provider (Amazon, Facebook, or Google) to authenticate users. Amazon
Cognito does not receive or store user credentials—only the OAuth or OpenID Connect
token received from the identity provider. Once Amazon Cognito receives the token, it
returns a new Amazon Cognito ID for the user and a set of temporary, limited-privilege
AWS credentials.
Each Amazon Cognito identity has access only to its own data in the sync store, and
this data is encrypted when stored. In addition, all identity data is transmitted over
HTTPS. The unique Amazon Cognito identifier on the device is stored in the appropriate
secure location—on iOS for example, the Amazon Cognito identifier is stored in the iOS
keychain. User data is cached in a local SQLite database within the application’s
sandbox; if you require additional security, you can encrypt this identity data in the local
cache by implementing encryption in your application.
You can integrate Amazon Mobile Analytics with your application without requiring users
of your app to be authenticated with an identity provider (like Google, Facebook, or
Amazon). For these unauthenticated users, Mobile Analytics works with Amazon
Cognito to provide temporary, limited-privilege credentials. To do this, you first create an
identity pool in Amazon Cognito. The identity pool uses an IAM role: a set of
permissions that is not tied to a specific IAM user or group but that allows an entity to access
specific AWS resources. The entity assumes a role and receives temporary security
credentials for authenticating to the AWS resources defined in the role. By default,
Amazon Cognito creates a new role with limited permissions – end users only have
access to the Amazon Cognito Sync service and Amazon Mobile Analytics. If your
application needs access to other AWS resources such as Amazon S3 or DynamoDB,
you can modify your roles directly from the IAM management console.
You can integrate the AWS Mobile SDK for Android or iOS into your application or use
the Amazon Mobile Analytics REST API to send events from any connected device or
service and visualize data in the reports. The Amazon Mobile Analytics API is only
accessible via an SSL-encrypted endpoint (https://mobileanalytics.us-east-1.amazonaws.com).
Applications
AWS applications are managed services that enable you to provide your users with
secure, centralized storage and work areas in the cloud.
Amazon WorkSpaces
Amazon WorkSpaces is a managed desktop service that allows you to quickly provision
cloud-based desktops for your users. Simply choose a Windows 7 bundle that best
meets the needs of your users and the number of WorkSpaces that you would like to
launch. Once the WorkSpaces are ready, users receive an email informing them where
they can download the relevant client and log into their WorkSpace. They can then
access their cloud-based desktops from a variety of endpoint devices, including PCs,
laptops, and mobile devices.
However, your organization’s data is never sent to or stored on the end-user device,
because Amazon WorkSpaces uses PC-over-IP (PCoIP), which provides an interactive
video stream without transmitting actual data. The PCoIP protocol compresses,
encrypts, and encodes the users’ desktop computing experience and transmits
‘pixels only’ across any standard IP network to end-user devices.
In order to access their WorkSpace, users must sign in using a set of unique credentials
or their regular Active Directory credentials. When you integrate Amazon WorkSpaces
with your corporate Active Directory, each WorkSpace joins your Active Directory
domain and can be managed just like any other desktop in your organization. This
means that you can use Active Directory Group Policies to manage your users’
WorkSpaces to specify configuration options that control the desktop. If you choose not
to use Active Directory or another type of on-premises directory to manage your user
WorkSpaces, you can create a private cloud directory within Amazon WorkSpaces that
you can use for administration.
To provide an additional layer of security, you can also require the use of multi-factor
authentication upon sign-in, in the form of a hardware or software token. Amazon
WorkSpaces supports MFA using an on-premises Remote Authentication Dial In User
Service (RADIUS) server or any security provider that supports RADIUS authentication.
It currently supports the PAP, CHAP, MS-CHAP1, and MS-CHAP2 protocols, along
with RADIUS proxies.
Each Workspace resides on its own EC2 instance within a VPC. You can create
WorkSpaces in a VPC you already own or have the WorkSpaces service create one for
you automatically using the WorkSpaces Quick Start option. When you use the Quick
Start option, WorkSpaces not only creates the VPC, but it performs several other
provisioning and configuration tasks for you, such as creating an Internet Gateway for
the VPC, setting up a directory within the VPC that is used to store user and
WorkSpace information, creating a directory administrator account, creating the
specified user accounts and adding them to the directory, and creating the WorkSpace
instances. Or the VPC can be connected to an on-premises network using a secure
VPN connection to allow access to an existing on-premises Active Directory and other
intranet resources. You can add a security group that you create in your Amazon VPC
to all the WorkSpaces that belong to your Directory. This allows you to control network
access from Amazon WorkSpaces in your VPC to other resources in your Amazon VPC
and on-premises network.
Because it’s a managed service, AWS takes care of several security and maintenance
tasks, such as daily backups and patching. Updates are delivered automatically to your
WorkSpaces during a weekly maintenance window. You can control how patching is
configured for a user’s WorkSpace: Windows Update is enabled by default and
configured to install updates on a weekly basis, but you can customize these settings,
configure Windows Update to perform updates at a time of your choosing, or use an
alternative patch management approach if you desire.
You can use IAM to control who on your team can perform administrative functions like
creating or deleting WorkSpaces or setting up user directories. You can also set up a
WorkSpace for directory administration, install your favorite Active Directory
administration tools, and create organizational units and Group Policies in order to more
easily apply Active Directory changes for all your WorkSpaces users.
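An IAM policy for the administrative functions mentioned above might be sketched as follows. The action names are real WorkSpaces actions; the selection granted here is an illustrative choice, not a prescribed administrator profile.

```python
# Sketch of an IAM policy letting a team member provision, inspect, reboot,
# and terminate WorkSpaces, but nothing else.
workspaces_admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "workspaces:CreateWorkspaces",
                "workspaces:DescribeWorkspaces",
                "workspaces:DescribeWorkspaceDirectories",
                "workspaces:RebootWorkspaces",
                "workspaces:TerminateWorkspaces",
            ],
            "Resource": "*",
        }
    ],
}
```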
Amazon WorkDocs
Amazon WorkDocs is a managed enterprise storage and sharing service with feedback
capabilities for user collaboration. Users can store any type of file in a WorkDocs folder
and allow others to view and download them. Commenting and annotation capabilities
work on certain file types, such as MS Word documents, without requiring the
application that was originally used to create the file.
WorkDocs notifies contributors about review activities and deadlines via email and
performs versioning of files that you have synced using the WorkDocs Sync application.
User information is stored in an Active Directory-compatible network directory. You can
either create a new directory in the cloud, or connect Amazon WorkDocs to your on-
premises directory. When you create a cloud directory using WorkDocs’ quick start
setup, it also creates a directory administrator account with the administrator email as
the username. An email is sent to your administrator with instructions to complete
registration. The administrator then uses this account to manage your directory.
When you create a cloud directory using WorkDocs’ quick start setup, it also creates
and configures a VPC for use with the directory. If you need more control over the
directory configuration, you can choose the standard setup, which allows you to specify
your own directory domain name, as well as one of your existing VPCs to use with the
directory. If you want to use one of your existing VPCs, the VPC must have an Internet
gateway and at least two subnets. Each of the subnets must be in a different Availability
Zone.
Using the Amazon WorkDocs Management Console, administrators can view audit logs
to track file and user activity by time, IP address, and device, and choose whether to
allow users to share files with others outside their organization. Users can then control
who can access individual files and disable downloads of files they share.
All data in transit is encrypted using industry-standard SSL. The WorkDocs web and
mobile applications and desktop sync clients transmit files directly to Amazon WorkDocs
using SSL. WorkDocs users can also utilize multi-factor authentication (MFA) if their
organization has deployed a RADIUS server. MFA uses the following factors: username,
password, and methods supported by the RADIUS server. The protocols supported are
PAP, CHAP, MS-CHAPv1, and MS-CHAPv2.
You choose the AWS Region where each WorkDocs site’s files are stored. Amazon
WorkDocs is currently available in the US-East (Virginia), US-West (Oregon), and EU
(Ireland) AWS Regions. All files, comments, and annotations stored in WorkDocs are
automatically encrypted with AES-256 encryption.
Document Revisions

Date            Description
February 2019   Added information about deleting objects in Amazon S3 Glacier.
December 2018   Edit made to the Amazon Redshift Security topic.
May 2017        Added section on AWS Config Security Checks.
March 2017      Migrated into new format.