GCP Security Framework for Financial Services Institutions
Technical paper
Table of contents
Section 1 Executive Summary
Section 5 Appendix
5.1 Variations from the Google Security
Foundations Guide
5.2 Commonly Used Acronyms
Section 6 Bibliography
Table of tables
Table 1: Security Tools and Dependencies
Table 2: Recommended Organization Policies
Table 3: Additional Policy Controls
Table 4: Default IAM Groups
Table 5: Folder Level Group
Table 6: IAM CIS Benchmarks v1.1.0
Table 7: Hierarchical Firewall Policy
Table 8: VPC Firewall Rules
Table 9: Cloud Armor Security Policies
Table 10: Network CIS Benchmarks v1.1.0
Table 11: Recommended Organization Policies for GCE Instances
Table 12: Security Service CIS Benchmarks v1.1.0
Table 13: De-identification Methods
Table 14: Google Default Data at Rest Encryption
Table 15: ETD Default Rules
Table 16: KTD Detectors
Table 17: Logging CIS Benchmarks v1.1.0
Table 18: Variations from Google Foundations
Table 19: Commonly Used Acronyms
Disclaimer:
This content is provided for general information purposes and is not
intended to be used in place of consultation with our professional
advisors. This document refers to marks owned by third parties. All
such third-party marks are the property of their respective owners.
No sponsorship, endorsement, or approval of this content by the
owners of such marks is intended, expressed, or implied.
The content contained in this document is correct as of January
2023. This technical paper represents the status quo as of the time
it was written. Google Cloud’s products, security policies, and
systems might change going forward as Google continually
improves protection mechanisms.
This reference framework is for informational purposes only.
Accenture does not intend the information or recommendations in
this guide to constitute definitive configuration advice. Each
organization is responsible for independently evaluating its own
particular use of the services as appropriate to support its legal,
compliance, security, and functional obligations.
Executive Summary
The journey to cloud for financial institutions (FIs) starts with the same goal as other industries
transitioning to the cloud: increase developer velocity and scalability while accelerating time to
value for the end customers.
Cloud and virtualization technologies introduce several benefits to banking and financial services
organizations, including (but not limited to) increased developer velocity, improved scalability,
and faster time to value.
While the rewards of transitioning to the cloud are appealing, there can be inherent risk – and due
to the sensitive and high-value nature of the data FIs commonly process and exchange, it is critical
that any significant change to the IT infrastructure is secure by design. Aside from the transaction
data itself, individual client data in the form of personally identifiable information (PII) as well as
sensitive and proprietary records must be protected. Moreover, to ensure that strong security
controls are properly configured and privacy-by-design frameworks are applied, regulatory
bodies mandate numerous requirements, recurring audits, and certifications of compliance.
In addition, there are constant threats in the form of cyber-attacks that target cloud
infrastructure and navigating the shared responsibility model can add an extra level of
complexity. The collection of these elements (security control configuration, regulatory
compliance, protection against cyber-attacks, and navigating the shared responsibility model)
often makes transitioning to the cloud a daunting and tumultuous process for FIs.
One of the most persistent threats to FIs is the exfiltration (or inadvertent exposure) of data to
malicious or unapproved actors or systems. Data exfiltration is a security breach involving the
unauthorized transfer of sensitive, proprietary, or secured information, and it can carry a
significant material loss to a person, system, or organization. Although organizations are
generally improving at detecting unauthorized access attempts, data breaches are ever evolving;
the average total cost of a data breach in 2020 was $3.86 million1, accentuating the importance
of strong data exfiltration controls in a cloud environment.
An additional concern for FIs is successfully integrating and deploying security monitoring
controls. A 2020 Data Breach Report from IBM indicated that the average time to detect a
network breach is roughly 280 days.2 Without insight into the activity in a cloud environment,
the overall attack surface is vastly expanded.
This technical paper attempts to address the collection of security concerns by providing a
comprehensive preventative and detective reference framework that indicates how FIs can
secure their Google Cloud environment.
The paper begins by providing an overview of the Banking and Financial Services sector and
outlines current trends in the industry. Then, before discussing the reference framework
itself, the paper describes the methodology for producing the reference framework. The
reference framework builds off the December 2022 Google Security Foundations Guide
sample architecture and includes specific considerations that need to be taken for Google
Cloud services that either store or use sensitive data.
Note: The reference framework described in this technical paper is an illustrative reference
framework, not a one-size-fits-all solution on how a Google Cloud environment should be
architected for a FI. This technical paper should not be used as an implementation guide.
Links to implementation guides will be included at the end of each section or subsection when
applicable.
By performing these activities and adopting an enterprise-wide strategy, FIs can reduce
concerns surrounding cloud adoption while successfully leveraging the cloud. Aside from
adoption strategies, it is equally important that security is taken into consideration as early as
possible during the migration process.
To support adoption, Google Cloud offers a range of native security solutions (and
supports compatibility with third-party solutions) that help FIs enable better business
outcomes through cloud-native security solutioning.
When looking at the main reasons why FIs are adopting the cloud, there are a few primary
influencing factors:
• Technical cyber/security control gaps.
• Meeting regulatory compliance.
These concerns raise the question: If business demand and adoption rates for cloud are
rapidly increasing, but there are looming security and compliance concerns – how can FIs
securely and confidently migrate to the cloud? Section 3 describes how Google Cloud’s
infrastructure is an effective enabler for cloud migration, and Section 4 describes how to
effectively configure a Google Cloud environment in a secure default state using a
preventative and detective architecture.
With Google Cloud, the catalog of offerings continues to rapidly grow, and each Google Cloud
service exposes a wide range of customizable configurations and controls that enable
business and security needs. Google embeds best practices in its cloud platform that provide
FIs with peace of mind when adopting the cloud; FIs can be assured that they are
building on top of a reliable, secured foundation and can either optimize the controls or take
advantage of additional service-specific security guidance.
Google Cloud’s product and service offerings range from classic platform as a service (PaaS),
to infrastructure as a service (IaaS), to software as a service (SaaS). In general, the boundaries of
responsibility between the company and the cloud provider change based on the services that
have been selected by the customer.
At a minimum, as a part of their shared responsibility for security, public cloud providers
should enable companies to start with a solid, secured foundation. Providers should
empower customers and make it easy for them to understand and execute their part of the
shared responsibility model. With Google Cloud, there are opportunities for organizations
to have a closer collaboration with Google to address security and risk. Google promotes
shared fate for risk management in the cloud by providing unique tools, detailed
guidance, and best practices to reduce customer risk from day one. Google also has a risk
protection program that can be read about further in the Announcing the Risk Protection
Program: Moving from shared responsibility to shared fate document.
As a cloud provider, the security responsibilities of Google and the security responsibilities of the
FI must be clearly defined. The three types of as-a-service platforms are shown in Figure 1 below.
This model identifies the areas of responsibility of FIs leveraging Google Cloud in IaaS, PaaS, and
SaaS deployments.
Figure 1: Shared responsibility layers, from customer-managed down to Google-managed:
content, access policies, usage, deployment, identity, operations, network security, audit
logging, networking, boot, and hardware.
As depicted, there are clear incentives to use cloud offerings as opposed to on-premises
deployments. Across IaaS, PaaS, and SaaS, as the customer's share of responsibility decreases,
so too does the overhead cost. The clear delineation also helps companies understand what
they are responsible for and enables a more seamless transition to the cloud.
Google Cloud boasts operational resiliency in its hardware and infrastructure. Operational
resiliency can be defined as the “ability to deliver operations, including critical operations and
core business lines, through a disruption from any hazard.”6
By adopting Google Cloud, financial services firms can strengthen their operational resilience
and address risks in new ways.7
The areas of operational resiliency span across several components. These are depicted in
Figure 2, along with a few bullets that explain how Google Cloud ensures operational resiliency
within these components.
For more information on the security of Google Cloud’s infrastructure, refer to the Infrastructure
design for availability and resilience whitepaper.
Section 4 will now delve into the recommended reference framework that can be used for FIs to
confidently migrate to Google Cloud.
Reference Framework
4.1 Introduction
A reference framework is integral for ensuring a cloud environment is properly configured,
governed, and functioning. Since FIs often store sensitive data, this reference framework splits
the Google Cloud environment into two portions: restricted (e.g., customer PII, records,
transaction data, payment card information, etc.) and base. By delineating between the two,
FIs can use the reference framework to determine the level of controls that need to be applied
based on the type of data they are storing or using in their Google Cloud environment.
This technical paper uses banking.com as the sample enterprise that will be referred to through
the remainder of the document for explanatory architectural examples. banking.com is used as
a fictional, demonstrative example, and is not intended to refer to any actual business or entity.
The paper will build off the December 2022 Google Security Foundations Guide sample
architecture and include specific considerations that need to be taken for Google Cloud
services that either store or use the sensitive data. The document will also briefly discuss third
party tooling. For a comparative analysis of the components in the Google Security
Foundations Guide vs. this paper, refer to Appendix Section 5.1.
Note: The Google Cloud services covered in this technical paper include Google Compute
Engine, Kubernetes Engine, Cloud Storage, BigQuery, and Cloud Dataflow. The services are
explicitly covered in Section 4.3.3, but the security implications (and the underlying,
foundational components that enable the services) are discussed throughout the entirety of
Section 4. No additional services beyond these are discussed.
To prepare the reference framework, a defense in depth strategy was leveraged. Defense in
depth focuses on comprehensively securing the infrastructure, protecting the data, and
protecting access. It involves the application of multiple countermeasures in a layered or
stepwise manner to achieve security objectives. The methodology involves layering
heterogeneous security technologies in the common attack vectors to ensure that attacks
missed by one technology are caught by another.9 The defense in depth approach includes
(and delineates between) preventative and detective controls to ensure security in the FI’s
Google Cloud environment.
Although the Resource Hierarchy and Organization Policies meet the definition of preventative
controls, they are discussed separately in Section 4.2.2. They are separated from the other
preventative controls because the Resource Hierarchy and Organization Policies are the
beginning foundational components when configuring and architecting a Google Cloud
environment and will need to be enabled upon instantiation.
Note: Many components of a Google Cloud environment will differ depending on the FI and the
FI’s purpose for using Google Cloud. The differences can include elements such as: the current
IT landscape, use of legacy technology, pre-existing cloud environments (e.g., hybrid cloud,
multi-cloud, on-prem only, etc.), and many others. These differences are not only dependencies,
but also driving factors that shape how an FI should effectively adopt cloud usage on Google
Cloud. For discussions on addressing cloud adoption dependencies and planning a migration to
Google Cloud, please refer to the Contact Us section in Appendix Section 5.3.
Table 1 (excerpt): Security Tools and Dependencies

Capability | Google Cloud tool | Base vs. restricted difference
Application Requests | Identity-Aware Proxy / BeyondCorp | None
Web Application Firewall | Cloud Armor | None
Google Personnel Access Control | Access Approvals | None
File Integrity Monitoring | Depends on the service; refer to the relevant section | None
Encryption in Transit (Data Privacy and Protection) | Encrypted by default | Data-in-transit encryption is used to protect data that is travelling externally; data movement within Google is generally authenticated but not encrypted
Key Management Service | Cloud KMS | None
Encryption Key Management | Customer-Managed Encryption Keys (CMEK) | None
Pane of Glass Visibility (Security Command Center) | Security Command Center (SCC) | A SIEM is also used
Policy Monitoring | Security Command Center Premium: Security Health Analytics (SHA) | None
Threat Detection | Security Command Center Premium: Event Threat Detection (ETD) and Container Threat Detection (KTD) | None
1. A single organization is used for all environments and services for banking.com to manage
all resources and policies in one place.
Note: The folder structure can vary depending on the FI's usage of Google Cloud. The areas of
variation will be discussed in Section 4.2.1.
The next two subsections will focus on the resource hierarchy and organizational structure of
the banking.com Google Cloud environment.
It is important to note that policies are inherited downwards in the resource hierarchy. For
example, in Figure 3 below, a policy in the Top-Level Folder (TLF) will apply to its underlying
folders and then to the projects. This applies for IAM roles, IAM permissions, and organization
policies.
Important: If you set a new policy lower in the hierarchy, it will override the parent's. Essentially,
policies are inherited downwards unless you choose to override them with a new policy further
down the hierarchy.
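As an illustrative sketch of this override behavior (the organization and folder IDs below are hypothetical placeholders, and the boolean constraint chosen is just one example), a policy can be enforced at the organization level and then relaxed on a specific folder further down:

```shell
# Enforce a boolean org policy constraint for the whole organization.
gcloud resource-manager org-policies enable-enforce \
    compute.disableSerialPortAccess --organization=123456789012

# Override (relax) the inherited policy on one specific folder.
gcloud resource-manager org-policies disable-enforce \
    compute.disableSerialPortAccess --folder=345678901234
```

Resources under the folder then follow the folder-level policy, while the rest of the hierarchy keeps the organization-level enforcement.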
Figure 3: Policy inheritance. IAM users, IAM role bindings, and organization policies set at
the Org Node are inherited downwards to the folders and projects beneath it (e.g., the Shared
VPC host project, Secrets project, Application project, and Service project).
The resource hierarchy structure is ultimately dependent on several factors, including (but not
limited to):
Note: This technical paper will not cover CICD or resource deployment, and as such, a separate
bootstrap folder that hosts the CICD pipeline is not included in the figure below.
The banking.com resource hierarchy is depicted in Figure 4. Note that each of the environment
folders contain both base and restricted projects. The base project hosts non-sensitive data,
whereas the restricted project contains data, such as PII, that needs additional security
controls.
Figure 4: The banking.com resource hierarchy. Beneath the banking.com Org Node, each
environment folder (Prod, Non-prod, Dev) contains a base Shared VPC host (spoke) project
and a restricted Shared VPC host (spoke) project. Common projects include Logging (Cloud
log sink), Billing (BigQuery dataset with exports), Interconnect (interconnect management),
Secrets (org-level secrets), and DNS Hub (DNS resolution).
Note: There are no underlying folders below the Top-Level Folders (TLFs) in this architecture. As
mentioned above, underlying folders may need to be leveraged by FIs depending on the needs
of their Google Cloud environment. For example, an FI with three business entities that all need
separate dev, prod, and non-prod environments could have a folder structure where each entity
folder is a TLF, and each entity folder contains its own environment folders.
Though policies and IAM bindings are inherited downwards in the resource hierarchy, security
controls are often applied at the project level. Figure 5, below, provides a more granular view of
project level security controls.
Figure 5: Project-level security controls. Each project contains a VPC network with VPC
firewall rules, Cloud DNS, and Cloud NAT; regional subnets (a.b.c.d/x) host GCE instances
(e.g., 10.240.0.2 through 10.240.0.5), while Cloud Storage and BigQuery are reached through
public APIs. Traffic from the internet and from unauthorized projects is controlled at the
project boundary.
At the project and resource level, controls and native services such as VPC Service Controls,
firewall rules, Cloud NAT, network tagging, project-level IAM bindings, service accounts, and
others work to protect FIs' infrastructure, applications, and services. In this regard, the project
boundary can be thought of as a trust boundary for resources. Each of these aforementioned
controls and services will be discussed in further detail in its respective section.
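As a hedged illustration of the project-as-trust-boundary idea, a VPC Service Controls perimeter can wrap a restricted project so that selected services cannot be reached from outside the boundary. The access policy ID, project number, and perimeter name below are hypothetical placeholders:

```shell
# Create a service perimeter around a restricted project, limiting
# Cloud Storage and BigQuery access to callers inside the boundary.
gcloud access-context-manager perimeters create restricted_perimeter \
    --policy=1234567890 \
    --title="Restricted data perimeter" \
    --resources=projects/111111111111 \
    --restricted-services=storage.googleapis.com,bigquery.googleapis.com
```

In practice, the restricted Shared VPC host and service projects would all be enrolled in such a perimeter.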
compute.restrictVpcPeering | This list constraint defines the set of VPC networks that are
allowed to be peered with the VPC networks belonging to this project, folder, or organization. |
Restrict to intra-folder peering (unless resources need to communicate across different folders).
In addition to the Organization Policy constraints from the December 2022 Google Security
Foundations Guide, banking.com includes 7 additional organization policies, some of which
only apply to sensitive folders or projects:
7. compute.trustedImageProjects: By default, instances can be created from images in any
project that shares images publicly or explicitly with the user. This constraint creates an
allow/deny list of publisher projects that images can be pulled from, ensuring that the images
are trusted and secure.
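A minimal sketch of applying this constraint with the legacy org-policies CLI, assuming a hypothetical organization ID and a hypothetical trusted publisher project:

```shell
# Allow Compute Engine images to be sourced only from an approved
# publisher project anywhere in the organization.
gcloud resource-manager org-policies allow \
    compute.trustedImageProjects \
    projects/trusted-images-project \
    --organization=123456789012
```

Instance creation from images in any other project would then be rejected by the policy.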
In addition to the Organization Policy constraints, the December 2022 Google Security
Foundations Guide includes a section of additional policy controls. It is recommended that
these controls be applied to tighten the FI's cloud security posture. The controls are listed in
Table 3 below.
Limit session and gcloud timeouts | You can change the session timeout for Google Cloud
sessions (including the gcloud tool) to durations as low as 1 hour. | Google Workspace or
Cloud Identity Admin Console
By integrating proper security controls, FIs can minimize the threat landscape and mitigate the
number of threat vectors.
The preventative controls for banking.com are grouped together and discussed based on their
capabilities. The groupings consist of Identity and Access Management, Network Security,
Service Security, Security Monitoring, and Data Privacy and Protection. Each section will
describe the importance of implementing its respective controls to reduce the overall attack
surface. The sections will also provide links to implementation guides where applicable.
Note: The services inside each of the projects in Figure 6 are intentionally simplified. The services
are only included for demonstration, and Section 4.3.3 will delineate between controls for
sensitive vs. non-sensitive projects on a service-by-service level.
Figure 6: Preventative controls for banking.com. Cloud Identity provides authentication for
users and groups (single sign-on, security keys, Identity-Aware Proxy). Hub projects include a
Base Hub VPC and a Restricted Hub VPC with regional subnets and zones, alongside a DNS
project (DNS VPC with the private zone "gcp.banking.com."), a billing project (BigQuery
exports), an untrusted VPC fronted by an HTTP(S) load balancer with a WAF, and an
Interconnect project (Cloud Routers and VLAN attachments, a.b.c.d/24, dedicated to
purchasing and managing interconnect connections; DNS peering, shown for a single project,
would be established across all host projects' private zones). The Dev, Prod, and Non-prod
environments each contain a base and a restricted Shared VPC host project (private zone ".",
subnets a.b.c.d/x) serving Compute and Application service projects, plus a DLP project.
Each of the controls and infrastructure components that are depicted in Figure 6 will be discussed
in the subsequent sections, along with additional information on how they function. This figure is
not prescriptive, but rather shows preventative control elements that contribute to securing a
Google Cloud environment.
IAM attacks can take several forms. In the cloud, some prominent attacks target administrators
and developers. Others involve escalating privileges to the management plane. The Capital
One breach in 2019 involved several components in a cyber kill chain process, one of which
was receiving IAM role access to a storage service in the cloud.10 For this reason, it is imperative
that banking.com follows the principle of least privilege, properly uses Cloud Identity, and
leverages multi-factor authentication (MFA) so users only have the amount of access required
to perform their job.
When considering IAM, there are three central components: Identity federation, identity
management, and zero trust. These components are depicted in Figure 7 below and will be
discussed in further detail in the subsequent sections.
Figure 7: Identity for banking.com. The on-premises AD domain (banking.com) synchronizes
users to Cloud Identity via Google Cloud Directory Sync (GCDS), with AD FS providing
federated authentication for external users. Cloud Identity handles authentication (single
sign-on, security keys, Identity-Aware Proxy), and IAM permissions authorize access to
resources within an example project.
To implement identity federation, the FI can use Google Cloud Directory Sync (GCDS) or Active
Directory Federation Services (AD FS). To run a successful synchronization with GCDS, it is
recommended that the FI follow the Google suggested best practices.
In addition to following the GCDS best practices, it is vital to have proper mapping between the
AD environment and the Google Cloud environment. Although the structures of AD and Google
Cloud are similar to one another, the structure of the AD environment varies from organization
to organization, so there is no singular recommended structure in Google Cloud.
Instead, the FI should review their existing AD structure (i.e., forests, domains, organizational
units, groups, and users) and perform a mapping exercise to determine which architectural
pattern best fits the company’s requirements. Other recommendations from Section 4.2.1 apply
as well, such as segmenting different departments and teams in the resource hierarchy from
one another.
Note: “During initial setup, the Directory Sync process must be authenticated interactively
using a 3-legged OAuth workflow. [It is] strongly recommend that [the FI] create a dedicated
account for this sync, as opposed to using an employee's admin account, because the sync will
fail if the account performing the sync no longer exists or doesn't have proper privileges.”12
Rather than binding roles and permissions to individual users, groups are assigned roles and
permissions, and users are added to the groups. Group membership is propagated through AD.
The same policies and practices for Identity Management apply to both the sensitive and
non-sensitive projects in the banking.com architecture.
A user’s access should only be limited to the level needed to accomplish their specific job or
task. If a user needs additional privileges, that user’s access would need to be strictly
controlled and monitored. As much as possible, primitive (or legacy) roles should be avoided
(i.e., roles/owner, roles/editor, and roles/viewer). The legacy roles are overly permissive, and
their usage is not recommended. Instead, users should use predefined roles or custom roles
for more granular access. These roles would be bound to groups in the banking.com
architecture, so it is important to segment groups as much as possible.
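A brief sketch of the group-based binding pattern described above, using a hypothetical project, group, and predefined role:

```shell
# Grant a predefined (not primitive) role to a group rather than to
# individual users, following the principle of least privilege.
gcloud projects add-iam-policy-binding my-app-project \
    --member="group:grp-gcp-app-developers@banking.com" \
    --role="roles/compute.viewer"
```

Access is then managed by adding or removing users from the group in the directory, not by editing IAM policy per user.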
The December 2022 Google Security Foundations Guide includes a list of groups that are
created by default in their banking.com foundation – noting that additional groups should be
configured on an as-needed workload-by-workload basis. These are listed in Table 4.
Note: These groups follow the naming conventions that are also outlined in the Google
Security Foundations Guide.
grp-gcp-secrets-admin@banking.com | Members are responsible for putting secrets into
Secret Manager. | Secret Manager Admin | Global secrets project; Prod, Non-prod, and Dev
secrets projects

grp-gcp-{business-code}-{environment-code}-developer@banking.com | Groups are created on
a per-business-code or per-environment basis to manage resources within a particular
project. | Service- and project-specific permissions | Specified service projects
In addition to these, this technical paper recommends that groups that are assigned ownership
to folders in banking.com be given the following baseline roles from Table 5.
Note: This list is non-exhaustive, and deviations may be appropriate depending on the
purpose of the folder and the needs of the folder owners.
Google also has a Policy Intelligence suite that helps enterprises understand and manage their
policies to reduce their risk. Policy Intelligence has a collection of features that provide visibility
and automation in the cloud environment. These features include:
Note: Any administrative operations or privileged identities should have their roles and
permissions monitored very carefully.
Figure 8 depicts how IAP functions for authentication and authorization to GCE instances.
Note: IAP can also be used for App Engine and On-Prem.
Figure 8: Identity-Aware Proxy for GCE. An external or on-premises user connects over an
HTTPS connection (with TCP forwarding); after authentication and authorization against
Cloud Identity (banking.com), IAP forwards the traffic to the GCE instance (10.240.0.3) on
port 3389.
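As a usage sketch (the instance name and zone are hypothetical), an administrator could reach the RDP port of a private instance through IAP TCP forwarding like so:

```shell
# Open an IAP tunnel to the instance's RDP port (3389) without the
# instance needing a public IP address.
gcloud compute start-iap-tunnel my-windows-vm 3389 \
    --zone=us-central1-a \
    --local-host-port=localhost:3389
```

An RDP client pointed at localhost:3389 is then proxied through IAP, subject to the user's IAM permissions.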
For zero trust, FIs can use BeyondCorp Enterprise, which shifts access controls away from the
network perimeter to be context-aware for individual users. This enables a secure,
work-from-any-location set of access controls without a VPN. BeyondCorp allows for single
sign-on, access control policies, an access proxy, and user- and device-based authentication
and authorization. The BeyondCorp principles are:
Data access control can prevent a rogue user or attacker from accessing sensitive resources,
conducting malicious activity, performing configuration changes, or moving laterally to other
resources.
Access approval should be used when the FI’s resources need to be accessed by Google
personnel. This includes an approval process and an auditing and logging record of the Google
personnel’s interaction with the resource. Figure 9 depicts the access approval structure.
Figure 9: Access Approval. A Google support request is routed through the Access Approval
API and Cloud Pub/Sub to a client approver, whose explicit approval is required before Google
personnel can interact with resources (e.g., GCE instances 10.240.0.2 and 10.240.0.3) in the
project's zone and subnet.
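A hedged sketch of enrolling a project in Access Approval via the CLI; the project ID and notification address below are hypothetical:

```shell
# Require explicit approval before Google personnel can access the
# project's resources, and notify the approvers group.
gcloud access-approval settings update \
    --project=my-restricted-project \
    --enrolled-services=all \
    --notification-emails="approvers@banking.com"
```

Approval requests and decisions are then recorded, giving the FI an audit trail of provider access.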
1.6 | Ensure that IAM users are not assigned the Service Account User or Service Account
Token Creator roles at project level. | Do not assign any IAM user the Service Account User or
Service Account Token Creator roles at the project level; instead, grant users these roles on the
specific service account.
1.8 | Ensure that separation of duties is enforced while assigning service-account-related roles
to users. | Do not grant the same IAM user both the Service Account Admin and Service
Account User roles. Do not use legacy roles.
1.9 | Ensure that Cloud KMS cryptokeys are not anonymously or publicly accessible. | Do not
assign the allUsers or allAuthenticatedUsers bindings access to cryptokeys; instead, grant
specific users/groups the binding so access is not anonymous.
1.10 | Ensure KMS encryption keys are rotated within a period of 90 days. | Set the rotation
period of all KMS keys to 90 days or less. Covered in the Cloud KMS section.
1.12 | Ensure API keys are not created for a project. | Use standard authentication methods
instead of API keys when possible.
1.13 | Ensure API keys are restricted to use by only specified hosts and apps. | For every API
key, ensure that the Key restrictions parameter "Application restrictions" is not set to None.
1.14 | Ensure API keys are restricted to only the APIs that the application needs access to. |
For every API key, ensure that the Key restrictions parameter "API restrictions" is not set to
None.
1.15 | Ensure API keys are rotated every 90 days. | Monitor your API keys, identify any API key
created more than 90 days in the past, and regenerate that key.
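A small sketch of the 90-day checks behind rows 1.10 and 1.15: given a creation date (hardcoded and hypothetical here; in practice it would come from gcloud describe output), compute the key's age and flag it for rotation. GNU date is assumed:

```shell
# Compare a key's creation date against the 90-day CIS threshold.
created="2023-01-01"
now="2023-06-01"
age_days=$(( ( $(date -u -d "$now" +%s) - $(date -u -d "$created" +%s) ) / 86400 ))
if [ "$age_days" -gt 90 ]; then
    echo "ROTATE"   # older than 90 days; prints "ROTATE" for these dates
else
    echo "OK"
fi
```

For KMS keys specifically, automatic rotation (a 90-day `rotation-period` on key creation) removes the need for this manual check.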
There are also concerns for Layer 7 (application layer) attacks. Protecting against L7 attacks
involves both L7 policies and firewall rules. Traditional network security, if done wrong, can
expose FIs to data breach risk (e.g., open ports, overly permissive firewall rules, etc.). For
example, SQL injection and cross-site scripting (XSS) attackers can extract sensitive
information, obtain privileged access, modify or delete data, or gain full control of the web
application.
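As one hedged example of an L7 control, a Cloud Armor security policy with a preconfigured rule can block common SQL-injection patterns at the load balancer; the policy name and rule priority below are hypothetical:

```shell
# Create a Cloud Armor policy and deny requests matching the
# preconfigured SQL-injection signature set.
gcloud compute security-policies create waf-policy \
    --description="Baseline WAF policy"

gcloud compute security-policies rules create 1000 \
    --security-policy=waf-policy \
    --action=deny-403 \
    --expression="evaluatePreconfiguredExpr('sqli-stable')"
```

The policy is then attached to the backend service behind the external HTTP(S) load balancer.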
For FIs, securing data is of the utmost importance, and FIs often store some form of sensitive
data (be it transaction data, individual client data in the form of PII, or internal sensitive and
proprietary records). To protect against network-based threats, proper network security
controls and configurations should be deployed.
Google Cloud offers a robust stack of native network security controls to reduce the threat
landscape and minimize risk. To protect resources in the organization’s environment, FIs should
secure their Virtual Private Clouds (VPCs), implement a web application firewall, restrict access,
and implement service-based segmentation.
Figure 10 captures the network security controls that will be described in the subsequent
sections.
Figure 10: Network security for banking.com. An external, on-premises DNS server (link-local
address, e.g., 169.254.10.2) exchanges outbound and inbound DNS forwarding with the DNS
project's DNS VPC (private zone "gcp.banking.com.") in the Common folder. The Interconnect
project, dedicated to purchasing and managing interconnect connections, hosts Cloud Routers
and VLAN attachments (a.b.c.d/24) across us-central1 and us-east4. In the Entity folder, the
Dev and Non-prod base and restricted Shared VPC host projects each hold a private zone ".".
Note: With the hierarchical firewall policy described in Section 4.3.2.2, all projects' firewalls
will be exposed to the health check IP range. The health check IP range is the same IP range as
the external GCLBs. Therefore, it is important that the org policy restricts the ability to create
external GCLBs on sensitive projects. Otherwise, by default, external GCLBs could be deployed
and an external exposure on the external GCLB could be added.
Allowing Health Checks: source IP ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22; protocol TCP; ports 80 and 443 (NOTE: sometimes 8080 is needed as well)
Inner VPC traffic: source range depends on the VPC's CIDR range; protocols TCP, UDP, and ICMP; any port
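As a local illustration (not a Google Cloud API call), membership in the health-check source ranges above can be checked with Python's `ipaddress` module. The CIDR ranges mirror the table; the function name is our own:

```python
import ipaddress

# Google Cloud health-check source ranges, as listed in the table above.
HEALTH_CHECK_RANGES = [
    ipaddress.ip_network("35.191.0.0/16"),
    ipaddress.ip_network("209.85.152.0/22"),
    ipaddress.ip_network("209.85.204.0/22"),
]

def is_health_check_source(ip: str) -> bool:
    """Return True if the source IP falls inside a health-check range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in HEALTH_CHECK_RANGES)

print(is_health_check_source("35.191.10.20"))  # inside 35.191.0.0/16 -> True
print(is_health_check_source("203.0.113.5"))   # outside all ranges  -> False
```

A check like this can be useful when reviewing firewall logs to distinguish Google health-check probes from other inbound traffic.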
This firewall policy is very restrictive, and exceptions will need to be made. But by starting in a
restrictive state, teams will be forced to adjust their firewall policies only for exposures deemed
necessary. Careful consideration should be given to users or groups that have the Cloud
Identity permissions to create and delete firewall rules.
Note: You can refer to the December 2022 Google Security Foundations Guide for an
overview of hierarchical firewall implementation.
In certain situations, however, it may be more appropriate to set the firewall policy at the VPC
level instead of the folder level through Hierarchical Firewalls. For example, if certain workloads
require a special micro-segmentation configuration from other workloads (since Hierarchical
Firewalls currently do not support network tags).
VPCs contain an additional partition layer called the subnet, and each subnet is
explicitly associated with a single region. To ensure that resources are as private and as secure
as possible, FIs should manage traffic with firewall rules, centralized network control, and
secure outbound connections.
The firewall rules on a VPC determine which traffic to allow or deny for the resources attached
to the VPC. The rules allow you to specify the type of traffic, such as ports and protocols, and
the source or destination of the traffic, including IP addresses, subnets, tags, and service
accounts. Network tags are used to make firewall rules and traffic routes apply only to specific
VM instances in the project.
Table 8 lists the recommended default firewall rules and network tags for VPCs on all projects.
These firewall rules do not include external, egress rules; if external egress is needed, manual
creation of the firewall rule is required. This will be necessary to enable certain workloads. It is
recommended that the FI uses Firewall Insights to determine firewall rule usage and
configuration issues. The ingress rules also restrict inbound traffic to the external GCLB IP
range (listed as Allowing Health Checks).
As mentioned in the Cloud Load Balancing section, with the policy in Table 8, all projects'
firewalls will be exposed to the Health Check IP range. The Health Check IP range has the same
IP range as the external GCLBs. Therefore, it is important that the org policy restricts the ability
to create external GCLBs on sensitive projects. Otherwise, by default, external GCLBs could be
deployed and an external exposure on the external GCLB can be added. If exposure to the
external GCLBs is required, the org policy constraint will need to be relaxed.
Note: If private VMs or GKE clusters require external egress traffic, refer to Cloud NAT in
Section 4.3.2.6.
SSH remote access: source FI's IPs; network tag remote-access; protocol TCP; port 22 (SSH)
RDP remote access: source FI's IPs; network tag remote-access; protocol TCP; port 3389 (RDP)
Inner VPC traffic: source range depends on the VPC's CIDR range; protocols TCP, UDP, and ICMP; any port
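The rule semantics above can be sketched as a toy matcher. The FI source range 198.51.100.0/24 is a placeholder, and real evaluation happens inside Google Cloud's VPC firewall, not in application code:

```python
import ipaddress

# Hypothetical, simplified model of the VPC firewall rules above.
# 198.51.100.0/24 is a placeholder for the FI's own IP range.
RULES = [
    {"name": "ssh-remote-access", "src": "198.51.100.0/24",
     "tag": "remote-access", "proto": "tcp", "port": 22},
    {"name": "rdp-remote-access", "src": "198.51.100.0/24",
     "tag": "remote-access", "proto": "tcp", "port": 3389},
]

def allowed(src_ip, target_tags, proto, port):
    """Return the matching allow rule name, or None (default deny)."""
    addr = ipaddress.ip_address(src_ip)
    for rule in RULES:
        if (addr in ipaddress.ip_network(rule["src"])
                and rule["tag"] in target_tags
                and rule["proto"] == proto
                and rule["port"] == port):
            return rule["name"]
    return None

print(allowed("198.51.100.7", {"remote-access"}, "tcp", 22))  # ssh-remote-access
print(allowed("203.0.113.9", {"remote-access"}, "tcp", 22))   # None (default deny)
```

Note how a VM without the remote-access network tag falls through to the default deny, which is the behavior the restrictive baseline policy relies on.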
Note: Resources in service projects can communicate with one another across the project
boundaries.
One key benefit of using shared VPCs is the ability to manage the network controls for a wide
range of projects centrally rather than having each individual project VPC managed separately.
Shared VPC lets organization administrators delegate administrative responsibilities, such as
creating and managing instances, to Service Project Admins while maintaining centralized
control over network resources like subnets, routes, and firewalls.14
For most simple use cases, a single (non-Shared) VPC network provides the networking
functions that are required. Shared VPCs should be used for administration of multiple working
groups. If a Shared VPC is not logical for your use case, but you need to establish
communication between VPCs, VPC Network Peering is the next viable solution. For additional
information on best practices when setting up a VPC, refer to Google’s best practices and
reference architectures for VPC design document.
Banking.com uses VPC-SCs to define a service perimeter around the resources that are hosting
sensitive data in restricted projects. If there are projects in different VPC service perimeters
that need access to the sensitive resources, a perimeter bridge can be used to allow
communication across perimeters.
Note: The traffic between perimeters over a perimeter bridge is bi-directional by default. If
needed, the FI can separately control ingress/egress traffic through policies for specific
perimeters. Refer to the Google document VPC Service Controls Ingress and egress rules
for more information on how to set up ingress and egress rules.
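As a hedged sketch of what such a policy looks like, the dict below mirrors the general shape of a VPC-SC ingress rule (identities and sources allowed in, resources and service operations allowed as targets). The project numbers and service account are illustrative assumptions, not values from this paper:

```python
# Illustrative shape of a VPC Service Controls ingress rule; all names
# and project numbers are hypothetical placeholders.
ingress_rule = {
    "ingressFrom": {
        "identities": ["serviceAccount:etl@example-proj.iam.gserviceaccount.com"],
        "sources": [{"resource": "projects/123456789"}],
    },
    "ingressTo": {
        "resources": ["projects/987654321"],
        "operations": [{
            # Only BigQuery calls are admitted through the perimeter.
            "serviceName": "bigquery.googleapis.com",
            "methodSelectors": [{"method": "*"}],
        }],
    },
}
print(ingress_rule["ingressTo"]["operations"][0]["serviceName"])
```

Scoping `operations` to a single service keeps the perimeter opening as narrow as the workload requires.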
Figure 11 depicts how VPC-SCs function in a Google Cloud environment. VPC-SCs provide
segmentation by isolating Google Cloud resources and VPCs across three network paths:
VPC-SC enables the ability to lock access to otherwise public Google Cloud service APIs (e.g.,
BigQuery, Google Cloud Storage, etc.) and provides private access only.
The VPC-SC perimeters explicitly give FIs the ability to allowlist both specific Google
services and specific application access via their projects.
[Figure 11: VPC Service Controls perimeters around two projects, each containing a VPC network with a Region 1 zone and subnet (a.b.c.d/x). Resources inside the perimeter can reach protected public APIs such as BigQuery, while an unauthorized project and the Internet are blocked from accessing them.]
In banking.com, if a private resource (e.g., GCE, GKE) needs to communicate externally to the
Internet, Cloud NAT will need to be leveraged. Cloud NAT allows GCE instances and GKE
clusters without external IP addresses to send outbound, egress traffic to the Internet, and
receive the response traffic from the destination.
Cloud NAT has gateways that are associated with a subnet within the VPC of a project. The
gateways are configured to apply to the primary IP address range of a subnet. If a subnet does
not have an associated Cloud NAT gateway, VMs within that subnet cannot access the Internet.
Figure 12 depicts a Cloud NAT configuration, including the subnet implications.
[Figure 12: Cloud NAT configuration. A Cloud NAT gateway in Region 1 of the project's VPC network provides Internet egress for GCE instances (10.240.0.2 and 10.240.0.3) in a subnet (a.b.c.d/x) whose VMs have no external IP addresses.]
In Google Cloud, Cloud Armor protects against DDoS and web-based attacks. Cloud Armor has
security policies that protect applications by regulating which requests are allowed or denied
access to the underlying load balancers. The security policies are comprised of rules such as the
incoming request’s IP address, IP range, or region code. Cloud Armor acts at Google Cloud’s
network edge, either allowing or denying access to the external HTTP(S) GCLB that sits in the
Google point of presence. The security policies must be attached to the backend service of the
external HTTP(S) GCLB.
Table 9 below describes the recommended security policies that are applied to Cloud Armor
instances in banking.com.
Note: There are preconfigured rules that protect against common application layer attacks that
are also included in the table. It is recommended that these rules be carefully evaluated in
detective mode prior to enabling them in block mode. There may be scenarios that lead to
application degradation – hence, careful review and considerations are necessary.
Deny traffic from outside your region: used when a web application is not available in a specific region; rule expression origin.region_code == 'XX'
Remote file inclusion (RFI) attacks policy: defends against remote file inclusion attacks; preconfigured rule rfi-<version>
Remote code execution (RCE) attacks policy: defends against remote code execution attacks; preconfigured rule rce-<version>
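A minimal sketch of how these rule patterns behave follows. Real Cloud Armor rules are CEL expressions evaluated at Google's network edge; the region code and signature names below are illustrative assumptions:

```python
# Toy evaluator for the Cloud Armor rule patterns in the table above.
BLOCKED_REGIONS = {"XX"}                       # placeholder region codes
PRECONFIGURED = {"rfi-stable", "rce-stable"}   # e.g., rfi-<version>, rce-<version>

def decide(request):
    """Return a decision for a simplified request dict."""
    if request.get("region_code") in BLOCKED_REGIONS:
        return "deny-region"
    if request.get("matched_signature") in PRECONFIGURED:
        # As the note above recommends, run preconfigured WAF rules in
        # detective (preview) mode before switching them to block mode.
        return "log-only"
    return "allow"

print(decide({"region_code": "XX"}))                                     # deny-region
print(decide({"region_code": "US", "matched_signature": "rfi-stable"}))  # log-only
```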
Figure 13 depicts how Cloud Armor functions with the global, network edge external HTTP(S) LB.
[Figure 13: Cloud Armor at the network edge. Untrusted Internet traffic passes through Cloud Armor before reaching the external HTTP(S) load balancer, which forwards allowed requests to a GCE instance (10.240.0.2) in the example project's VPC subnet (a.b.c.d/24).]
Moreover, Google Cloud Packet Mirroring can be used as a part of Google Cloud IDS to detect
network intrusions. The Google Cloud packet mirroring forwards all network traffic from your
Compute Engine VMs or Google Cloud clusters to a designated address. FIs can also use third-
party tools, such as IDS from Palo Alto Networks or Zeek (formerly Bro), for additional network security.
“VPC Flow Logs samples each VM's TCP, UDP, ICMP, ESP, and GRE flows. Both inbound and
outbound flows are sampled. These flows can be between the VM and another VM, a host in
your on-premises data center, a Google service, or a host on the internet. If a flow is captured
by sampling, VPC Flow Logs generates a log for the flow. Each flow record includes the
information described in the Record format section.”19
It is recommended that flow logs are turned on for each VM instance and GKE node.
3.2 In order to prevent use of legacy networks, a project should not have a legacy network configured. Remediation: delete the networks in legacy mode.
3.4 Ensure that RSASHA1 is not used for the key-signing key in Cloud DNS DNSSEC. Remediation: the algorithm used for key signing should be a recommended, strong one.
3.7 Ensure that VPC Flow Logs is enabled for every subnet in a VPC network. Remediation: set Flow Logs to On.
3.8 Ensure no HTTPS or SSL proxy load balancers permit SSL policies with weak cipher suites. Remediation: ensure that each target proxy entry in the Frontend table has an SSL policy configured.
After the foundational components of the Google Cloud environment are secured (i.e., the
resource hierarchy, organization policies, identity and access management, and networking), it
is equally important to secure the services inside of the Google Cloud projects that the FI is
leveraging. The services described in the subsequent sections are five of the more commonly
used services in Google Cloud, and include Google Compute Engine (GCE), Kubernetes Engine
(GKE), Cloud Storage, BigQuery, and Dataflow.
Without hardened services, the attack surface of a cloud environment drastically increases.
Often, the beginning stages of the cyber kill chain occur due to misconfigurations or
vulnerabilities on the services in a cloud environment. To mitigate the risk of security breaches,
services should be secure upon instantiation.
Note: Within Google Cloud, there are a large number of services and APIs that are offered to
customers. The services have varying purposes, and the usage of the services will depend on
the solution that the organization is pursuing. This section is non-exhaustive, and only includes
a subset of Google Cloud services.
Important: This technical paper does not delve into two important security controls for GCE
instances: CICD and Operating System (OS) patching. CICD is an automated deployment
method for services inside of Google Cloud. OS patching is essential preventative
maintenance that keeps the OS on VMs up to date, stable, and safe from malware and other
threats. For information on CICD pipeline deployment, refer to the December 2022 Google
Security Foundations Guide – this technical paper will not cover image-build pipelines or
image deployment.
Note: When GCE instances are created, Google Cloud provides an option to select an
encryption key management solution. Recommendations for this are included in Section
4.3.5.2.
4.3.3.1.3 OS Login
It is also important to properly configure the GCE instances. Project-wide SSH keys should be
disabled; instead, OS Login should be used in tandem with two-factor authentication (2FA).
This is recommended because project-wide SSH keys allow the owner of the key privileged
access to the Linux instances.
4.3.3.1.4 Images
On GCE, images provide the base operating environment for applications to run. The images are
integral for ensuring application deployment can scale quickly and reliably. Images can also be
used to archive application versions for business continuity and disaster recovery.
Images should be provisioned with the most up-to-date patches to reduce the possibility of
vulnerability exploitation. The patches should be applied on a regular basis using a
golden standard image. In addition, teams in the organization may need
to share images between projects. This requires the compute.imageUser,
compute.instanceAdmin, and compute.storageAdmin roles. These roles should be associated
with an image user group mentioned in Section 4.3.1.
It is also recommended that FIs use trusted image policies. Trusted image policies restrict your
project members so that they can create boot disks only from images that contain approved
software that meets your policy or security requirements. Trusted images are set using an org
policy constraint.
Lastly, as recommended in the December 2022 Google Security Foundations Guide, in cases
where the Compute Instance Admin (v1) role (roles/compute.instanceAdmin.v1) is required by
users or groups, you can create a custom role that has all the permissions of the Compute
Instance Admin (v1) role apart from the compute.instances.setTags permission. Someone with
this custom role can't add or remove tags on VMs; the tags must be added by a controlled
mechanism, depending on workload type.
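The custom-role derivation above can be sketched as a simple set difference. The permission list here is a tiny, illustrative subset; the real Compute Instance Admin (v1) role carries many more permissions:

```python
# Sketch of deriving a custom role from Compute Instance Admin (v1) with
# compute.instances.setTags removed. The permission set is an illustrative
# subset of the real role, not the full list.
INSTANCE_ADMIN_V1 = {
    "compute.instances.create",
    "compute.instances.delete",
    "compute.instances.setTags",
    "compute.instances.start",
    "compute.instances.stop",
}

# The custom role keeps everything except the tag-management permission,
# so tags can only be set through a controlled mechanism.
custom_role = sorted(INSTANCE_ADMIN_V1 - {"compute.instances.setTags"})
print(custom_role)
```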
4.3.3.1.6 Logging
As discussed in Section 4.3.2.9, VPC Flow Logs should be turned on for GCE instances. VPC
Flow Logs provides FIs with real-time visibility into network throughput and performance.
Using Flow Logs, FIs can understand network usage, optimize network traffic expenses, and
can use the logs for security analysis and network forensics.
Flow Logs work for multiple use-cases and traffic patterns, such as VM-to-VM in the same VPC,
VM to external flows, VM-to-VM in a Shared VPC, etc. For more information on VPC Flow Log
traffic patterns, refer to the Traffic pattern examples documentation. For enablement of flow
logs, refer to the Enabling VPC Flow Logs documentation.
Note: Logging through a centralized SIEM is discussed in Section 4.4.3.
GKE is structured slightly differently from the standard project structure within GCE and has
different networking concepts. A cluster consists of at least one control plane and multiple
worker machines called nodes. Within the nodes are pods, the smallest deployable
objects in GKE. Pods contain one or more containers. Figure 14 depicts the logical structure of
GKE's architecture.
[Figure 14: Logical structure of GKE's architecture. A project contains a VPC network; within Region 1, a subnet (a.b.c.d/24) hosts a cluster; the cluster contains nodes, which hold namespaces, which in turn contain pods.]
There are several controls that need to be enabled on a new GKE instance to ensure it is secure,
and Google Cloud makes this simple. Google provides GKE with several unique security controls
above what is available in raw Kubernetes cluster deployments. The security controls discussed
in the remainder of the section include the following:
Moreover, Google has documentation that lists best practices for GKE instances, including
Hardening your cluster's security, PCI DSS compliance on GKE, and Introducing GKE Autopilot: a
revolution in managed Kubernetes (which describes automation and management for GKE
nodes). Materials in the sections below pull from concepts discussed in these documents
to synthesize their contents.
GKE also has network policies. Network policies are a method to implement micro-segmentation
and can restrict lateral movement within a cluster. It is recommended to enable network policies
when creating the GKE cluster. Without network policies in place, all pod-to-pod traffic is allowed
by default.
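A common starting point is a default-deny policy that blocks all pod-to-pod traffic until specific flows are allowed. The dict below mirrors the shape of the standard Kubernetes NetworkPolicy manifest; the namespace name `prod` is an assumption:

```python
# Default-deny NetworkPolicy, expressed as the dict equivalent of the
# standard Kubernetes YAML manifest (serialize and apply with kubectl).
# The namespace name "prod" is a hypothetical example.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-all", "namespace": "prod"},
    "spec": {
        # An empty podSelector selects every pod in the namespace; naming
        # both policyTypes with no allow rules denies all traffic.
        "podSelector": {},
        "policyTypes": ["Ingress", "Egress"],
    },
}
print(default_deny["spec"]["policyTypes"])
```

Workload-specific allow policies are then layered on top of this baseline, which is how micro-segmentation within a cluster is typically built up.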
The last element of networking is namespaces. Namespaces can logically segment instances
inside of a GKE node pool. In general, usage of both namespaces and network policies is
important for FIs that need to fully segment instances. A guide on implementing namespaces
can be found here.
When configuring the GKE instance, FIs should enable auto-upgrading nodes, auto-repairing
nodes, and cloud logging and monitoring. For vulnerability scanning, organizations can use
Google Container Analysis Vulnerability Scanning, which will be discussed in further detail in
Section 4.3.4.2. It is also recommended to leverage Container Threat Detection to monitor the
state of container images (refer to Section 4.4.1.2.2).
4.3.3.2.5 Encryption
GKE supports customer managed encryption keys (CMEK) so FIs can control the keys
themselves if they choose to (rather than rely on the default encryption). This is important for
FIs since GKE supports encryption key management (EKM) for customers who want to use and
manage their own fully external key management system. Encryption is discussed further in
Section 4.3.5.3.
4.3.3.2.6 Logging
As discussed in Section 4.3.2.9, and like GCE instances, VPC Flow Logs should be turned on
for GKE. VPC Flow Logs provides FIs with real-time visibility into network throughput and
performance. Using Flow Logs, FIs can understand network usage, optimize network traffic
expenses, and can use the logs for security analysis and network forensics.
Flow Logs work for multiple use-cases and traffic patterns in GKE, such as pod to cluster IP flow,
GKE external load balancer flows, GKE ingress flows, and pod to external flows. For more
information on VPC Flow Log traffic patterns for GKE, refer to the second half of the Traffic
pattern examples section in Google’s documentation. For enablement of flow logs, refer to the
Enabling VPC Flow Logs documentation.
As an example, insecure and misconfigured storage services were the culprit in several cloud-
based attacks in the past.20 If bucket read/write privileges are acquired, an attacker can list
assets, modify existing content, create new content, induce denial of service (modify objects
to prevent public loading), and exfiltrate the data. This jeopardizes the availability of the
services and the integrity of the data. Since FIs often store sensitive data in the cloud, it is
imperative that they secure their cloud storage instances.
Securing Cloud Storage involves the combination of multiple security controls. The security
controls discussed in the remainder of the section include the following:
Note: Since Cloud Storage does not have VPC Flow Logs, no specifics are discussed for the
Logging section – and is subsumed by the content in Section 4.4.2.
VPC Service Controls can also be leveraged at the project level to segment and isolate the
Cloud Storage instances from resources on other projects. The VPC Service Controls enable
the customer to limit which projects (and which services in those projects) can access the data
stored on Google Cloud Storage (essentially acting as an allow list of services).
4.3.3.3.4 Encryption
Encryption requirements are becoming increasingly stringent through compliance
requirements such as PCI DSS and GDPR, adding complexity to properly encrypting data in the
cloud. Cloud Storage uses Cloud KMS to encrypt stored data. Cloud KMS offers a range of
cryptographic key options and offers encryption management tools for security and
compliance purposes. Encryption will be discussed further in Section 4.3.5.2.
4.3.3.4 BigQuery
BigQuery is an enterprise data warehouse that can store and query massive datasets through
the enablement of SQL queries on Google’s infrastructure. This can be used for analysis of
many types of data (log data, IoT data, financial processing data, etc.). BigQuery is functionally
composed of three components: storage, ingestion, and querying. Data can be stored
directly on BigQuery, or it can be ingested from a different source (e.g., Cloud Storage
instances, Cloud Dataflow instances, etc.). The data is then queried and analyzed inside of
BigQuery.
Since BigQuery often involves the storage (or ingestion) and analysis of potentially sensitive
data, it is integral to secure the BigQuery instances. The security controls discussed in the
remainder of the section include the following:
Note: Since BigQuery does not have VPC Flow Logs, no specifics are discussed for the
Logging section – and Logging content is subsumed by Section 4.4.2.
There are certain use-cases when row-level security should be leveraged and there are multiple
methods for securing data within rows of a BigQuery instance. For example, FIs can use
authorized views, row-level access policies, or store data in separate tables. The Google
BigQuery documentation includes a section that describes when to use row-level security
compared to the other available alternatives.
With column-level security, organizations should use authorized views to ensure that table- and
dataset-level IAM policies are used to confirm proper access. Figure 16 depicts how authorized
views restrict access to sensitive data through the principle of least privilege.
As an example, front office bank clerks can query user account balances but, through column-
level security, may not have access to query the user’s entire wealth portfolio.
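The clerk example can be modeled as a simple column allowlist. Role and column names here are illustrative; in BigQuery this control is enforced with authorized views or policy tags, not application code:

```python
# Toy model of the column-level control described above: a clerk can read
# the balance column but not the wealth portfolio column. Names are
# hypothetical examples.
COLUMN_ACL = {
    "clerk":   {"account_id", "balance"},
    "advisor": {"account_id", "balance", "wealth_portfolio"},
}

def visible_columns(role, requested):
    """Return only the requested columns the role is permitted to see."""
    return [c for c in requested if c in COLUMN_ACL.get(role, set())]

print(visible_columns("clerk", ["account_id", "balance", "wealth_portfolio"]))
```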
4.3.3.4.3 Encryption
BigQuery automatically encrypts all data before it is written to disk. By default, encryption keys
are used to protect organization’s data. BigQuery also supports customer managed encryption
keys (CMEK) so customers can control the keys themselves if they choose to (rather than rely
on the default encryption).
This is important for FIs since BigQuery supports encryption key management (EKM) for
customers who want to use and manage their own fully external key management system.
Encryption will be discussed further in Section 4.3.5.2.
4.3.3.5 Dataflow
Cloud Dataflow is a service that processes datasets and can read, transform, and write data.
After the data has been transformed, the writing occurs in an external sink (sinks can include a
Pub/Sub topic, Cloud Storage, BigQuery, etc.). To deploy Cloud Dataflow, it is recommended
to set up a pipeline. Pipelines can be based on templates or SQL. For information and
recommendations on deploying Cloud Dataflow pipelines, refer to the Tips and tricks to get
your Cloud Dataflow pipelines into production guide.
Cloud Dataflow has key features that make its usage practical and appealing for FIs. Cloud
Dataflow is useful for capturing, processing, and analyzing data from systems whose data is in
a format that is not easily analyzed (e.g., websites, mobile apps, IoT devices, etc.). For
example, an FI could use Cloud Dataflow for pulling data from scanned images of contractual
agreements for loans. The security controls discussed in the remainder of the section include
the following:
Note: Since Cloud Dataflow does not have VPC Flow Logs, no specifics are discussed for the
Logging section – and Logging content is subsumed by Section 4.4.2.
4.3.3.5.1 Networking
The primary networking components that contribute towards securing Cloud Dataflow
instances have been discussed in Section 4.3.2. However, note that it is recommended to
store Cloud Dataflow instances in a separate VPC from other resources that should not have
access to the data being processed by Cloud Dataflow.
VPC Service Controls (discussed in Section 4.3.2.5) should be leveraged to segment and
isolate the Cloud Dataflow instances by their projects from undesired and insecure
connections. Since Cloud Dataflow reads from and writes to a designated sink, it is important
to ensure the resources can communicate with one another – accentuating the importance of
proper network segmentation and architecture design. Refer to the Accessing Google Cloud
resources across multiple Google Cloud projects guide for more information on how to
connect resources in different projects.
When running a Cloud Dataflow pipeline, two service accounts are used to manage the security
and permissions: the dataflow service account and the controller service account. The
permissions on the dataflow service account should not be adjusted. If the dataflow service
account loses permissions on the project, Cloud Dataflow cannot perform management tasks.
By default, the Compute Engine service account is used as the controller service account. It is
recommended that these permissions are reviewed and scoped down to only the necessary
permissions for the job function. Refer to the Security and permissions for pipelines on Google
Cloud guide for more information on these service accounts' functions.
4.3.3.5.3 Encryption
Data on Cloud Dataflow is encrypted at rest and in transit. All communication with Google
Cloud sources and sinks is encrypted and is carried over HTTPS. All inter-worker
communication occurs over a private network and is subject to your project's permissions and
firewall rules. Refer to the Data access and security guide for more information on Cloud
Dataflow pipeline data security. Encryption will be discussed further in Section 4.3.5.2.
FIs typically have sensitive system information (e.g., binaries, config data, logs, etc.) stored in
files.25 When an attacker compromises these files, it opens the possibilities to data
exfiltration/manipulation or system/application compromise. By leveraging FIM on GCE and GKE
instances, FIs can protect against these types of attacks. In addition, PCI DSS requirements 5.1,
5.2, and 11.5 mandate the use of FIM on any in-scope host – accentuating the importance of
having FIM in Google Cloud. The next sections describe how to enable FIM on Container
instances and Compute Engine instances in Google Cloud.
In addition, organizations should lock down containers so only specific, allowed folders have
write-access. This is performed by running the containers as a non-root user and using file
system permissions to prevent write access to all but the working directories within the
container file system. Refer to the Installing antivirus and file integrity monitoring on Container-
Optimized OS guide for more information on FIM with GKE.
For vulnerability scanning on App Engine, GKE, and/or GCE web applications – Web Security
Scanner should be used. Using Web Security Scanner, FIs can identify security vulnerabilities
on their services. There are two forms of scanning: managed and custom. Managed scans
should be used to centrally manage vulnerability scanning for all projects. Custom scans
should be used for projects that require more granular scanning. Both custom scans and
managed scans are available in the Security Command Center (refer to Section 4.4.1)
after configuring the Web Security Scanners. For additional best practices, refer to the
Overview of Web Security Scanner Best Practices.
For image vulnerabilities, Google provides several security services to help build security into
the CI/CD pipeline. To identify vulnerabilities in your container images, FIs should use Google
Container Analysis Vulnerability Scanning. When a container image is pushed to Google
Container Registry (GCR), vulnerability scanning automatically scans images for known
vulnerabilities and exposures from known CVE sources. Vulnerabilities are assigned severity
levels (critical, high, medium, low, and minimal) based on CVSS scores. For the broader
components of the Google Cloud environment, such as Cloud Monitoring and Logging, Cloud
Storage, Cloud SQL, IAM, etc. – Security Health Analytics should be leveraged. Security
Health Analytics is a built-in service in Security Command Center and will be discussed in
Section 4.4.1.1.
Note: The December 2022 Google Security Foundations Guide’s Section 12.3 describes how
Web Security Scanner can be used in an example deployment.
Another option is to use a third-party solution such as Qualys. You can refer to the Securing
Google Cloud Platform with Qualys report for an overview on how to integrate Qualys in
Google Cloud.
Although organizations are generally improving in the area of detecting unauthorized access
attempts, data breaches are ever evolving. A DLP solution reduces the likelihood of data
breaches, interruptions to workflows, and potential loss of integrity. FIs should deploy Cloud
DLP to mitigate risks associated with personal information exposure, IP exfiltration, data
visibility, and non-compliance.
Cloud DLP works by de-identifying sensitive data (such as PII) with techniques like redaction,
masking, tokenization, and other methods. The sensitive data is separated from the non-
sensitive data based on predefined information types (referred to as infoTypes). The
infoTypes include fields such as: PHONE_NUMBER, US_SOCIAL_SECURITY_NUMBER,
CREDIT_CARD_NUMBER, etc. A full list of the built-in infoTypes can be found here. In
addition, custom infoType detectors can be set. It is recommended that FIs use Cloud DLP
on any datasets that contain sensitive information to reduce chances of exposure.
For example, if banking.com receives information in Google Cloud containing names, credit
card numbers, etc. from clients, DLP can de-identify the data that should not be exposed to
certain systems. Choosing the proper de-identification technique depends on the type of data
being de-identified. It is recommended that the FI examine the type of data they are storing in
Google Cloud and determine which components should be de-identified. Based on the type of
data (and the business purpose), certain de-identification methods should be used (e.g.,
bucketing, tokenization, replacement, masking, redaction). Table 13 lists these de-
identification methods along with the level that they hide sensitive values.
Tokenization or pseudonymization: encrypts sensitive information with a KMS key and replaces it with a hash or token via the CryptoReplaceFfxFpeConfig or CryptoDeterministicConfig infoType transformations.
There are various use cases for using Cloud DLP – two of which are captured in Figure 17 below.
The left portion of the diagram shows de-identification and re-identification of PII. The depicted
solution creates an automated data pipeline to de-identify any sensitive data the FI may have
stored in Cloud Storage.
The right portion of the diagram shows automating the classification of data in Cloud Storage.
The depicted solution uses a data quarantine and classification system using Cloud Storage
triggers in Cloud Functions and Cloud DLP.
[Figure 17: Two Cloud DLP use cases. Left: security admins perform configuration management (key and DLP template) for an automated pipeline that de-identifies data uploaded to GCS, validates it, and re-identifies it when authorized. Right: file uploads to GCS trigger Cloud Functions, which invoke Cloud DLP to classify each file and move it to the proper classification bucket.]
4.3.5.2 Encryption
Encryption is the process of encoding information so that only authorized users and accounts
can decipher the ciphertext into plaintext and access the information. Using encryption, private
and sensitive data is protected, and the overall security of data transfer and storage is
improved. In general, there are two forms of data that are encrypted: data at rest and data in
transit. For encryption, no third-party solutions are necessary. Google encrypts data at rest and
in transit by default. However, there are some caveats for data in transit encryption (i.e., times
when data in transit encryption is not performed), which will be discussed in the Data in Transit
section below.
Block Storage
For requests entering from the internet or external IPs to Google Cloud services, Google’s front
end terminates traffic for incoming HTTP(S), TCP, and TLS proxy traffic; provides DDoS attack
countermeasures; and routes and load balances traffic to the cloud services. For more
information on encryption in transit in Google Cloud, refer to the Google documentation.
In addition to Cloud KMS, secrets can be managed with Secret Manager. Secret Manager stores
API keys, passwords, certificates, and other sensitive data. The data is stored in a single source
of truth, and all access to the data is audited and logged. In general, combining the usage of
Cloud KMS and Secret Manager is recommended to secure an FI's Google Cloud environment.
It is also recommended that FIs use customer-managed encryption keys (CMEK) to encrypt
service’s data at rest using KMS keys that are self-owned and managed. The data cannot be
decrypted without access to the key. It is important to note that CMEK does not necessarily
provide more security than the default encryption mechanisms, and it also results in additional
cost. The reason CMEK is recommended in banking.com is that it provides
additional capabilities, such as:28
3. Automatically or manually protecting the data by using a stricter encryption standard than AES-256.
The list of current Google Cloud services that have CMEK integrations can be found in the Using
Cloud KMS with other products document.
To learn more about how to leverage CMEK, refer to the Google Cloud Key Management Service
deep dive whitepaper.
Note: The December 2022 Google Security Foundations Guide provides additional guidance
on how to configure a KMS and Secret Manager solution, including resource organization,
infrastructure decisions, key lifecycle components, and several other areas.
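As a minimal sketch of this combination, the snippet below reads a secret version with the Secret Manager client library. The project and secret names are hypothetical, and a real environment additionally requires the `google-cloud-secret-manager` package and `roles/secretmanager.secretAccessor` on the caller.

```python
# Sketch only: project and secret IDs below are hypothetical examples.

def secret_version_name(project_id: str, secret_id: str, version: str = "latest") -> str:
    """Build the fully qualified resource name of a secret version."""
    return f"projects/{project_id}/secrets/{secret_id}/versions/{version}"

def access_secret(project_id: str, secret_id: str, version: str = "latest") -> str:
    """Fetch and decode a secret payload from Secret Manager."""
    # Imported lazily so the name-building logic stays testable offline.
    from google.cloud import secretmanager
    client = secretmanager.SecretManagerServiceClient()
    response = client.access_secret_version(
        name=secret_version_name(project_id, secret_id, version)
    )
    return response.payload.data.decode("utf-8")

print(secret_version_name("banking-prod", "db-password"))
# → projects/banking-prod/secrets/db-password/versions/latest
```

Keeping the resource-name construction separate from the API call makes it easy to audit exactly which secrets an application is entitled to read.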
The detective controls for banking.com are grouped together and discussed based on their
capabilities. The groupings consist of Security Command Center, Cloud Logging, Security
Information and Event Management (SIEM), Security Analytics, and Asset Inventory Management.
Each of the controls that contribute to this framework will be discussed in the subsequent
sections, along with a deep dive into how these controls function.
Figure 18 depicts a high-level overview of the collection of the detective controls for
banking.com.
[Figure 18: High-level overview of the detective controls for banking.com. Logs egress to an external SIEM service either through the DC1 NAT gateway or through the on-premises router (link-local address, e.g., 169.254.10.2). The Logging VPC spans two zones, each with a subnet hosting Dataflow VMs, connected to DC1 via VLAN attachments and Cloud Routers (a.b.c.d/24), with Security Command Center providing centralized findings.]
The SCC has two tier options: Standard and Premium. The Premium tier includes all the
features of the Standard tier along with additional detective services such as Event Threat
Detection (ETD), Container (Kubernetes) Threat Detection (KTD), and Security Health
Analytics (SHA).
It is recommended that organizations subscribe to the Premium tier, since ETD, KTD, and SHA
are essential detective controls for protecting FIs' Google Cloud environments from risks such as
data exfiltration. In addition, SHA includes monitoring and reporting for compliance standards
like PCI DSS. When setting up SCC, the default settings should not be changed unless a
specific security concern necessitates changes. For more information on the costs
associated with the Premium and Standard tiers, refer to the Google pricing guide.
In terms of functionality, the SCC aggregates findings from various security sources in the
Google Cloud environment. These sources can include SCC’s built-in services (ETD, KTD, SHA,
and Web Security Scanner), third-party partners (e.g., Qualys, Capsule8, etc.), or the
organization’s own security detector sources. SCC also includes detectors for Cloud DLP and
Anomaly Detection.
Note: If your sources include third-party partners, it is recommended that you generate
custom findings through the findings and sources APIs.
The security sources send their findings to SCC, where they are displayed on SCC's dashboard.
However, in order to act quickly on these findings, it is important that organizations leverage the
built-in Security Command Center Pub/Sub topic to set up alerting and notifications. The SCC
Pub/Sub topic can also be connected to the SIEM solution, which is discussed in Section 4.4.3.
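As an illustration, a subscriber to that topic might triage findings as follows. The severity threshold and the example payload are assumptions; the field names follow the SCC notification JSON, which wraps each finding in a `finding` object.

```python
import json

# Sketch: triaging a Security Command Center finding delivered via the SCC
# Pub/Sub notification topic. A real handler would typically run in Cloud
# Functions with a Pub/Sub trigger; the payload below is illustrative.

ALERT_SEVERITIES = {"HIGH", "CRITICAL"}  # assumed paging threshold

def should_alert(message_data: bytes) -> bool:
    """Return True when a finding in an SCC notification warrants an alert."""
    notification = json.loads(message_data.decode("utf-8"))
    finding = notification.get("finding", {})
    return finding.get("state") == "ACTIVE" and finding.get("severity") in ALERT_SEVERITIES

example = json.dumps({
    "finding": {
        "category": "PERSISTENCE_IAM_ANOMALOUS_GRANT",
        "severity": "HIGH",
        "state": "ACTIVE",
        "resourceName": "//cloudresourcemanager.googleapis.com/projects/banking-prod",
    }
}).encode("utf-8")

print(should_alert(example))  # → True
```

Filtering on state and severity in the subscriber keeps low-priority findings on the dashboard while routing only actionable ones to on-call staff.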
Figure 19 depicts the notification and alerting process.
[Figure 19: SCC notification and alerting. SCC sources send findings to the Security Command Center in the SCC project, which publishes notifications that reach the user.]
For a detailed implementation outline on setting up SCC Alerts and Notifications, refer to
Section 10.1.3 of the December 2022 Google Cloud Security Foundations Guide. The
Foundations Guide also includes examples of successful notification configurations and topic
patterns.
The Premium tier of SCC includes Security Health Analytics (SHA), which helps identify
misconfigurations and compliance violations inside of the organization’s Google Cloud
resources. The Vulnerabilities tab of the SCC dashboard summarizes and depicts the
misconfiguration and compliance findings that were identified by SHA – making it easily
actionable and digestible. These findings are presented as a table of recommendations which
are sorted by their severity and mapped to CIS, NIST-800-53, PCI-DSS, and ISO 27001
benchmarks. It is recommended that alerts and notifications are set up for SHA findings.
[Figure: Event Threat Detection ingests Audit Logs, Firewall Logs, VPC Flow Logs, and other logs; Container Threat Detection monitors pods and containers on GKE in an example project, with findings routed through Cloud Pub/Sub and Cloud Functions.]
It is recommended to create custom detection rules in addition to the default rules – should the
FI need additional security analyses. This can be accomplished by storing the log data in
BigQuery, and then running unique or recurring SQL queries that capture your threat models.
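As an illustration of this pattern, the hypothetical recurring query below flags IAM grants to consumer gmail.com accounts in exported Cloud Audit Logs. The dataset name and field paths are assumptions based on a typical BigQuery audit-log export schema and should be verified against the FI's actual tables.

```python
# Sketch: a recurring BigQuery query over exported Cloud Audit Logs that
# captures one example threat model (external Gmail grants). All names are
# assumptions tied to the FI's log-sink configuration.

DATASET = "banking_audit_logs"  # hypothetical log-sink destination dataset

IAM_GRANT_QUERY = f"""
SELECT
  timestamp,
  protopayload_auditlog.authenticationInfo.principalEmail AS actor,
  protopayload_auditlog.resourceName AS resource
FROM
  `{DATASET}.cloudaudit_googleapis_com_activity_*`
WHERE
  protopayload_auditlog.methodName = 'SetIamPolicy'
  AND EXISTS (
    SELECT 1
    FROM UNNEST(protopayload_auditlog.servicedata_v1_iam.policyDelta.bindingDeltas) AS delta
    WHERE delta.action = 'ADD' AND delta.member LIKE '%@gmail.com'
  )
ORDER BY timestamp DESC
"""

def run_query(sql: str):
    # Requires google-cloud-bigquery and credentials in a real environment.
    from google.cloud import bigquery
    return list(bigquery.Client().query(sql).result())
```

Scheduling such queries (for example, with BigQuery scheduled queries) turns the exported logs into a lightweight custom detection layer alongside ETD's default rules.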
To use ETD, logs must be enabled at the organization, folder, or project level. By default, these
logs are turned off. ETD should be used in areas of the environment that need additional security
analysis. Log sources for ETD include the following, along with a link to the enablement guides:
1. SSH logs/syslog
2. VPC flow logs
3. Cloud Audit Logs
a. Admin Activity logs are always written; you can't configure or disable them
b. Data Access logs
4. Cloud DNS logs
5. Firewall Rules logs
6. Cloud NAT logs
For an overview on ETD usage, refer to Google’s Using Event Threat Detection guide.
KTD works by using a detector to collect low-level behavior in the guest kernel. As depicted in
figure 21 (above), event information is forwarded to a detector service and analyzed to
determine whether there are any findings or incidents. If an incident is detected, it is
automatically written to SCC and, optionally, Cloud Logging. KTD contains the following
detectors by default:
Added Binary Executed
Description: A binary that was not part of the original container image was executed. If an added binary is executed by an attacker, it is a possible sign that an attacker has control of the workload and is executing arbitrary commands.
Detector: The detector looks for a binary being executed that was not part of the original container image or was modified from the original container image.

Added Library Loaded
Description: A library that was not part of the original container image was loaded. If an added library is loaded, it is a possible sign that an attacker has control of the workload and is executing arbitrary code.
Detector: The detector looks for a library being loaded that was not part of the original container image or was modified from the original container image.

Reverse Shell
Description: A process started with stream redirection to a remote connected socket. With a reverse shell, an attacker can communicate from a compromised workload to an attacker-controlled machine. The attacker can then command and control the workload to perform desired actions, for example, as part of a botnet.
Detector: The detector looks for `stdin` bound to a remote socket.
Table 16: KTD Detectors
There are currently no custom detectors for KTD. It is recommended that FIs use KTD on any
containers that have access to sensitive information. For an overview on KTD usage, refer to
Google’s Using Container Threat Detection guide.
Log entries come from a variety of sources. These can include logs from GCE instances
starting up, new data being uploaded to Cloud Storage, a call made to an ML API, or anything
that an application writes to standard or error output. After the logs are ingested and
aggregated, they are sent to a Log Sink and Logs Viewer for processing. Logs Viewer allows the
user to examine the log entries whereas the Log Sink can be used to export the logs to Cloud
Storage, BigQuery, or to Pub/Sub for archival, compliance/legal, and analysis purposes. Figure
21 depicts the logical architecture for Cloud Logging.
[Figure 21: Logical architecture for Cloud Logging. Audit Logs, Firewall Logs, and other logs flow through the Logging API to the Log Router in the Logging project (Common folder). The Log Router exports to Cloud Storage, Cloud Pub/Sub, and a Dataflow VM in the Logging VPC, which forwards logs to the external SIEM service.]
Note: The log retention period should be set in accordance with the FI's compliance
requirements.
Within the reference banking.com organization, project logs are centralized in a unified log
sink.
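A minimal sketch of such a sink, assuming a hypothetical organization ID and Pub/Sub destination, is the organization-level aggregated sink rendered below (`--include-children` pulls in logs from every child folder and project):

```python
# Sketch: rendering the gcloud command for an organization-level aggregated
# log sink. The organization ID, project, and topic names are hypothetical.

ORG_ID = "123456789012"  # hypothetical organization ID
DESTINATION = "pubsub.googleapis.com/projects/logging-prj/topics/org-logs"

def build_sink_command(org_id: str, destination: str, log_filter: str = "") -> str:
    """Render a gcloud command that creates an aggregated sink for an org."""
    cmd = (
        f"gcloud logging sinks create org-log-sink {destination} "
        f"--organization={org_id} --include-children"
    )
    if log_filter:
        # Optional filter narrows what the sink exports (e.g., audit logs only).
        cmd += f" --log-filter='{log_filter}'"
    return cmd

print(build_sink_command(ORG_ID, DESTINATION))
```

After creating the sink, the service account it runs as must be granted publish rights on the destination topic before any logs flow.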
Section 9 of the December 2022 Google Cloud Security Foundations Guide provides a
guide on implementation of a Cloud Logging structure, which includes:
Note: For the “Add a filter text to the Logging/Metrics” prevention methods – refer to the
Google Cloud CIS Benchmarks v1.1.0 for a guide on filter creation.
2.1 Ensure that Cloud Audit Logging is configured properly across all services and all users from a project – In Audit Logs, ensure that Admin Read, Data Write, and Data Read are enabled for all services and no exemptions are allowed.
2.2 Ensure that sinks are configured for all log entries – Covered in Cloud Logging.
2.3 Ensure that retention policies on log buckets are configured using Bucket Lock – In the Cloud Storage browser, ensure that the Retention policy is checked.
2.5 Ensure that the log metric filter and alerts exist for Audit Configuration changes – Add a filter text to the Logging/Metrics.
2.6 Ensure that the log metric filter and alerts exist for Custom Role changes – Add a filter text to the Logging/Metrics.
2.7 Ensure that the log metric filter and alerts exist for VPC Network Firewall rule changes – Add a filter text to the Logging/Metrics.
2.8 Ensure that the log metric filter and alerts exist for VPC network route changes – Add a filter text to the Logging/Metrics.
2.9 Ensure that the log metric filter and alerts exist for VPC network changes – Add a filter text to the Logging/Metrics.
2.10 Ensure that the log metric filter and alerts exist for Cloud Storage IAM permission changes – Add a filter text to the Logging/Metrics.
2.11 Ensure that the log metric filter and alerts exist for SQL instance configuration changes – Add a filter text to the Logging/Metrics.
Table 17: Logging CIS Benchmarks v1.1.0
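As a sketch of that filter-creation step, the snippet below collects illustrative filter strings for a few of the items above. The strings follow the CIS benchmark's published examples, but they should be verified against the current benchmark text before use.

```python
# Sketch: log-based metric filters for a few CIS v1.1.0 Logging items.
# Each filter backs a logs-based metric, which in turn backs an alert policy.

CIS_LOG_METRIC_FILTERS = {
    "2.5-audit-config-changes": (
        'protoPayload.methodName="SetIamPolicy" AND '
        'protoPayload.serviceData.policyDelta.auditConfigDeltas:*'
    ),
    "2.6-custom-role-changes": (
        'resource.type="iam_role" AND '
        '(protoPayload.methodName="google.iam.admin.v1.CreateRole" OR '
        'protoPayload.methodName="google.iam.admin.v1.DeleteRole" OR '
        'protoPayload.methodName="google.iam.admin.v1.UpdateRole")'
    ),
    "2.7-firewall-rule-changes": (
        'resource.type="gce_firewall_rule" AND '
        '(jsonPayload.event_subtype="compute.firewalls.insert" OR '
        'jsonPayload.event_subtype="compute.firewalls.patch" OR '
        'jsonPayload.event_subtype="compute.firewalls.delete")'
    ),
}

for name, log_filter in CIS_LOG_METRIC_FILTERS.items():
    # Example provisioning step (run manually or via IaC):
    #   gcloud logging metrics create <name> --log-filter='<filter>'
    print(name, "->", log_filter)
```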
Without a SIEM solution, an organization will not be alerted during attacks such as brute-force
login attempts, file copying to drives or emails, privilege escalation from the same
workstation, etc. The SIEM solution aggregates and consolidates data and provides a holistic
view of the network and infrastructure. It is a central security solution, and failure to
implement one may result in non-compliance and a vastly expanded attack surface.
Figure 22 depicts the SIEM solution for the banking.com architecture (which is the same as
figure 21).
[Figure 22: SIEM log export architecture for banking.com (same structure as figure 21): logs flow through the Logging API and Log Router in the Logging project to Cloud Storage, Cloud Pub/Sub, and a Dataflow VM that exports to the SIEM.]
In the figure, the logs are sent to a Pub/Sub topic, where they are filtered to a Dataflow instance
before being exported to Splunk. There are multiple ways to send logs to on-premises. Google
recommends sending the logs over Cloud NAT, as described in the Deploying production-ready
log exports to Splunk using Dataflow guide. Alternatively, the FI can send the logs over an
interconnect connection or VPN.
Refer to the Modern detection for modern threats: Changing the game on today’s threat actors
for more details on Chronicle and its benefits. Chronicle is recommended for use if FIs are
looking for an additional layer of security to help identify threats in their Google Cloud
environment.
Note: At the time of writing this paper, Chronicle should not be considered as a replacement
for a SIEM solution.
Note: For a list of asset types covered by CAIS, refer to the Supported Asset Types
documentation.
Security Command Center uses CAIS as its reference for asset inventory. CAIS enables FIs to
view their assets in one place and view historical discovery scans to identify new, modified, or
deleted assets – while Security Command Center gives enterprises consolidated visibility into
their Google Cloud assets across their organization. For example, with CAIS, enterprises can
quickly understand:
CAIS also monitors for configuration changes. If a configuration or state change triggers a policy
violation, FIs can either review detected results in SCC Premium or build their own custom policy
violation detections using Cloud Functions. For example, if external Gmail accounts are being
granted IAM permissions to banking.com projects, Cloud Functions can check and alert on the
changes. If the Cloud Function detects that the policy has been violated, the Cloud Function
reverts the change and sends a custom finding to the Security Command Center.
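A minimal sketch of that check, with an illustrative policy shape and without the revert and notification steps, might look like:

```python
# Sketch: the policy check a Cloud Function might run when CAIS reports an
# IAM change. The policy structure mirrors an IAM policy's bindings list;
# a real function would receive an asset-feed event and also revert the change.

def find_external_members(policy: dict, blocked_domain: str = "gmail.com") -> list:
    """Return (role, member) pairs whose member belongs to the blocked domain."""
    violations = []
    for binding in policy.get("bindings", []):
        for member in binding.get("members", []):
            if member.startswith("user:") and member.endswith("@" + blocked_domain):
                violations.append((binding["role"], member))
    return violations

policy = {
    "bindings": [
        {"role": "roles/viewer", "members": ["user:analyst@banking.com"]},
        {"role": "roles/editor", "members": ["user:attacker@gmail.com"]},
    ]
}
print(find_external_members(policy))  # → [('roles/editor', 'user:attacker@gmail.com')]
```

Any non-empty result would drive the revert and the custom SCC finding described above.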
Appendix
Accenture is a global professional services company with leading capabilities in digital, cloud
and security. Combining unmatched experience and specialized skills across more than 40
industries, we offer Strategy and Consulting, Interactive, Technology and Operations services
— all powered by the world’s largest network of Advanced Technology and Intelligent Operations
centers. Our 710,000 people deliver on the promise of technology and human ingenuity every
day, serving clients in more than 120 countries. We embrace the power of change to create value
and shared success for our clients, people, shareholders, partners and communities. Visit us at
www.accenture.com.
Copyright © 2022 Accenture. All rights reserved. Accenture and its logo are registered
trademarks of Accenture
Section 4.3.1.1 – Identity Federation: Inclusion of Azure AD for IdP federation instead of limiting to AD.
Section 4.3.1.1 – Access Approval: Inclusion of Access Approval for when resources need to be accessed by Google support personnel.
Section 4.3.1.2 – Additional Requirements: Additional compliance requirements from CIS Benchmarks v1.1.0 for IAM.
Section 4.3.2.2 – Hierarchical Firewalls: Added additional rules for SSH/RDP remote access, Web Server access, and Inner VPC traffic.
Section 4.3.2.4 – VPC Firewalls: Added additional rules for SSH/RDP remote access, Web Server access, and Inner VPC traffic.
Section 4.3.2.6 – Cloud NAT: Inclusion of specific details on Cloud NAT. Cloud NAT should be used on the projects hosting sensitive data for private resources that need egress traffic to the Internet.
Section 4.3.2.7 – Cloud Armor: Inclusion of Cloud Armor security policies for protection of web applications.
Section 4.3.2.8 – Additional Requirements: Additional compliance requirements from CIS Benchmarks v1.1.0 for Networking.
Section 4.3.3 – Service Security: Inclusion of security controls for a sub-set of commonly used services inside of Google Cloud.
Section 4.3.3.6 – Additional Requirements: Additional compliance requirements from CIS Benchmarks v1.1.0 for Services.
Section 4.3.4.1 – File Integrity Monitoring: Inclusion of methods for addressing the PCI DSS requirement of FIM on endpoints.
Section 4.3.5.1 – Data Loss Prevention: Inclusion of Cloud DLP for de-identification of sensitive data on resources.
Section 4.4.1.2.1 – Event Threat Detection: Event Threat Detection default rules and enablement. TBD – example of custom rules.
Section 4.4.1.2.2 – Container Threat Detection: Container Threat Detection detectors.
Section 4.4.2.1 – Additional Requirements: Additional compliance requirements from CIS Benchmarks v1.1.0 for Logging. TBD – Adding all required Logging/Metric filters to the main section.
Table 18: Variations from Google Foundations
ACRONYM DEFINITION
AD Active Directory
FI Financial Institution
IP Internet Protocol
OS Operating System
VM Virtual Machine
Table 19: Commonly Used Acronyms
Bibliography
1 IBM Security. (2020). Cost of a Data Breach Report 2020. https://www.ibm.com/security/digital-assets/cost-data-breach-report/#/
2 Ibid.
3 Cloud Security Alliance. (2020). Cloud Usage in the Financial Services Sector.
4 Ibid.
5 Google Cloud. (2021, December). Google Cloud security foundations guide. https://services.google.com/fh/files/misc/google-cloud-security-foundations-guide.pdf
6 "Sound Practices to Strengthen Operational Resilience", FRB, OCC, FDIC.
7 Godfrey, N., Hannigan, D., Knott, D., & Abel, J. (2021). Strengthening Operational Resilience in Financial Services by Migrating to Google Cloud. Google Cloud. https://services.google.com/fh/files/misc/google_cloud_operational_resilience_fin_serv.pdf
8 "Third-party dependencies in cloud services", Financial Stability Board.
9 NIST. (n.d.). Defense-In-Depth. Computer Security Resource Center. https://csrc.nist.gov/glossary/term/defense_in_depth
10 Stella, J. (2019, August 1). A Technical Analysis of the Capital One Cloud Misconfiguration Breach. Fugue. https://www.fugue.co/blog/a-technical-analysis-of-the-capital-one-cloud-misconfiguration-breach
11 Google Cloud. (2021, December). Google Cloud security foundations guide. https://services.google.com/fh/files/misc/google-cloud-security-foundations-guide.pdf
12 Ibid.
13 Ibid.
14 Google Cloud. (2021, June 16). Shared VPC overview. https://cloud.google.com/vpc/docs/shared-vpc
15 Khattar, M. (2020, July 11). Mitigating Data Exfiltration Risks in GCP using VPC Service Controls. Medium. https://medium.com/google-cloud/mitigating-data-exfiltration-risks-in-gcp-using-vpc-service-controls-part-1-82e2b440197
16 Verdejo, D. (2018, October 15). High availability NAT gateway at Google Cloud Platform with Cloud NAT. Medium. https://medium.com/bluekiri/high-availability-nat-gateway-at-google-cloud-platform-with-cloud-nat-8a792b1c4cc4
17 Google. (n.d.). Google Cloud Armor custom rules language reference. Google Cloud.
18 Google Cloud. (2021, August 10). Using VPC Flow Logs. https://cloud.google.com/vpc/docs/using-flow-logs
19 Google Cloud. (2021, August 10). VPC Flow Logs overview. https://cloud.google.com/vpc/docs/flow-logs
20 UpGuard Team. (2019, June 27). Data Warehouse: How a Vendor for Half the Fortune 100 Exposed a Terabyte of Backups. UpGuard. https://www.upguard.com/breaches/attunity-data-leak
21 Chakraborty, S. (2020, June 8). 5 ways to enhance your cloud storage security and data protection. Google Cloud. https://cloud.google.com/blog/products/storage-data-transfer/5-ways-to-enhance-your-cloud-storage-security-and-data-protection
22 Ibid.
23 Ibid.
24 Google. (2022). Introduction to column-level security. Google Cloud. https://cloud.google.com/bigquery/docs/column-level-security-intro
25 Google Cloud. (2021, December 19). Installing antivirus and file integrity monitoring on Container-Optimized OS. https://cloud.google.com/architecture/installing-antivirus-and-file-integrity-monitoring-on-container-optimized-os
26 Google Cloud. (n.d.). OS patch management. https://cloud.google.com/compute/docs/os-patch-management
27 Vergadia, P. (2020, October 30). Understanding Data Encryption in Google Cloud. Medium. https://medium.com/google-cloud/understanding-data-encryption-in-google-cloud-c36d9095fb38
28 Google Cloud. (2021, December 19). Customer-managed encryption keys (CMEK). https://cloud.google.com/kms/docs/cmek#cmek
29 CIS Benchmarks v1.1.0. (2020, March 11). CIS Google Cloud Platform Foundation Benchmark.