AGCE Unit 2 Notes


UNIT 2

CHAPTER 1

1. Interface Definition: A well-defined interface is crucial for an API's success. It involves specifying
the endpoints, request/response formats, supported methods (such as GET, POST, PUT, DELETE),
and data types. Clear documentation and the use of standard approaches (such as REST or GraphQL)
help developers understand and interact with the API effectively.
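
As an illustration of these points, here is a minimal sketch of one such interface in Python using
Flask (an assumption; the notes do not name any framework). It exposes a small resource with GET
and POST methods, JSON request/response formats, and explicit status codes; the resource name and
fields are hypothetical.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # In-memory store used only for illustration.
    BOOKS = {1: {"id": 1, "title": "Designing APIs"}}

    @app.route("/books/<int:book_id>", methods=["GET"])
    def get_book(book_id):
        # Return 404 when the resource does not exist.
        book = BOOKS.get(book_id)
        if book is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(book), 200

    @app.route("/books", methods=["POST"])
    def create_book():
        # Expect a JSON body such as {"title": "..."}.
        payload = request.get_json(force=True)
        new_id = max(BOOKS) + 1
        BOOKS[new_id] = {"id": new_id, "title": payload["title"]}
        return jsonify(BOOKS[new_id]), 201

    if __name__ == "__main__":
        app.run(port=8080)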

2. Authentication and Authorization: APIs often require authentication and authorization mechanisms
to ensure secure access and protect sensitive data. Authentication verifies the identity of the requesting
party, while authorization determines the actions and resources a user can access. Common methods
include API keys, OAuth, JSON Web Tokens (JWT), and role-based access control (RBAC).
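
A minimal sketch of token-based authentication plus a simple role check, assuming Python and the
PyJWT library (neither is prescribed by the notes); the secret key, user, and roles are hypothetical
placeholders.

    import datetime
    import jwt  # PyJWT library (assumed available)

    SECRET = "replace-with-a-strong-secret"  # hypothetical signing key

    def issue_token(user_id, role):
        # Authentication artifact: a signed token carrying identity and a role claim.
        claims = {
            "sub": user_id,
            "role": role,
            "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),
        }
        return jwt.encode(claims, SECRET, algorithm="HS256")

    def authorize(token, required_role):
        # Verify the signature and expiry (authentication), then check the role (authorization).
        try:
            claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        except jwt.PyJWTError:
            return False
        return claims.get("role") == required_role  # simple RBAC-style check

    token = issue_token("user-123", "editor")
    print(authorize(token, "editor"))  # True
    print(authorize(token, "admin"))   # False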

3. Management and Scalability: Effective API management involves tasks such as versioning, traffic
control, performance optimization, and scalability. Versioning allows for introducing changes without
breaking existing clients. Traffic control techniques include rate limiting to prevent abuse and
throttling to manage resource consumption. Performance optimization focuses on caching, load
balancing, and efficient data retrieval. Scalability strategies involve horizontal scaling,
containerization, and cloud infrastructure to handle increased usage and load.
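
Rate limiting, mentioned above as a traffic-control technique, can be illustrated with the small
fixed-window limiter below. This is only a conceptual Python sketch; real deployments usually
configure rate limits in an API gateway or management layer rather than hand-writing them.

    import time
    from collections import defaultdict

    class FixedWindowRateLimiter:
        """Allow at most `limit` requests per client in each window of `window_seconds`."""

        def __init__(self, limit=100, window_seconds=60):
            self.limit = limit
            self.window = window_seconds
            self.counters = defaultdict(lambda: [0, 0.0])  # client -> [count, window_start]

        def allow(self, client_id):
            count, start = self.counters[client_id]
            now = time.time()
            if now - start >= self.window:
                # Start a new window for this client.
                self.counters[client_id] = [1, now]
                return True
            if count < self.limit:
                self.counters[client_id][0] += 1
                return True
            return False  # over the limit: the caller should return HTTP 429

    limiter = FixedWindowRateLimiter(limit=5, window_seconds=1)
    print([limiter.allow("client-a") for _ in range(7)])  # last two are False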

4. Logging and Monitoring: Logging captures relevant events, errors, and data, which helps in
debugging and auditing. Monitoring involves collecting and analyzing metrics related to API
performance, availability, response times, and error rates. This information enables proactive
troubleshooting and capacity planning.
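
A brief illustrative sketch of the logging side, using only Python's standard library (an assumption;
the notes do not prescribe a tool): it records events and errors and also measures a response time
that a monitoring system could collect as a metric.

    import logging
    import time

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
    log = logging.getLogger("orders-api")

    def handle_request(order_id):
        start = time.perf_counter()
        try:
            # ... real request handling would go here ...
            log.info("order processed order_id=%s", order_id)
        except Exception:
            # Captured errors support debugging and auditing.
            log.exception("order failed order_id=%s", order_id)
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            # A response-time value a monitoring system could collect as a metric.
            log.info("request latency_ms=%.1f", elapsed_ms)

    handle_request("A-1001")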

Cloud Endpoints

1. Protection: Cloud Endpoints helps secure your APIs by providing features like authentication and
authorization, ensuring only authorized users can access your API, and protecting sensitive data from
unauthorized access.

2. Speed: Cloud Endpoints is designed to handle high volumes of API traffic efficiently, ensuring fast
response times and reliable performance for your users, even during peak loads.
3. Monitoring and Logging: Cloud Endpoints integrates with monitoring and logging services,
allowing you to track API usage, measure performance metrics, and gain insights into the health and
behavior of your APIs, helping you identify and troubleshoot issues effectively.

4. Integration: Cloud Endpoints seamlessly integrates with other Google Cloud services, such as
serverless platforms like Cloud Functions and container orchestration systems like Google
Kubernetes Engine (GKE), allowing you to build and deploy your APIs with ease and leverage the
power of the broader Google Cloud ecosystem.
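
As a small, hedged example of the protection point: Cloud Endpoints can require an API key on
each request. The Python snippet below shows the client side with the requests library; the URL
and key are hypothetical placeholders.

    import requests

    # Hypothetical Endpoints-managed API and key; substitute real values.
    API_URL = "https://my-api.endpoints.my-project.cloud.goog/v1/echo"
    API_KEY = "YOUR_API_KEY"

    # Endpoints can be configured to reject requests without a valid key.
    response = requests.post(API_URL, params={"key": API_KEY}, json={"message": "hello"})
    print(response.status_code, response.json())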

Apigee Edge
Apigee Edge is a platform that helps developers create and manage APIs. It provides a range of tools
and functionalities to design, build, and secure APIs, making it easier to connect different applications
and services. With Apigee Edge, developers can ensure that their APIs are reliable, secure, and
scalable, while also gaining insights into API usage and performance.
API gateway

An API gateway serves as a single entry point for clients to access multiple backend services or APIs.
It provides a centralized and consistent interface, handling tasks such as request routing, protocol
translation, security, and rate limiting. By acting as a gatekeeper, the API gateway enhances the
efficiency, security, and scalability of API interactions by consolidating common functionalities and
simplifying client-server communication.
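
The following is a purely illustrative Python sketch (not a real gateway product) of the
request-routing idea: a single entry point maps a path prefix to one of several backend services and
applies a shared security check before forwarding. The service names and the key check are
hypothetical.

    # Path prefix -> backend base URL (hypothetical services).
    ROUTES = {
        "/users": "http://user-service.internal:8080",
        "/orders": "http://order-service.internal:8080",
    }

    def route_request(path, api_key, valid_keys={"demo-key"}):
        # Centralized security check applied to every request.
        if api_key not in valid_keys:
            return 401, "unauthorized"
        # Request routing: pick the backend whose prefix matches the path.
        for prefix, backend in ROUTES.items():
            if path.startswith(prefix):
                return 200, f"forward to {backend}{path}"
        return 404, "no backend for this path"

    print(route_request("/orders/42", "demo-key"))
    print(route_request("/payments/7", "demo-key"))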

Pub/Sub:
Pub/Sub, short for Publish/Subscribe, is both a messaging pattern and a scalable messaging service
provided by Google Cloud Platform (GCP). It enables asynchronous and decoupled communication
between publishers and subscribers: publishers send messages to a specific topic, and subscribers
interested in that topic receive those messages. Pub/Sub provides reliable, scalable delivery to every
interested subscriber, so the communicating systems can scale independently. This supports real-time
data streaming, event-driven architectures, and inter-service communication in distributed systems.
Pub/Sub also simplifies global messaging and event ingestion. It handles the complexities of message
delivery across regions and data centers, so different applications and services can exchange data and
events globally over a robust, scalable platform. This in turn simplifies ingesting and processing
events and integrating systems in real time.

• Pub/Sub serves as a messaging system that facilitates communication between data gathering
systems and processing systems.
• Data gathering systems act as publishers, sending data messages to specific topics in Pub/Sub.
• Processing systems act as subscribers, subscribing to those topics to receive and process the
data messages.
• Pub/Sub ensures reliable and asynchronous delivery of data, decoupling the data gathering and
processing systems.
• This enables scalable and efficient data processing pipelines, allowing for real-time or batch
processing of large volumes of data.

• Pub/Sub acts as a buffer or intermediate layer, decoupling the sending and receiving of
messages across software applications.
• It allows applications to send messages to specific topics, and other applications can subscribe
to those topics to receive the messages.
• This buffering capability provided by Pub/Sub enables asynchronous and scalable
communication, enhancing the reliability and flexibility of interactions between software
applications.

Within the big data processing model, Pub/Sub typically sits at the initial stage, where it acts as a
central hub for ingesting and collecting data from various sources. It enables real-time data
streaming and event-driven architectures by providing a reliable and scalable messaging system.
After the data is ingested through Pub/Sub, it can be processed and analyzed by various big data
processing frameworks or services, such as Apache Spark, Apache Beam, or Google Cloud
Dataflow. These processing components can subscribe to Pub/Sub topics, receive the data messages,
and perform further computations or transformations on the data.
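
A hedged sketch of both sides of this flow with the google-cloud-pubsub Python client; the project,
topic, and subscription names are placeholders and are assumed to exist already.

    from concurrent.futures import TimeoutError
    from google.cloud import pubsub_v1

    PROJECT = "my-project"  # placeholder

    # Publisher: a data-gathering system sends messages to a topic.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT, "sensor-readings")
    future = publisher.publish(topic_path, b'{"temp_c": 21.5}', source="sensor-42")
    print("published message id:", future.result())

    # Subscriber: a processing system receives messages from its subscription.
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(PROJECT, "sensor-readings-sub")

    def callback(message):
        print("received:", message.data, message.attributes)
        message.ack()  # acknowledge so Pub/Sub does not redeliver

    streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
    try:
        streaming_pull.result(timeout=30)  # process messages for up to 30 seconds
    except TimeoutError:
        streaming_pull.cancel()
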
CHAPTER 2

Google’s infrastructure security layers:

1. Operational Security: Google implements robust operational security practices, including security
awareness training, access controls, incident response protocols, and regular security assessments, to
safeguard against internal threats and ensure the security of its operations.

2. Internet Communication: Google employs encryption and secure communication protocols, such
as HTTPS, to protect data transmitted over the internet, preventing interception and unauthorized
access to sensitive information.

3. Storage Services: Google's storage services, such as Google Cloud Storage, utilize encryption at
rest and access controls to protect data stored in its infrastructure, ensuring the confidentiality and
integrity of stored information.

4. User Identity: Google provides strong user identity and access management controls, including
multi-factor authentication and identity federation, to verify and protect user identities, preventing
unauthorized access to user accounts and resources.

5. Service Deployment: Google has robust deployment processes that include security checks and
testing to ensure that services and applications are deployed securely, minimizing vulnerabilities and
potential security risks.

6. Hardware Infrastructure: Google maintains a secure hardware infrastructure, employing measures
such as supply chain security, tamper-evident designs, and rigorous hardware testing to ensure the
integrity and security of the underlying hardware components.

There are several encryption options:

Customer-Managed Encryption Keys (CMEK):-

• Customer-Managed Encryption Keys (CMEK) is a feature provided by Google Cloud Platform
(GCP) that allows customers to have control over the encryption keys used to protect their
data.
• With CMEK, customers create and manage their own encryption keys in Cloud Key
Management Service (Cloud KMS), instead of relying solely on Google's default encryption.
• These keys are then used to encrypt data stored in various GCP services, such as Google Cloud
Storage, Google BigQuery, or Google Cloud Datastore.
• By managing their encryption keys, customers can have greater control and ownership over
the encryption process. They can rotate, revoke, or destroy keys as needed, ensuring data
security and compliance with their specific requirements.
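
As a hedged illustration of CMEK in practice (all resource names are placeholders), the snippet
below uses the google-cloud-storage Python client to set a Cloud KMS key as a bucket's default
encryption key, so newly written objects are encrypted with that customer-managed key.

    from google.cloud import storage

    # Placeholder names; the Cloud KMS key must already exist and the
    # Cloud Storage service agent must have permission to use it.
    KMS_KEY = (
        "projects/my-project/locations/us-central1/"
        "keyRings/my-ring/cryptoKeys/my-cmek-key"
    )

    client = storage.Client()
    bucket = client.get_bucket("my-cmek-bucket")
    bucket.default_kms_key_name = KMS_KEY
    bucket.patch()  # persist the change

    # Objects uploaded from now on are encrypted with the customer-managed key.
    bucket.blob("report.csv").upload_from_string("col1,col2\n1,2\n")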

Customer-Supplied Encryption Keys (CSEK):-

• CSEK allows customers to provide their own encryption keys to protect their data stored in
the cloud. Instead of relying on the cloud service provider's default encryption keys,
customers generate and manage their own keys.
• By using customer-supplied encryption keys, customers retain full control and ownership of
the keys, ensuring that the cloud service provider cannot access their encrypted data without
the keys.
• CSEK provides an additional layer of security and addresses concerns about data privacy and
compliance, particularly for organizations with stringent regulatory requirements.
• With CSEK, customers have the flexibility to manage and rotate their encryption keys as
needed, providing them with enhanced control and security over their data stored in the
cloud.
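
A hedged sketch of CSEK with the google-cloud-storage Python client (bucket and object names are
placeholders): the customer supplies a raw 256-bit key at write time and must present the same key
again to read the object.

    import os
    from google.cloud import storage

    # The customer generates and keeps this 32-byte AES-256 key; Google does not store it.
    encryption_key = os.urandom(32)

    client = storage.Client()
    bucket = client.bucket("my-csek-bucket")  # placeholder bucket

    # Upload the object encrypted with the customer-supplied key.
    blob = bucket.blob("secret.txt", encryption_key=encryption_key)
    blob.upload_from_string("sensitive data")

    # Reading the object back requires presenting the same key.
    same_blob = bucket.blob("secret.txt", encryption_key=encryption_key)
    print(same_blob.download_as_bytes())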

Persistent disk encryption with Customer-Supplied Encryption Keys (CSEK):-

• With CSEK, customers generate and manage their encryption keys using their preferred key
management system or service outside of GCP.
• When creating a persistent disk in GCP, customers can specify their encryption key, which is
then used to encrypt the data stored on the disk.
• GCP's infrastructure encrypts the persistent disk using the customer-supplied encryption key,
ensuring that the data remains encrypted at rest and providing an additional layer of
protection.
• Since GCP does not have access to the customer's encryption key, only the customer can
access and decrypt the data stored on the persistent disk, giving them complete control over
their data security and privacy.

Three types of IAM (Identity and Access Management) roles:

1. Basic Roles: These are broad, predefined roles that grant wide permissions at the project level.
Basic roles (formerly called primitive roles) include Owner, Editor, and Viewer. Owner has full
control and access to resources, Editor has permissions to modify resources, and Viewer has
read-only access.

2. Predefined Roles: These roles provide more granular permissions and are designed for specific
services or resource types. Predefined roles are created and maintained by GCP and cover a wide
range of services and actions. Examples include roles like Compute Instance Admin, Storage Object
Viewer, or BigQuery Data Viewer.

3. Custom Roles: Custom roles allow customers to define their own roles with specific permissions
tailored to their requirements. Customers can combine and assign permissions at a fine-grained level,
providing more flexibility and control over access to resources.
By using these three types of IAM roles, GCP users can assign appropriate access privileges to
individuals or service accounts, ensuring that they have the necessary permissions to perform their
tasks while maintaining security and governance over GCP resources.
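
As a hedged example of granting a predefined role on a single resource (names are placeholders), the
snippet below uses the google-cloud-storage Python client to add a Storage Object Viewer binding to
a bucket's IAM policy.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-example-bucket")  # placeholder

    # Read the current IAM policy (version 3 supports conditional bindings).
    policy = bucket.get_iam_policy(requested_policy_version=3)

    # Grant read-only object access to a single user.
    policy.bindings.append(
        {
            "role": "roles/storage.objectViewer",
            "members": {"user:analyst@example.com"},
        }
    )

    bucket.set_iam_policy(policy)
    print("Granted roles/storage.objectViewer on", bucket.name)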

CHAPTER 3
A Virtual Private Cloud (VPC):

• A Virtual Private Cloud (VPC) is a virtual network infrastructure provided by cloud service
providers, such as Google Cloud Platform (GCP). It enables users to create and manage their
own isolated virtual networks in the cloud.
• A VPC allows users to define their own logically isolated virtual network environment within
the cloud provider's infrastructure. It provides control over IP addressing, routing, and
network access policies.
• With a VPC, users can create subnets, which are subdivisions of the VPC with their own IP
ranges. These subnets can be used to group resources and apply specific network
configurations.
• A VPC also allows users to define firewall rules to control inbound and outbound traffic,
ensuring secure communication between resources within the VPC and controlling access from
external networks.
• By using a VPC, users can create and manage their own private network environment in the
cloud, offering flexibility, scalability, and security for their cloud-based applications and
services.

Primary networking products in Google Cloud:


1. Virtual Private Cloud (VPC): Create and manage your own isolated virtual network environment
in the cloud, with control over IP addressing, subnets, and firewall rules for secure communication.
2. Cloud Load Balancer: Distribute incoming network traffic across multiple instances or services to
ensure high availability and scalability for your applications.
3. Cloud CDN: Deliver content to users with low latency and high performance by caching and
serving it from Google's globally distributed network of edge locations.
4. Cloud Interconnect: Establish dedicated and reliable connections between your on-premises
network and Google Cloud, enabling high-bandwidth data transfers and private access to Google
services.
5. Cloud DNS: Manage and scale your domain name system (DNS) zones and records, providing
reliable and fast name resolution for your applications and services.
Routes and Firewall Rules in the Cloud

1. Default route: A catch-all route used when no specific match is found in the routing table, directing
network traffic to a default gateway or next hop for forwarding to other networks or the internet.

2. Subnet route: Defines a specific IP address range (subnet) and specifies the next hop or gateway
for traffic destined to that subnet, allowing for more precise routing within a network.

3. Static route: Manually configured route that provides explicit directions for network traffic based
on specific destination IP addresses or subnets, remaining unchanged until modified manually.

4. Dynamic route: Automatically updated route based on dynamic routing protocols, allowing for
dynamic calculation and adjustment of optimal paths for network traffic based on network conditions
and routing policies.

Firewalls play a crucial role in protecting virtual machine (VM) instances by preventing
unauthorized connections.
Firewalls act as a security barrier, filtering incoming and outgoing network traffic to VM instances.
They enforce access control policies, allowing only approved connections and blocking
unauthorized attempts, effectively safeguarding VM instances from potential threats and
unauthorized access.
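
The sketch below is a purely illustrative Python routine (not the Cloud firewall API) showing the
core idea: each incoming connection is checked against a list of allow rules, and anything that does
not match is blocked by default. The rules themselves are hypothetical.

    import ipaddress

    # Hypothetical allow rules: (protocol, port, source CIDR that may connect).
    ALLOW_RULES = [
        ("tcp", 443, "0.0.0.0/0"),   # HTTPS from anywhere
        ("tcp", 22, "10.0.0.0/8"),   # SSH only from the internal network
    ]

    def is_allowed(protocol, port, source_ip):
        # A connection is allowed only if some rule matches; everything else is denied.
        for rule_proto, rule_port, cidr in ALLOW_RULES:
            if (
                protocol == rule_proto
                and port == rule_port
                and ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr)
            ):
                return True
        return False  # default-deny

    print(is_allowed("tcp", 443, "203.0.113.9"))  # True: HTTPS allowed from anywhere
    print(is_allowed("tcp", 22, "203.0.113.9"))   # False: SSH blocked from the internet
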
Shared VPC versus VPC peering
Shared VPC lets multiple projects within the same organization share one centrally administered
VPC network, hosted in a designated host project. VPC peering instead connects two separate VPC
networks (which can belong to different organizations) so resources can communicate over internal
IP addresses, while each network keeps its own administration; peering is not transitive.
CHAPTER 4
Critical monitoring and management activities:
1. Performance monitoring: This involves continuously monitoring the system's performance
metrics, such as CPU usage, memory utilization, network throughput, and response times. It helps
identify performance issues, bottlenecks, and potential areas for optimization.
2. Logging and error-reporting: Logging captures important system events, errors, and informational
messages, providing a record of system activity. Error-reporting mechanisms notify administrators
or developers about critical errors, exceptions, or failures occurring within the system, enabling
timely investigation and resolution.
3. Tracing performance bottlenecks: Tracing involves analyzing and identifying performance
bottlenecks or inefficiencies within the system. It helps pinpoint specific components, functions, or
database queries causing delays or degradation in system performance, allowing for targeted
optimizations.
4. Real-time debugging: Real-time debugging facilitates the identification and resolution of issues
during system operation. It involves inspecting the system's live state, monitoring variables, and
tracing code execution to diagnose and fix problems in real-time, minimizing downtime and
improving system stability.
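
A hedged sketch of cloud-side logging using the google-cloud-logging Python client (the log name
and fields are placeholders): it writes one text entry and one structured entry of the kind Cloud
Logging indexes and makes searchable.

    from google.cloud import logging

    client = logging.Client()
    logger = client.logger("checkout-service")  # placeholder log name

    # A simple text entry.
    logger.log_text("Order processed", severity="INFO")

    # A structured entry; fields become queryable in Cloud Logging.
    logger.log_struct(
        {"event": "order_failed", "order_id": "A-1001", "latency_ms": 152},
        severity="ERROR",
    )

    print("Wrote entries to Cloud Logging")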

Cloud Monitoring
✓ Identify trends, prevent issues
✓ Reduce monitoring overhead
✓ Improve signal-to-noise
✓ Fix problems faster
Cloud Logging
✓ Seamlessly resolve issues
✓ Scalable and fully managed
✓ All cloud logs in one place
✓ Real-time insights
Error Reporting
✓ Quickly understand errors
✓ Automatic and real-time
✓ Instant error notification
✓ Popular languages
Cloud Trace
✓ Find performance bottlenecks
✓ Fast, automatic issue detection
✓ Broad platform support
Cloud Debugger
✓ Debug in production
✓ Multiple source options
✓ Collaborate while debugging
✓ Use your workflows
Cloud Profiler
✓ Low-impact production profiling
✓ Broad platform support
